entry_id: http://arxiv.org/abs/2307.02214v2
published: 20230705114229
title: Inflation in simple one-loop effective potentials of perturbative quantum gravity
authors: A. Arbuzov, D. Kuznetsov, B. Latosh, V. Shmidt
primary_category: gr-qc
categories: gr-qc, hep-th
We study inflation in scalar-tensor perturbative quantum gravity driven by a one-loop effective potential. We consider effective potentials generated by three models. The first model describes a single scalar field with a non-vanishing mass. The second model describes a massless scalar field with non-minimal coupling to the Einstein tensor. The third model generalises the Coleman-Weinberg model to the gravitational case. The first model can be consistent with the observational data for N ∼ 70 e-foldings. The second model can be consistent with the observational data for N ∼ 40 e-foldings. We did not find parameters that make the generalised Coleman-Weinberg model consistent with the observational data. We discuss the implications of these results and ways to improve them with other terms of the effective action.
§ INTRODUCTION
The inflationary phase of the universe's expansion is essential to cosmological evolution. It resolves several serious problems that otherwise arise in cosmological models <cit.>. Contemporary empirical data on the cosmic microwave background radiation and the large-scale structure of the universe provide some constraints on the inflationary parameters <cit.>. However, they are still insufficient to select a single model that best fits the full range of data. As a result, there is a large number of inflationary models constructed on very different foundations <cit.>.
One of the most promising approaches is to treat inflation as an effect caused by quantum corrections. Several models implement this approach directly or indirectly. Perhaps the best known is Starobinsky inflation <cit.>. This model modifies general relativity by including a quadratic curvature term that drives inflation. It can be mapped onto a scalar-tensor model with a non-trivial scalar field potential; in this parameterisation, inflation is driven by the well-known slow-roll mechanism <cit.>. The squared curvature term is strongly related to quantum gravitational corrections: at the one-loop level, quantum gravity universally generates operators that are quadratic in curvature <cit.>. On the cosmological background, these operators reduce to total derivatives, except for the curvature-squared operator, which is present in the Starobinsky model. This example further motivates us to explore inflation's roots in pure quantum effects.
The effective action formalism provides a tool for constructing inflationary models from quantum field theory considerations. The formalism, as we will discuss below, constructs an effective action Γ that depends on the vacuum expectation value of a scalar (or any other) field and describes the emergent classical dynamics generated by a quantum system <cit.>. The effective action can be calculated perturbatively by a resummation of one-particle irreducible diagrams. This paper only considers the leading contribution to the effective action at the one-loop level, so we will study one-loop effective actions.
An effective action contains both operators with derivatives and operators without derivatives. The part of the action that does not contain derivatives is usually called the effective potential. As mentioned above, inflation can be driven by a scalar field potential. Consequently, effective one-loop scalar field potentials provide models capable of driving inflation. We should note that inflation can also be driven by a non-minimal scalar field coupling to gravity, which is also present in one-loop effective actions. Discussion of these cases is beyond the scope of this paper.
The main goal of this paper is to investigate simple one-loop effective potentials obtained in perturbative quantum gravity and to determine whether they alone can provide inflationary models consistent with the observational data. We consider models studied in papers <cit.>. These models describe effective potentials generated by the simplest scalar-tensor models. We show that all discussed models can drive inflation but are inconsistent with the observational data for the simplest case of N=60 e-foldings. The first model favours a large number of e-foldings N∼ 70, while the second model favours a smaller number N ∼ 40. The third model is strongly inconsistent with the observational data, and an agreement cannot be achieved for a wide range of model parameters.
The paper is organised as follows. In Section <ref>, we discuss the models under study in more detail. We cover the basics of perturbative quantum gravity and effective action. We then discuss the effective potentials calculated in <cit.>. In Section <ref>, we discuss inflation parameters that can be calculated in models with such effective potentials and present the results of such calculations. In Section <ref>, we discuss the physical implications of these results and the further development of this approach.
§ ONE LOOP EFFECTIVE POTENTIALS
We consider models based on perturbative quantum gravity. Here we briefly review the formalism; a more detailed discussion is presented in <cit.>. The central premise of perturbative quantum gravity is the association of weak quantum gravitational effects with small metric perturbations propagating in flat spacetime. The following perturbative expansion gives the total metric:
g_μν = η_μν + κ h_μν .
Here κ is the gravitational coupling related to Newton's constant κ^2 = 32 π G. This is a finite expansion, but it produces an infinite series in κ for the Einstein-Hilbert action. The path integral formalism gives a way to compute matrix elements with the generating functional:
𝒵[J] = ∫𝒟[g] exp[ i 𝒜[g] + i J · h ] = ∫𝒟[η + κ h] exp[ i 𝒜[η + κ h] + i J · h ]
= ∫𝒟[h] exp[ i h_μν 𝒟^μναβ □ h_αβ + i J · h + 𝒪(κ) ].
Here 𝒜 is the microscopic action defining the structure of the theory, J are formal external currents, and we have omitted higher order terms describing the graviton interaction. This functional generates matrix elements by taking an appropriate derivative:
⟨ 0 | h_μν(x) ⋯ h_αβ(y) | 0 ⟩ = (1/𝒵[J]) (δ/δ i J^μν(x)) ⋯ (δ/δ i J^αβ(y)) 𝒵[J] |_J=0.
The effective action is constructed as follows. One starts with the connected generating functional W:
𝒵[J] = exp[ i W[J]].
Derivatives of the generating functional 𝒵 generate matrix elements corresponding to all possible Feynman graphs, but the connected generating functional W only generates matrix elements corresponding to connected graphs. The effective action Γ is a Legendre transformation of W:
Γ[φ] = W[J] - φ · J.
This quantity is called the effective action for two reasons. First, the argument φ of the effective action is the classical (vacuum expectation) value of the quantum scalar field ϕ, as follows from the derivative of the connected generating functional W with respect to J:
φ = δ W[J]/δ J = (1/𝒵[J]) δ 𝒵[J]/δ (i J) = ⟨ 0 |ϕ| 0 ⟩ .
Second, the derivative of the effective action with respect to φ vanishes in the limit of vanishing external currents J:
δΓ/δφ = - J → 0 as J → 0.
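For completeness, the second property follows directly from the Legendre transform defined above (a one-line check not spelled out in the text): using δW/δJ = φ from the first property,
δΓ/δφ = (δW/δJ)(δJ/δφ) - J - φ (δJ/δφ) = -J ,
so the effective action is stationary once the external currents are switched off.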
These properties match the properties of a classical action describing a classical system, which allows one to consider the effective action as a proper action describing the classical dynamics of a given quantum system.
This paper considers the effective potentials calculated in the papers <cit.>. These models provide minimal modifications of general relativity, which makes them the simplest natural candidates. As mentioned above, we will discuss only the effective potential of the scalar field rather than the complete effective action for the following reasons. First, inflation can be driven either by a scalar field potential or by the non-minimal coupling of the scalar field to gravity. The one-loop effective action for most scalar-tensor models includes both the effective potential and the non-minimal coupling. It is essential to study their influence separately to distinguish their roles. Second, in models that allow both a scalar field potential and a non-minimal coupling, two independent inflationary phases can occur <cit.>. The non-minimal coupling can have a non-trivial influence on the inflationary phase driven by the scalar field potential. Consequently, it is crucial to evaluate inflationary parameters without non-minimal couplings to see if they can improve the consistency with the empirical data. For this reason, we omit any discussion of non-minimal couplings generated at the one-loop level and focus only on the one-loop effective potential.
The following microscopic action gives the first model:
𝒜_I = ∫ d^4 x √(-g) [ -(2/κ^2) R - (1/2) ϕ(□ + m^2) ϕ ].
It is the simplest model describing a single massive scalar field minimally coupled to gravity. Without gravity, the model cannot develop an effective potential within standard quantum field theory because the scalar field lacks the interaction sector. Within perturbative quantum gravity, it develops the following effective potential at the one-loop level:
V_I = -m^4 ln 2/(64 π^2) + (m^2/2) [ 1 + (m^2 κ^2/(32 π^2)) (1 + ln 4) ] φ^2
+ (m^4/(128 π^2)) ( 1 - 2 κ^2 φ^2 - √(1 - 4 κ^2 φ^2) ) ln[ 1 - √(1 - 4 κ^2 φ^2) ]
+ (m^4/(128 π^2)) ( 1 - 2 κ^2 φ^2 + √(1 - 4 κ^2 φ^2) ) ln[ 1 + √(1 - 4 κ^2 φ^2) ] .
Before renormalisation, the potential had a single ultraviolet divergence in the mass term. It is renormalised at φ=0 so that the classical field φ has the same mass as the quantum field ϕ. The potential has a single local minimum at φ=0 and local maxima deep in the Planck region. The equation defining the positions of these maxima is transcendental, so we could not find an explicit formula for their position. The potential increases monotonically from φ=0 to the local maxima, after which it decreases indefinitely. The maxima lie far in the Planck region, so they naturally mark the limit of the model's applicability.
The following action gives the second model:
𝒜_II = ∫ d^4 x √(-g) [ -(2/κ^2) R - (1/2) ϕ □ ϕ - (λ/2) R ϕ^2 ].
Here λ is a dimensionless coupling. The new non-minimal coupling is the only one that describes the three-particle interaction, contributes to the effective potential and does not introduce higher derivative terms in the field equations. We exclude the scalar field mass term to separate its contribution from this non-minimal coupling. The model produces the following effective potential after renormalisation:
V_II = (m^2/(3 κ^2 λ^2)) ln[ 1 + (3/2) λ^2 κ^2 φ^2 ] .
This potential has a single minimum at φ=0 and grows monotonically with φ.
Unlike the previous case, the potential contains an infinite number of UV-divergent terms parameterised by a single coupling. This provides the basis for a consistent renormalisation of the model: although we have to introduce an infinite number of counterterms, all of them are defined by a single constant. The potential is renormalised at φ=0, where its mass is set to a finite value m. It should also be noted that the UV divergence discussed is a power-law divergence regularised in a cut-off scheme. Consequently, the potential is expected to be sensitive to the UV structure of the theory; in other words, the mass of the classical field is highly sensitive to possible UV extensions of the theory.
Finally, the third model is a generalisation of the well-known Coleman-Weinberg model <cit.>:
𝒜_III = ∫ d^4 x √(-g) [ -(2/κ^2) R - (1/2) ϕ (□ + m^2) ϕ - (λ/4!) ϕ^4 ].
The model is important because it contains a simple scalar field self-interaction, contributing to the effective potential. Without gravity, the model has good renormalisation behaviour. Because of this, the corresponding effective potential will contain two competing contributions, and it would be possible to determine their influence on inflationary scenarios. The corresponding leading order one-loop corrections after renormalisation are given by the following expression:
V_III = ln[ 1 + λ φ^2/(2 m^2) ] { m^4/(64 π^2) + (m^2/2) φ^2 (λ - 2 m^2 κ^2)/(32 π^2) + (λ/4!) φ^4 (3 λ - 8 m^2 κ^2)/(32 π^2) - (κ^2/6!) φ^6 (5 λ^2)/(8 π^2) }
+ (m^2/2) ( 1 - λ/(64 π^2) ) φ^2 + (λ/4!) ( 1 - 3 (3 λ - 8 κ^2 m^2)/(64 π^2) ) φ^4
+ (g/6!) ( 1 - (15 λ^2/(32 π^2)) (λ - 2 m^2 κ^2)/(g m^2) ) φ^6 .
This case is similar to the first model. First, the potential has a single local minimum at φ=0, and it grows with φ until it reaches its maximum and then approaches -∞. Second, the potential has ultraviolet divergences in the mass and φ^4 interaction terms. These divergences can be subtracted by the standard scheme and renormalised to the corresponding values of the microscopic action. In contrast to the first case, the potential also has a divergence in the φ^6 interaction term, which is missing in the microscopic action. Nevertheless, we subtract the corresponding divergence and normalise the potential to a new unknown coupling g with mass dimension -2.
In summary, the three models discussed provide minimal modifications of general relativity with a scalar field. The first model introduces a single scalar field minimally coupled to gravity, the second introduces a scalar field with non-minimal coupling, and the last introduces a minimally coupled scalar field with self-interaction. The simplicity of these models gives them a reasonable degree of universality because one can expect similar behaviour in various other models. For example, the contribution to the effective potential generated in the first model will be present in any model containing a massive scalar field. The corresponding inflation scenarios, in turn, provide a tool for making reasonable assumptions about inflation parameters in a broad class of models.
§ INFLATIONARY PARAMETERS
This paper only considers inflation within scalar-tensor gravity models with minimal coupling and scalar field potential. A general action of such a theory is given by the following:
S = ∫ d^4 x √(-g) [ -(2/κ^2) R + (1/2) g^μν ∇_μφ ∇_νφ - V(φ) ].
For simplicity, we will only consider a spatially flat Friedmann-Robertson-Walker universe described by the following metric:
ds^2 = dt^2 - a^2(t) [ dx^2 + dy^2 + dz^2 ].
We consider only uniform scalar fields that depend solely on the time coordinate, φ = φ(t). This gives us the familiar field equations:
(12/κ^2) H^2 = (1/2) φ̇^2 + V(φ),
φ̈ + 3 H φ̇ + V'(φ)=0 .
The model enters the inflationary expansion when the Hubble parameter H = ȧ/a is approximately constant, which is equivalent to the following conditions on the Hubble parameter <cit.>:
Ḣ(t)/H^2(t) ≪ 1 , Ḧ(t)/H^3(t) ≪ 1.
In the simple inflationary model discussed, they are reduced to the slow roll conditions for the potential:
( V'(φ)/V(φ) )^2 ≪ 1 , V''(φ)/V(φ) ≪ 1.
In turn, one can define the so-called slow roll parameters
ε = (2/(3 κ^2)) ( V'(φ)/V(φ) )^2 , η = (4/(3 κ^2)) | V''(φ)/V(φ) |.
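As a consistency check of these conventions (a short derivation not spelled out in the text), the first Hubble condition indeed reduces to ε ≪ 1. Differentiating the Friedmann equation and using the slow roll relations 3 H φ̇ ≈ -V'(φ) and (12/κ^2) H^2 ≈ V(φ) gives
Ḣ = -(κ^2/8) φ̇^2 ≈ -(κ^2/72) V'^2/H^2 , so Ḣ/H^2 ≈ -(2/κ^2) (V'/V)^2 = -3 ε ,
hence |Ḣ|/H^2 ≪ 1 is equivalent to ε ≪ 1.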
The model enters the slow roll regime when both η and ε are small and exits it when one of these parameters becomes larger than 1. Consequently, the time t_end at which inflation ends is defined by the corresponding condition:
η(t_end) =1 or ε(t_end) =1.
The inflation must last for at least 50 e-foldings to be consistent with the observational data. In the slow roll approximation, the number of e-foldings is given by the following analytical expression:
N (t_end, t_init) = -(κ^2/4) ∫_φ_init^φ_end V(φ)/V'(φ) d φ , φ_init=φ(t_init), φ_end=φ(t_end) .
We use this condition to determine the time t_init at which inflation began. We mainly focus on the case of N=60 e-foldings but also consider some deviations to study if the discussed models can be consistent with the observational data.
We use the tensor-to-scalar ratio r and the tilt of the scalar perturbation spectrum n_s to test the models against the empirical data. In the class of models discussed, these parameters are given by the explicit formulae <cit.>:
r = 48 ε, n_s- 1 = 6 ( η - 3 ε ).
These parameters are well constrained by the present observational data <cit.>:
r< 0.032, n_s=0.9663 ± 0.0041 .
This allows us to check whether these models are consistent with the observational data.
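All parameter scans below follow the same pipeline: find φ_end from the condition ε = 1 or η = 1, integrate back to the φ_init that gives the required number of e-foldings, and evaluate r and n_s there. For orientation, here is a minimal numerical sketch of this procedure (not taken from the paper); the quadratic test potential, the bracketing intervals, and the choice of units κ = 1 are illustrative assumptions, while the conventions for ε, η, N, r, and n_s are those of the equations above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

kappa = 1.0  # work in units where kappa = 1, so phi below is the dimensionless field

def slow_roll(V, dV, ddV, phi):
    """Slow-roll parameters in the paper's conventions."""
    eps = 2.0 / (3.0 * kappa**2) * (dV(phi) / V(phi))**2
    eta = 4.0 / (3.0 * kappa**2) * abs(ddV(phi) / V(phi))
    return eps, eta

def inflation_observables(V, dV, ddV, N_target=60.0, phi_guess=(1e-3, 50.0)):
    # End of inflation: the larger of (eps, eta) reaches unity.
    f_end = lambda p: max(slow_roll(V, dV, ddV, p)) - 1.0
    phi_end = brentq(f_end, *phi_guess)
    # Initial field value: N(phi_init -> phi_end) = N_target.
    def N_of(phi_init):
        val, _ = quad(lambda p: V(p) / dV(p), phi_init, phi_end)
        return -kappa**2 / 4.0 * val
    phi_init = brentq(lambda p: N_of(p) - N_target, phi_end, phi_guess[1] * 10)
    eps, eta = slow_roll(V, dV, ddV, phi_init)
    r = 48.0 * eps
    ns = 1.0 + 6.0 * (eta - 3.0 * eps)
    return phi_init, phi_end, r, ns

# Example: quadratic potential V = m^2 phi^2 / 2 (the massless limit of the first model).
m = 1.0
V   = lambda p: 0.5 * m**2 * p**2
dV  = lambda p: m**2 * p
ddV = lambda p: m**2 + 0.0 * p
print(inflation_observables(V, dV, ddV, N_target=60.0))
# Expect r = 48/(6N+1) ~ 0.13 and n_s = (6N-11)/(6N+1) ~ 0.967 for N = 60.
```

For the quadratic test case this reproduces the analytic relations derived below for the massless limit of the first model.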
§.§ First model
Let us start with the first model with the effective potential (<ref>). The slow roll parameters are given by the following rather cumbersome expressions:
ε = 32 φ^2/(3 κ^2 (1 - 4 κ^2 φ^2)) [ { κ^2 m^2 ( 1 + √(1 - 4 κ^2 φ^2) ) ln( 1 + √(1 - 4 κ^2 φ^2) )
- κ^2 m^2 ( 1 - √(1 - 4 κ^2 φ^2) ) ln( 1 - √(1 - 4 κ^2 φ^2) ) - √(1 - 4 κ^2 φ^2) ( κ^2 m^2 ln 4 + 32 π^2 ) } /
{ 64 π^2 φ^2 + m^2 ( κ^2 φ^2 ( 2 + ln 16 ) - ln 4 ) + m^2 ( 1 - 2 κ^2 φ^2 + √(1 - 4 κ^2 φ^2) ) ln( 1 + √(1 - 4 κ^2 φ^2) )
+ m^2 ( 1 - 2 κ^2 φ^2 - √(1 - 4 κ^2 φ^2) ) ln( 1 - √(1 - 4 κ^2 φ^2) ) } ]^2 ,
η = 16/(3 κ^2 (1 - 4 κ^2 φ^2)^(3/2)) [ { κ^2 m^2 ( 1 + ( 1 - 4 κ^2 φ^2 )^(3/2) ) ln( 1 - √(1 - 4 κ^2 φ^2) )
- κ^2 m^2 ( 1 - ( 1 - 4 κ^2 φ^2 )^(3/2) ) ln( 1 - √(1 - 4 κ^2 φ^2) )
+ √(1 - 4 κ^2 φ^2) ( - 32 π^2 (1 - 4 κ^2 φ^2) + κ^2 m^2 ( 4 κ^2 φ^2 ( ln 4 - 2 ) - ln 4 ) ) } /
{ m^2 ( ln 4 - 2 κ^2 φ^2 ( 1 + ln 4 ) ) - m^2 ( 1 - 2 κ^2 φ^2 + √(1 - 4 κ^2 φ^2) ) ln( 1 + √(1 - 4 κ^2 φ^2) )
- m^2 ( 1 - 2 κ^2 φ^2 - √(1 - 4 κ^2 φ^2) ) ln( 1 - √(1 - 4 κ^2 φ^2) ) - 64 π^2 φ^2 } ] .
Inflation can occur only when both of these parameters are small. Because of the complicated form of the potential and the slow roll parameters, one can hardly draw meaningful conclusions about inflation from these expressions alone. We analyse the parameter space of the model to draw comprehensive conclusions.
We present plots showing the region of the model's (φ, m) parameter space where the slow roll parameters are small in Figure <ref>. In these plots the dimensionful parameters φ and m are given in units of κ, i.e. the plotted quantities are κφ and κm. In the following analysis, we will also use the dimensionless variables for simplicity:
φ̃ = κ φ , m̃ = κ m .
The suitable region of the parameter space contains a narrow branch where the slow roll parameters are small. Nevertheless, a realistic inflationary scenario can hardly occur there because the branch is too narrow to accommodate 60 e-foldings. Moreover, the branch occupies the region of small field values φ̃ and large masses m̃ > 10, which makes the setup marginal. Because of this, we will not study scenarios taking place there.
The presence of ln-functions in the potential (<ref>) makes obtaining analytic expressions for most inflationary scenarios impossible. We will use numerical calculations to study inflation in most cases. However, there are two limiting cases where analytic expressions can be obtained. These are the massless limit m→ 0 and the heavy mass limit m→∞. In both of these limits, the effective potential loses its physical meaning: in the massless limit it vanishes, and in the heavy mass limit it becomes infinite. On the contrary, the expressions for the slow roll parameters remain finite and well-defined. This implies neither that inflation can occur without a potential nor that it can occur with an infinite potential. It only shows that the inflationary parameters exhibit a universal behaviour with a weak mass dependence in these limiting cases.
We begin with the massless limit. In that limit, slow roll parameters read:
ε = η = 8/(3 φ̃^2) .
In turn, the expression defining the number of e-foldings is
N (t_end, t_init) = - (1/8) ∫_φ̃_init^φ̃_end φ̃ d φ̃ .
These expressions match the case of a model with φ^2 potential, so we obtain the well-known relations between the inflationary parameters r, n_s and the number of e-foldings N:
r = 48/(6 N + 1) , n_s = (6 N - 11)/(6 N + 1) .
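For instance, evaluating these relations at the reference value N = 60 (a quick numerical check, not quoted in the text) gives r = 48/361 ≈ 0.13 and n_s = 349/361 ≈ 0.967: the spectral tilt falls inside the observational window, while the tensor-to-scalar ratio exceeds the bound r < 0.032 by roughly a factor of four.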
Consequently, this limiting case cannot fit the data: for any given N, r and n_s cannot simultaneously be consistent with the observational data. The scalar spectral tilt is consistent with observations for 52.7434 < N < 67.4009, while the tensor-to-scalar ratio is consistent only for N > 249.833. Figure <ref> presents the plot showing the model's behaviour.
We proceed with the heavy mass limit. It is possible to obtain explicit analytic expressions for the slow roll parameters in this case; however, they are cumbersome and can hardly be analysed directly, so we omit them for brevity. They also contain multiple ln functions, so analytic expressions for the boundary field values and the number of e-foldings cannot be obtained. We study the model with numerical methods and find that it exhibits similar behaviour. Namely, for 56 ≲ N ≲ 70 e-foldings, the model generates n_s consistent with the observational data, but the tensor-to-scalar ratio remains large. The corresponding plot is given in Figure <ref>.
Finally, we proceed with the analysis for an arbitrary scalar field mass. We numerically study the cases of N=50, 60, 64, and 70 e-foldings. The results are shown in Figure <ref>. The model is inconsistent with the observational data for the most natural case of N=60 e-foldings. It develops the correct values of the scalar spectrum tilt n_s for a large band of masses, but the tensor-to-scalar ratio is too big even for large masses. The case of N=50 e-foldings shows an even stronger disagreement, as the model develops a suitable scalar spectrum tilt only in a narrow band of large masses m. This indicates that the model favours a large number of e-foldings. The cases of N=64 and N=70 e-foldings support that claim. Namely, the N=70 case is consistent with the observational data for 6.63 ≲ m̃ ≲ 6.76. In terms of the Planck mass m_P = √(ħ c / G_N) this corresponds to 0.66 m_P ≲ m ≲ 0.67 m_P.
We conclude that the model (<ref>) is consistent with the observational data for N ≳ 64 e-foldings and 0.66 m_P≲ m ≲ 0.67 m_P. We will study this model in further publications and discuss its implications in Section <ref>.
§.§ Second model
For the effective potential (<ref>), the slow roll parameters read:
ε = 6 λ^4 κ^2 φ^2 / [ ( 1 + (3/2) λ^2 κ^2 φ^2 )^2 ln^2( 1 + (3/2) λ^2 κ^2 φ^2 ) ] , η = 8 λ^2 ( 1 - (3/2) λ^2 κ^2 φ^2 ) / [ ( 1 + (3/2) λ^2 κ^2 φ^2 )^2 ln( 1 + (3/2) λ^2 κ^2 φ^2 ) ] .
The mass parameter m in (<ref>) appears as a multiplier and does not enter the ln function, so it does not enter the slow roll parameters and does not affect inflation. The value of this mass parameter m is sensitive to the UV structure of the theory since it is obtained by the cut-off regularisation, as discussed above. The fact that m does not enter any observable parameters suggests that this UV sensitivity is negated.
In full analogy with the previous case, it is useful to study limiting cases first. The model has two suitable limits: the decoupling limit λ→ 0 and the strong coupling limit λ→∞. In the decoupling limit, the model potential remains well-defined and completely reduces to the φ^2-potential:
V_II  λ→ 0⟶  m^2 φ^2/2 .
Once again, this case is well-known, and we will not discuss it further. The strong coupling limit exhibits a more interesting behaviour because both the potential and the slow roll parameters vanish in that limit. In full analogy with the previous case, the model develops a universal behaviour with a weak mass dependence. Moreover, in that limit, almost any initial value of the scalar field is suitable for inflation. In that sense, the model favours large values of λ because they leave more room for inflation.
Let us proceed with further study of inflationary parameters. The simple form of the effective potential (<ref>) allows one to obtain an analytical solution for the integral present in the number of e-foldings:
- (κ^2/4) ∫ V(φ)/V'(φ) dφ = (1/(24 λ^2)) [ (3/2) λ^2 κ^2 φ^2 + Li_2( -(3/2) λ^2 κ^2 φ^2 ) - ( 1 + (3/2) λ^2 κ^2 φ^2 ) ln( 1 + (3/2) λ^2 κ^2 φ^2 ) ] .
Here Li_2 is the dilogarithm. Although it is possible to obtain this analytic formula, it is still impossible to obtain analytic expressions for the scalar field at the beginning and end of inflation due to the presence of transcendental functions.
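The antiderivative can also be checked numerically; the sketch below (not part of the paper) compares it against direct quadrature of -(κ^2/4) V/V' for an arbitrary, illustrative value of λ, working in units κ = 1 so that φ stands for the dimensionless field.

```python
import numpy as np
from scipy.integrate import quad
from mpmath import polylog

lam = 1.0          # non-minimal coupling lambda (assumed value for illustration)
b = 1.5 * lam**2   # shorthand: b = (3/2) * lambda^2 * kappa^2 with kappa = 1

def closed_form(phi):
    # Right-hand side of the analytic formula above.
    u = b * phi**2
    return (u + float(polylog(2, -u)) - (1 + u) * np.log(1 + u)) / (24 * lam**2)

def integrand(phi):
    # -(kappa^2/4) * V/V' for V = const * ln(1 + b phi^2); the constant prefactor cancels.
    u = b * phi**2
    return -0.25 * np.log(1 + u) * (1 + u) / (2 * b * phi)

phi1, phi2 = 0.5, 5.0
numeric = quad(integrand, phi1, phi2)[0]
analytic = closed_form(phi2) - closed_form(phi1)
print(numeric, analytic)   # the two values should agree to numerical precision
```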
The area of the model parameter space where the inflationary parameters are small is given in Figure <ref>. In contrast with the previous case, the region where inflation is allowed has no peculiarities. Inflation parameters for this model for the cases of N=40, 50, and 60 e-foldings are shown in Figure <ref>. The model is inconsistent with the observational data for the natural case of N=60 e-foldings. Although it admits the correct scalar spectrum tilt for small values of λ, it develops a large tensor-to-scalar ratio. In contrast with the previous case, the model favours the small number of e-foldings and becomes consistent with the data for N=40 e-foldings. At the same time, it also requires a large value of the non-minimal coupling λ∼ 1.
We conclude that this model is inconsistent with the observational data because it favours a smaller number of e-foldings N < 50; it may not last long enough to resolve the horizon and the flatness problems. The model favours the strong coupling regime λ∼ 1, so it operates at the border of effective field theory applicability. In the strong coupling regime, the original calculations within perturbative quantum gravity can lose their applicability; consequently, the effective potential may no longer be applicable. We discuss the implications of these results further in Section <ref>.
§.§ Third model
Finally, we will consider a model with the effective potential (<ref>). The following expressions give the slow roll parameters:
ε = (8/(3 κ^2)) (1/(2 m^2 + λ φ^2)^2) [ { φ^7 ( 96 π^2 g λ m^2 + 5 λ^3 ( 14 κ^2 m^2 - 9 λ ) ) + 23040 π^2 m^6 φ
+ 6 φ^5 ( 32 π^2 g m^4 + 5 λ^2 m^2 ( - 9 λ + 22 κ^2 m^2 + 64 π^2 ) )
+ 120 λ m^4 φ^3 ( - 3 λ + 6 κ^2 m^2 + 128 π^2 )
- 60 ln( 1 + λ φ^2/(2 m^2) ) φ ( 2 m^3 + λ m φ^2 )^2 ( λ ( κ^2 φ^2 - 3 ) + 6 κ^2 m^2 ) } /
{ φ^6 ( 32 π^2 g m^2 + 15 λ^2 ( 2 κ^2 m^2 - λ ) ) + 180 ( 64 π^2 - λ ) m^4 φ^2
+ 15 λ m^2 φ^4 ( - 9 λ + 24 κ^2 m^2 + 64 π^2 ) - 10 ln( 1 + λ φ^2/(2 m^2) )
× ( 36 m^6 ( 2 κ^2 φ^2 - 1 ) + 12 λ m^4 φ^2 ( 2 κ^2 φ^2 - 3 ) + λ^2 m^2 φ^4 ( 2 κ^2 φ^2 - 9 ) ) } ]^2 ,
η = - (40/(3 κ^2)) (1/(2 m^2 + λ φ^2)^2) { φ^8 ( 96 π^2 g λ^2 m^2 + λ^4 ( 46 κ^2 m^2 - 45 λ ) )
+ 8 φ^6 ( 48 π^2 g λ m^4 + λ^3 m^2 ( - 27 λ + 44 κ^2 m^2 + 144 π^2 ) )
+ φ^4 ( 384 π^2 g m^6 + 36 λ^2 m^4 ( - 9 λ + 22 κ^2 m^2 + 192 π^2 ) ) + 9216 π^2 m^8
+ 144 λ m^6 φ^2 ( - λ + 2 κ^2 m^2 + 96 π^2 ) - 12 ln( 1 + λ φ^2/(2 m^2) )
× ( 2 m^3 + λ m φ^2 )^2 ( λ^2 φ^2 ( 5 κ^2 φ^2 - 9 ) + 12 κ^2 m^4 + 6 λ m^2 ( 4 κ^2 φ^2 - 1 ) ) } /
{ φ^6 ( 15 λ^2 ( λ - 2 κ^2 m^2 ) - 32 π^2 g m^2 ) + 180 ( λ - 64 π^2 ) m^4 φ^2
- 15 λ m^2 φ^4 ( - 9 λ + 24 κ^2 m^2 + 64 π^2 ) + 10 ln( 1 + λ φ^2/(2 m^2) )
× ( 36 m^6 ( 2 κ^2 φ^2 - 1 ) + 12 λ m^4 φ^2 ( 2 κ^2 φ^2 - 3 ) + λ^2 m^2 φ^4 ( 2 κ^2 φ^2 - 9 ) ) } .
Figures <ref> and <ref> contain plots showing regions of the model parameter space where these slow roll parameters are small.
The model admits two important limiting cases: the massless case m→ 0 and the decoupling case λ→ 0, g → 0. In the massless case, the slow roll parameters read
ε = 24/φ̃^2 , η = 40/φ̃^2 .
Here we use the same definition of the dimensionless scalar field φ̃. This gives the scalar field value at the end of inflation
φ̃_end = 2 √(10).
The initial value of the scalar field is related to the number of e-foldings N:
φ̃_init = √(48 N + 40) .
This gives the following relations for the inflationary parameters:
r = 144/(6 N + 5) , n_s = (6 N - 19)/(6 N + 5) .
In this limit, the model is inconsistent with the observational data. Namely, the tensor-to-scalar ratio is small enough only for N ≳ 750, while the scalar spectrum tilt is consistent for 110 ≲ N ≲ 134. In the decoupling limit λ→ 0, g → 0, the model potential reduces to the φ^2 potential, in full similarity with the previous cases. Plots showing the model behaviour in these limiting cases are given in Figure <ref>.
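The quoted thresholds follow directly from these relations (a quick evaluation for orientation): the bound r < 0.032 requires 144/(6N+5) < 0.032, i.e. N > (144/0.032 - 5)/6 ≈ 750, while the allowed window for N follows in the same way from n_s = 1 - 24/(6N+5) and the observational range n_s = 0.9663 ± 0.0041.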
These limiting cases have a certain similarity with the first model. For both models, the limiting cases are inconsistent with the data. Nonetheless, the first model is consistent with the data for some intermediate parameter values. Therefore, the discussed model (<ref>) can only be consistent with the data for some intermediate parameter values. The search for such a parameter region is a challenging problem because of the structure of parameter space. Figures <ref> and <ref> show that the acceptable region of parameter space is split into two pieces. Still, it is possible to constrain the problem reasonably and to perform some meaningful analysis.
We base our treatment on the following premises. Firstly, we are only interested in a reasonably small range of masses κ m < 10. These masses lie below the Planck mass m_P, so they are of immediate interest. Although one might try to justify models with a scalar field mass above the Planck mass, we believe such a case is irrelevant within the effective field theory treatment in which the discussed effective potential is obtained. Secondly, we assume that the initial value of the scalar field should be large. Following the well-known description of inflation <cit.>, it is safe to assume that the initial values of the scalar field lie in the Planck region, as at the beginning of inflation the universe is created from a high-energy quantum state. These two considerations suggest that inflation should occur in the region situated in the bottom right corner of Figure <ref>.
The structure of that region favours two regimes: λ = 0 and λ ≫ 1, g̃ ∼ 0, where g̃ is the coupling g expressed in units of κ. In the λ = 0 case, the region is not split into two parts for any value of g̃ and can be analysed easily. For small but non-vanishing λ and g̃, the region is split into two parts; moreover, the desired region enforces extremely large initial and final scalar field values, so it can hardly be relevant. The situation is similar for λ ≫ 1 and g̃ ≫ 1. The only remaining case, λ ≫ 1, g̃ ∼ 0, admits scenarios with reasonably large initial and final values of the scalar field.
Let us start with the λ = 0 case. It is inconsistent with the observational data. Figure <ref> presents plots for the g=1 and g=10 cases. These plots show that the inflationary parameters approach the desired region for smaller values of the mass. At the same time, as discussed above, the massless limit is also inconsistent with the data by a significant margin. Thus, we conclude that this case can hardly be brought into consistency with the data. The other case, λ ≫ 1, g̃ = 0, is also inconsistent with the data. Figure <ref> shows inflationary parameters for different masses with λ = 1 and λ = 5.
Based on these data, there is no reason to believe that the model admits a set of parameters consistent with the observational data. Let us highlight once more that the chosen region of parameters corresponds to the most natural physical setup. Although we cannot exclude the possibility of tuning this model into agreement with the data, such a set of parameters would be highly marginal. We discuss these results in Section <ref>.
§ DISCUSSION AND CONCLUSION
We explored the possibility of describing inflation as an effect dynamically generated by quantum effects at the one-loop level. We studied the three simplest models proposed in papers <cit.>.
The first model describes a single scalar field of non-vanishing mass minimally coupled to general relativity. The model develops an effective potential (<ref>). Plots of its inflation parameters for N=50, 60, 64, and 70 e-foldings are shown in Figures <ref>. The model is consistent with the observational data for N ≳ 64 e-foldings. The corresponding suitable values of the mass lie slightly below the Planck scale: for N=70 the allowed mass range is 0.66 m_P ≲ m ≲ 0.67 m_P.
The second model describes a single massless scalar field non-minimally coupled to the Einstein tensor. The model develops an effective potential (<ref>), generating the scalar field mass. The generated mass parameter is UV sensitive because the effective potential is evaluated in the cut-off regularisation scheme. In other words, the new mass parameter depends on the cut-off scale of the model. This UV dependence is completely negated because the mass parameter does not enter any expression for the inflation parameters and does not affect inflation. Plots of the inflation parameters of the model for N=40, 45, 50, and 60 e-foldings are shown in Figure <ref>. The model is disfavoured because it is consistent with the data only for a small number of e-foldings and in the strong coupling regime. Because of the small number of e-foldings, the model may not be capable of resolving the horizon and the flatness problems. Because the model requires a strong coupling regime, the expression for the effective potential calculated within perturbative quantum gravity may no longer be applicable.
The last model describes a generalisation of the Coleman-Weinberg model for the gravitational case. The model develops an effective potential (<ref>). It has three parameters, and only one of them, the six-particle coupling g, is UV sensitive. The model is inconsistent with the observational data for the realistic range of parameter values. Plots of the inflation parameters of this model are shown in Figures <ref>.
We interpret these results in the following way. As discussed in the introduction, these models present explicit examples of inflationary scenarios driven purely by quantum effects. They are similar to Starobinsky inflation, which uses higher curvature terms to describe inflationary expansion. The models discussed in this paper use a similar approach to inflation, but use the scalar field effective potential to drive inflation. Each model provides a separate piece of information about the viability of such an approach to inflation.
The first model provides the best example of inflation driven by pure quantum effects. It relies on the simplest possible quantum scalar-tensor model, yet it can generate an effective potential consistent with the observations. The model has a single free parameter, the scalar field mass, which is well constrained by the requirement of consistency and lies below the Planck scale.
The second model shows the role of a possible non-minimal coupling to gravity. The model can be consistent with the data, but only for marginal values of the parameters: it favours a small number of e-foldings and a strong coupling regime. It is safe to conjecture that if such a non-minimal coupling is introduced at the level of the microscopic action together with the mass term, the resulting model will improve on the first model.
The third model shows that the scalar field potential presented in the microscopic action plays the leading role. The Coleman-Weinberg model on its own is inconsistent with the Planck data <cit.>. Its generalisation considered in this paper admits a larger space of parameters, but it is still inconsistent with the observations for the natural choice of parameters. This case shows that quantum gravitational corrections can hardly significantly improve a potential initially inconsistent with the data.
Lastly, a comment is due on the terms omitted from the effective action. The full effective action includes the effective potential and non-potential terms. The most well-known is the non-minimal kinetic coupling between a scalar field and the Einstein tensor. It is universally generated at the one-loop level <cit.> and can also drive inflation <cit.>. Therefore, it is essential to develop the discussed approach to inflation further and consider models with non-minimal couplings. This development will be done in further publications.
§ ACKNOWLEDGEMENT
The work by AA and DK was supported by RSF grant 22-22-00294. The work by BL was supported by the Institute for Basic Science Grant IBS-R018-Y1.
entry_id: http://arxiv.org/abs/2307.02704v1
published: 20230706004438
title: Large Myr-old Disks are Not Severely Depleted of gas-phase CO or carbon
authors: Ilaria Pascucci, Bennett N. Skinner, Dingshan Deng, Maxime Ruaud, Uma Gorti, Kamber R. Schwarz, Edwige Chapillon, Miguel Vioque, James Miley
primary_category: astro-ph.SR
categories: astro-ph.SR, astro-ph.EP, astro-ph.GA
Corresponding author: Ilaria Pascucci ([email protected])

Ilaria Pascucci (ORCID 0000-0001-7962-1683): Lunar and Planetary Laboratory, The University of Arizona, Tucson, AZ 85721, USA
Bennett N. Skinner (ORCID 0009-0000-9731-2462): Lunar and Planetary Laboratory, The University of Arizona, Tucson, AZ 85721, USA
Dingshan Deng (ORCID 0000-0003-0777-7392): Lunar and Planetary Laboratory, The University of Arizona, Tucson, AZ 85721, USA
Maxime Ruaud (ORCID 0000-0003-0522-5789): NASA Ames Research Center, Moffett Field, CA 94035, USA; Carl Sagan Center, SETI Institute, Mountain View, CA 94035, USA
Uma Gorti (ORCID 0000-0002-3311-5918): NASA Ames Research Center, Moffett Field, CA 94035, USA; Carl Sagan Center, SETI Institute, Mountain View, CA 94035, USA
Kamber R. Schwarz: Max Planck Institute for Astronomy, Königstuhl 17, D-69117, Heidelberg, Germany
Edwige Chapillon: Institut de Radioastronomie Millimétrique (IRAM), 300 rue de la Piscine, 38406, Saint-Martin-d'Hères, France; Laboratoire d'Astrophysique de Bordeaux, Université de Bordeaux, CNRS, B18N, Allée Geoffroy Saint-Hilaire, 33615 Pessac, France
Miguel Vioque (ORCID 0000-0002-4147-3846): Joint ALMA Observatory, Alonso de Córdova 3107, Vitacura, Santiago 763-0355, Chile; National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903, USA
James Miley (ORCID 0000-0002-1575-680X): Joint ALMA Observatory, Alonso de Córdova 3107, Vitacura, Santiago 763-0355, Chile; National Astronomical Observatory of Japan, NAOJ Chile Observatory, Los Abedules 3085, Oficina 701, Vitacura, Santiago, Chile
We present an ACA search for [C I] emission at 492 GHz toward large T Tauri disks (gas radii ≳ 200 au) in the ∼ 1-3 Myr-old Lupus star-forming region. Combined with ALMA 12-m archival data for IM Lup, we report detections in 6 out of 10 sources, thus doubling the known [C I] detections toward T Tauri disks.
We also identify four Keplerian double-peaked [C I] profiles and demonstrate that [C I] fluxes correlate with ^13CO, C^18O, and ^12CO (2-1) fluxes, as well as with the gas disk outer radius measured from the latter transition. These findings are in line with the expectation that atomic carbon traces the disk surface. In addition, we compare the [C I] and CO line luminosities of the Lupus and literature samples with [C I] detections with predictions from the self-consistent disk thermo-chemical models of <cit.>. These models adopt ISM carbon and oxygen elemental abundances as input parameters. With the exception of the disk around Sz 98, we find that these models reproduce all available line luminosities and upper limits with gas masses comparable to or higher than the minimum mass solar nebula and gas-to-dust mass ratios ≥ 10. Thus, we conclude that the majority of large Myr-old disks conform to the simple expectation that they are not significantly depleted in gas, CO, or carbon.
§ INTRODUCTION
Recent high-resolution images of circumstellar disks around young (∼ 1-10 Myr) stars have revealed a variety of complex structures <cit.>, some of which point to advanced planet formation. Hence, these disks provide an opportunity to study planet formation in action. Yet, some of their fundamental properties, such as the gas disk mass and the gas-to-dust mass ratio (hereafter Δ_ gd) - which determine what planets can form as well as the disk lifetime <cit.> - remain poorly constrained <cit.>.
CO is expected to be the most abundant and easily observed tracer of molecular hydrogen, the main gaseous reservoir in young disks, but recent studies have questioned this expectation, especially for young solar analogues (hereafter, T Tauri stars). ALMA surveys targeting CO isotopologues
<cit.>
have reported line fluxes far lower than early theoretical estimates obtained by scaling the dust disk mass with the interstellar medium (ISM) Δ_ gd of 100 and using the canonical abundance of CO/H_2 ≈ 10^-4 <cit.>. The mechanisms proposed to explain the CO under-abundance can be grouped into variants of two scenarios: (i) the gas disk has been dispersed and therefore Δ_ gd≪ 100 <cit.>, or (ii) Δ_ gd is still high but CO is not a good tracer of the gas mass because of chemical processing that transforms CO into other less easily observable species combined with dynamical processes that sequester CO into midplane ice and redistribute it inward via pebble drift <cit.>.
It is important to note that these early disk models did not include all the relevant physical and chemical processes for interpreting CO emission lines. For instance, <cit.>, <cit.>, <cit.>, and <cit.> include CO freeze-out and/or photodissociation in varying degrees of sophistication but no isotopologue selective dissociation, see <cit.> instead for its implementation. Conversion of CO into CO_2 ice on dust grains was also lacking but later found to be significant at radial snowlines where the temperature is ≲ 30 K and for cosmic-ray ionization rates ≳ 5 × 10^-18 s^-1 <cit.>.
Recently, <cit.> (hereafter RGH22) developed models with a self-consistent gas density and temperature structure coupled with vertical pressure equilibrium and further explored the effect of grain surface chemistry on the vertical location of the CO snowline. They found that photoprocessing of the ice by stellar and interstellar FUV photons dominates at the interface between the molecular layer and the disk midplane and efficiently converts CO into CO_2 ice, shifting the vertical CO snowline higher up and effectively reducing the amount of gas-phase CO <cit.>. These new models adopt ISM-like elemental abundances as input parameters and reproduce optically thin C^18O line fluxes with gas-to-dust ratios of ∼ 100 without requiring any other chemical or dynamical processes to reduce CO. As such, RGH22 argue that there is no severe CO depletion and that C^18O emission is a good tracer of the gas disk mass. This argument is supported by Deng et al. 2023 in press (arXiv:4990967), where a favorable comparison between model predictions and observations is extended to C^18O velocity and radial profiles.
Alongside carbon monoxide, searches for its dissociation products, especially neutral atomic carbon, have been carried out to investigate elemental abundance depletions. Atomic carbon forms in a thin region between the CO photodissociation and carbon ionization fronts <cit.>, hence it is expected to be abundant at the surface of protoplanetary disks. In addition, the [C I] forbidden lines are predicted to be optically thin <cit.>, hence valuable probes of the elemental carbon abundance in the disk surface. <cit.> carried out one of the first searches for atomic carbon, focusing on CQ Tau, a Herbig Ae star whose disk was found to have a low CO-to-dust ratio <cit.>.
The comparison of their [C I] 1-0 and 2-1 line upper limits with several chemical model predictions indicated a Δ_ gd of only a few for this disk, suggesting that it may be at a transition phase between protoplanetary and debris. Deeper searches in the [C I] line toward more sources led to the first bona fide disk detections, one around a Herbig Ae star and two around T Tauri stars <cit.>. Modeling of these lines and other CO isotopologues led <cit.> to conclude that HD 100546 is at most moderately depleted in gas-phase carbon while TW Hya is depleted by two orders of magnitude compared to the ISM value. More recently, <cit.> reported four new disk-like detections and estimated [C/H] depletion factors of ∼ 150 in DL Tau, ∼ 15 in DO Tau, and only ∼ 5 in DR Tau. Clearly, more detections of atomic carbon are necessary to establish the extent of carbon depletion in Myr-old disks.
Here, we summarize results from our ALMA survey targeting large gaseous disks around T Tauri stars in the nearby ∼ 1-3 Myr-old Lupus star-forming region <cit.>. Sect. <ref> discusses our observational strategy and analysis which, combined with archival data for IM Lup, led to six new [C I] detections. In Sect. <ref> we demonstrate that the Lupus detections are consistent with disk emission and that the [C I] fluxes correlate with literature fluxes from ^12CO, ^13CO, and C^18O, as expected if [C I] traces the disk surface. We also discuss the Lupus and the literature sample with [C I] detections in the context of the RGH22 models (Sect. <ref>) and already published inferences about CO and [C/H] depletion (Sect. <ref>). Finally, we provide a summary and outlook in Sect. <ref>.
Table: Stellar and disk properties relevant to this study

| ID | Name (Other Name) | 2MASS | Lupus sub-group | Dist. (pc) | M_* (M_⊙) | Log Ṁ_acc (M_⊙/yr) | F_1.3mm (mJy) | R_dust (”) | i (deg) | F_C^18O (Jy km/s) | F_^13CO (Jy km/s) | F_^12CO (Jy km/s) | R_CO (”) | Ref. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Sz 71 (GW Lup) | J15464473-3430354 | I | 155.20 | 0.41 | -9.03 | 69.15 | 0.63 | -40.8 | 0.076a | 0.58a | 2.175 | 1.45 | 1,2,3 |
| 2 | RY Lup | J15592838-4021513 | off-cloud | 158 | 1.27 | -8.05 | 86.11 | 0.89 | 68.0 | 0.765 | 2.502 | 6.615 | 1.67 | 1,2,3,4 |
| 3 | SSTc2dJ160002.4-422216 | J16000236-4222145 | IV | 160.39 | 0.19 | -9.48 | 49.96 | 0.75 | 65.7 | 0.052 | 0.976 | 2.96 | 1.77 | 1,2,4 |
| 4 | Sz 133 | J16032939-4140018 | IV | 158 | ≈0.7b | – | 27.02 | 0.95 | 78.5 | <0.054 | 0.282 | 2.12 | 1.59 | 5,2,4 |
| 5 | Sz 91 | J16071159-3903475 | III | 159.39 | 0.52 | -9.08 | 9.52 | 0.77 | 51.7 | <0.087 | 1.097 | 2.7 | 2.25 | 1,2,4 |
| 6 | Sz 98 (HK Lup, V1279 Sco) | J16082249-3904464 | III | 156.27 | 0.55 | -7.44 | 103.35 | 0.95 | -47.1 | <0.054 | <0.081 | 3.55 | 1.79 | 1,2,4 |
| 7 | SSTc2dJ160830.7-382827 | J16083070-3828268 | III | 158 | 1.27 | -9.2 | 38.76 | 0.91 | -74.0 | 1.454 | 2.736 | 7.281 | 1.97 | 1,2,4 |
| 8 | V1094 Sco | J16083617-3923024 | III | 158 | 0.83 | -8.01 | 180.0 | 1.67 | -55.4 | 1.0a | 6.6a | 29 | 2.19 | 1,6,7,3,4 |
| 9 | Sz 111 | J16085468-3937431 | III | 158.37 | 0.52 | -9.47 | 60.29 | 0.67 | -53.0 | 0.586 | 2.187 | 5.963 | 2.31 | 1,2,4 |
| 10 | IM Lup (Sz 82)c | J15560921-3756057 | II | 155.82 | 0.72 | -7.85 | 205.0 | 1.5 | -48.0 | 1.325 | 5.893 | 13.7 | 2.6 | 1,2,4 |
F_C^18O, F_^13CO, and F_ ^12CO are line fluxes for the 2-1 transition of the respective CO isotopologues. R_ dust and R_ CO are the dust and gas radii containing 90% of the continuum and of the ^12CO(2-1) transition flux.
aThe reported line fluxes in <cit.> are significantly different from those in Deng et al. in prep., which rely on deeper ALMA exposures. Therefore, for this study, we have opted to utilize the latter values, which come with an estimated uncertainty of 20%.
bUnder luminous source due to edge-on disk, approximate mass from effective temperature and cluster age.
cIM Lup was not part of our ACA survey because of already available ALMA Band 8 12-m data covering the line
1. <cit.>; 2. <cit.>; 3. Deng et al. in prep.; 4. <cit.>; 5. <cit.>; 6. <cit.>; 7. <cit.>
§ OBSERVATIONS AND ANALYSIS
Our ACA sample was selected from the ALMA Band 6 Lupus survey <cit.> to include disks with large gas outer radii as measured from the ^12CO (2-1) transition and a broad range of millimeter continuum flux densities. The first criterion was applied to boost the detection rate, because the [C I] line is expected to probe gas as far out as the ^12CO (2-1) line, see Sect. <ref>. The second criterion was applied to investigate disks with a large range of dust (and possibly gas) masses. IM Lup, which hosts one of the largest disks in Lupus <cit.>, was not included in our ACA sample because already available ALMA 12-m data cover and detect the [C I] 1-0 line. Table <ref> presents the properties of both our ACA sample and IM Lup that are relevant to this study, including their respective stellar and disk characteristics. When collecting the literature isotopologue fluxes from <cit.>, we noticed an unrealistic C^18O upper limit of 0.07 Jy km/s for the large disk of V1094 Sco. van Terwisga (priv. comm.) commented that this upper limit is unreliable because it was computed over the 0.25” beam size of shallow observations. Thankfully, V1094 Sco, as well as Sz 71, which was undetected in both ^13CO and C^18O in <cit.>, have deeper CO isotopologue exposures through the ALMA Large Program AGE-PRO (2021.1.00128.L, PI: K. Zhang). These newer observations detect both disks in ^13CO and C^18O and find that V1094 Sco is ∼ 14 × brighter in C^18O while Sz 71 is ∼ 7 × brighter in ^13CO than indicated by the <cit.> upper limits. As such, we adopt the AGE-PRO CO isotopologue fluxes from Deng et al. in prep. in this study.
§.§ Observations
Our ACA Band 8 data were acquired between February 2020 and August 2021 as part of the program 2019.1.00927.S (PI: I. Pascucci, ALMA Cycle 7). A scheduling block including all nine sources was repeated 19 times during this time frame; 13 executions passed the Q0 quality assurance, enabling further calibration. The setup included a spectral window (SPW) centered around the [C I] line at 492.161 GHz with a total bandwidth of 1 GHz and 2048 channels (spectral resolution ∼ 0.3 km/s), as well as a main continuum SPW centered at 491 GHz with a bandwidth of 2 GHz and 128 channels. The other two SPWs were centered around 480.269 GHz and 478.633 GHz for a possible serendipitous detection of CH_3OH emission and had 2 GHz bandwidth with 2048 channels. As no CH_3OH lines were detected, these two latter SPWs are also used to image the continuum (Sect. <ref>). The requested exposure times were ∼ 1.3 h per source to achieve an rms of 0.1 Jy/beam over twice the spectral resolution in the SPW covering the [C I] line. Actual exposure times per source varied from 37 min for Sz 71 to 56 min for V1094 Sco, see Appendix <ref> for details.
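For orientation, the quoted velocity resolution follows directly from the correlator setup; a quick back-of-the-envelope check (not from the paper):

```python
# Channel width of the [C I] spectral window and the corresponding velocity resolution.
c_kms = 299792.458          # speed of light in km/s
bandwidth_ghz = 1.0         # total bandwidth of the [C I] SPW
n_channels = 2048
nu_ghz = 492.161            # [C I] 1-0 rest frequency

dnu_ghz = bandwidth_ghz / n_channels
dv_kms = c_kms * dnu_ghz / nu_ghz
print(f"channel width = {dnu_ghz*1e3:.3f} MHz -> {dv_kms:.2f} km/s")  # ~0.49 MHz, ~0.30 km/s
```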
The IM Lup 12-m data were acquired in March 2016 as part of the program 2015.1.01137.S (PI: T. Tsukagoshi, ALMA Cycle 3). The setup included two SPWs with 2 GHz bandwidth for the continuum, centered at ∼ 480 and ∼ 478 GHz, and two SPWs with 59 MHz bandwidth and 240 channels (channel width 0.15 km/s) to cover the [C I] and the CS (10-9) lines. IM Lup was observed for ∼ 9 min with 41 antennas, delivering a synthesized beam of 0.37” × 0.32” and a PA of 75°.
§.§ Data reduction and analysis
ACA Sample
The ACA data were initially manually calibrated by the North American ALMA Science Center using the CASA pipeline version 6.2.1.7. Using the same pipeline version, we first split off the calibrated data and concatenate the 13 executions for our targets.
Next, we flag the channels where the [C I] line could be detected (500-1150 for SPW 0) and the channels where there is strong water vapor absorption (780-850, 1250-1490, and 1770-1870 in SPW2), and generate a continuum measurement set per target. From this set, we produce a first image per target using the task with Briggs weighting, robust=0.5, and a shallow threshold of 10 mJy, which is ∼ 3 times the expected rms from the ALMA sensitivity calculator for an exposure of 45 minutes in Band 8. We also use a mask centered at each target's coordinates (obtained from the continuum measurement set) that is two times the ACA synthesized beam[At 491 GHz the primary beam of a 7m ACA antenna is ∼ 21 ” while that of a 12m ALMA antenna is ∼ 13 ”] (3.2 ”× 2 ”, PA of -68°) to cover most of the expected emission based on the CO emitting radii (see Table <ref>). All sources are detected in the continuum. The columns in Table <ref> provide a first estimate of the peak signal-to-noise (hereafter, S/N) within the mask and of the rms in an annulus from 7” to 8” centered around each target.
We also performed self-calibration on the continuum for all sources. After experimenting with the input parameters, we found that the best results were achieved by performing one phase self-calibration combining all spectral windows and scans with an infinite solution interval and by excluding the last execution. This step yielded improved rms, hence peak-to-rms S/N, by factors of ∼ 1.5 to ∼ 3.
Using the same parameters, one amplitude self-calibration slightly improved the rms from ∼ 3% up to ∼ 35% depending on the source, except for Sz 133 and Sz 91. As such, amplitude self-calibration was not applied for these two sources. The continuum columns in Table <ref> provide the achieved rms and peak-to-rms S/N from the primary-beam corrected images. Figure <ref> in Appendix <ref> shows the continuum images and the ellipse obtained by fitting a 2D Gaussian with the command. The only source that is marginally resolved in the continuum is V1094 Sco with major and minor axes of 4.1 ”× 2.6 ”. The source flux density (F_ 0.6mm) and associated uncertainty obtained via are also summarized in Table <ref>.
Table: Results on primary-beam corrected images and [C I] datacubes.

| Source | Cont. rms (mJy/beam) | Cont. S/N | Cont. rms, self-cal. (mJy/beam) | Cont. S/N, self-cal. | F_0.6mm (mJy) | [C I] rms (Jy/beam) | F_CI (Jy km/s) | v_c,lsr (km/s) | σ (km/s) |
|---|---|---|---|---|---|---|---|---|---|
| Sz 71 | 4.94 | 46 | 1.80 | 133 | 255.1±2.3 | 0.13 | <1.23 | | |
| RY Lup | 4.73 | 100 | 1.96 | 242 | 501.4±2.5 | 0.13 | 1.52±0.28 | 4.8±1.2 | 2.8±0.7 |
| J16000236 | 2.45 | 70 | 1.35 | 126 | 179.8±2.1 | 0.13 | <0.85 | | |
| Sz 133 | 2.57 | 42 | 1.75 | 62 | 115.8±2.1 | 0.14 | <0.59 | | |
| Sz 91 | 2.56 | 32 | 1.98 | 42 | 93.1±2.5 | 0.16 | 2.90±0.19 | 3.6±0.1 | 1.4±0.1 |
| Sz 98 | 4.31 | 69 | 2.24 | 135 | 337.3±2.4 | 0.13 | <1.77 | | |
| J16083070 | 3.76 | 71 | 2.05 | 133 | 281.3±2.4 | 0.13 | 2.47±0.12 | 5.2±0.1 | 2.5±0.1 |
| V1094 Sco | 7.97 | 96 | 2.78 | 280 | 1220.6±8.8 | 0.12 | 4.99±0.25 | 5.22±0.08 | 1.26±0.06 |
| Sz 111 | 3.63 | 87 | 1.84 | 173 | 339.1±2.8 | 0.14 | 2.45±0.13 | 4.18±0.07 | 1.06±0.05 |
| IM Lupa | 0.85 | 226 | 0.67 | 306 | 1574.0±34 | 0.04 | 11.3±0.5b | 4.57±0.07 | 1.5b |
aIM Lup is not part of our ACA survey and results reported here are from archival ALMA 12-m data, see text for more details.
bA Gaussian profile is not a good representation for the extracted velocity profile of IM Lup. Hence, the line flux (F_ CI) is obtained from straight integration and σ is calculated directly from the FWHM of the spectrum.
To image the [C I] line we first performed a continuum subtraction in the spectral window covering the transition. Next, we produced initial datacubes, cleaning down to the same 10 mJy/beam threshold as for the initial continuum images. In parallel, we applied to the continuum-subtracted measurement sets the phase and, when available, amplitude solutions obtained on the continuum. We then cleaned these self-calibrated data down to 3 times the rms of the continuum images.
To evaluate whether the [C I] emission is detected and the effect of self-calibration, we extract the non-deprojected spectra using v1.5 <cit.> with an outer radius equal to the major axis of the beam. We also compute moment zero (integrated intensity) maps with <cit.>, with a sigma clipping of two times the rms and within channels corresponding to velocities where [C I] emission is detected in the spectra (V_LSR = 4.5 ± 5 km/s). By comparing these products we find that the rms is essentially unchanged between the non-self-calibrated and the self-calibrated datacubes and is slightly larger than the requested one (see Table <ref>). For the sources with a [C I] detection (RY Lup, Sz 91, J16083070, V1094 Sco, and Sz 111) the line flux typically improves, but only by ∼ 5-10%. This negligible improvement in Band 8 ACA line data after self-calibration has also been noted by <cit.>. Nevertheless, we proceed with the self-calibrated datacubes and fit a 2D Gaussian to the moment zero maps with emission (Figure <ref>) to estimate the outermost radius for the extraction of the spectra. We find that for RY Lup, Sz 91, J16083070, and Sz 111 the [C I] emission is confined within the ACA primary beam, while for V1094 Sco the fit estimates major and minor axes of 8.1” × 4.2”. To cover most (> 90%) of the [C I] emission, we adopt 3.2” as the extraction radius for all sources except for V1094 Sco, for which we use a radius of 6.4”. We have checked that for V1094 Sco this radius encompasses all the emission within 2 times the rms in the moment zero map and tested that further increasing the extraction radius results in a significantly larger increase in the noise than in the line flux.
The non-deprojected extracted spectra are shown in Figure <ref>. Of the sources with a detection, the spectra from Sz 91, J16083070, and V1094 Sco show a double-peaked Keplerian profile, demonstrating that this line traces disk emission. A single Gaussian is a good representation of all the profiles (see Figure <ref>), and we have verified that for all sources, including Sz 91, J16083070, and V1094 Sco, a straight integration under the line gives the same flux as the one obtained from the Gaussian fit within the uncertainties quoted in Table <ref>, see the F_CI column. These uncertainties are obtained in a Monte Carlo fashion. First, we generate 1,000 spectra per source by randomizing the flux density at each velocity bin from a normal distribution with a standard deviation equal to the rms outside the line. Next, we fit a Gaussian to each randomly-generated spectrum and take as uncertainty the standard deviation of the 1,000 Gaussian fluxes. In the case of non-detections, we fit a first-order polynomial between -30 and -10 km/s and calculate the rms as the standard deviation of the data minus the best fit. Table <ref> reports a 3σ upper limit obtained from this rms assuming a Gaussian line profile with a line width of 1.3 km/s (the median value of the detections); this profile is shown in Figure <ref> with a cyan dashed line.
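A minimal sketch of this Monte Carlo flux-uncertainty estimate is given below (for illustration only; the velocity grid, line parameters, and rms are placeholders rather than the survey data, and the fitting function is a plain Gaussian as described in the text).

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def gaussian(v, amp, v0, sigma):
    return amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)

# Placeholder spectrum: velocity bins (km/s), flux density (Jy), and off-line rms (Jy).
v = np.arange(-10.0, 20.0, 0.3)
flux = gaussian(v, 1.0, 4.5, 1.3) + rng.normal(0.0, 0.13, v.size)
rms = 0.13

def line_flux(spectrum):
    popt, _ = curve_fit(gaussian, v, spectrum, p0=[spectrum.max(), 4.5, 1.3])
    amp, _, sigma = popt
    return amp * abs(sigma) * np.sqrt(2.0 * np.pi)   # integral of the fitted Gaussian

# Randomize each velocity bin with the off-line rms, refit, and take the scatter as the error.
fluxes = [line_flux(flux + rng.normal(0.0, rms, v.size)) for _ in range(1000)]
print(f"F = {line_flux(flux):.2f} +/- {np.std(fluxes):.2f} Jy km/s")
```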
IM Lup
We retrieved ALMA 12-m archival data that were first manually calibrated by the ALMA NAOJ with the CASA pipeline version 4.6.0. We use the more recent 6.5.0 version for subsequent processing of the only execution block available for this observation. First, we flagged all the channels where the [C I] line could be detected (51-188) as well as additional channels with apparent emission lines, and generated a continuum measurement set. Next, we self-calibrate the continuum combining spectral windows and scans to improve the S/N ratio. We performed three iterations of phase-only self-calibrations (intervals of 360, 240, and 160 s) and then one amplitude self-calibration. The reference antenna (DV16) was chosen from the log based on its data quality and position in the array. The self-calibrated continuum visibility was then imaged with the task using a Briggs robust parameter of 0.5 and an elliptical mask (2.3× 1.7 with PA = 145°) encompassing the emission. Table <ref> summarizes the improvement in the continuum S/N. We then apply the calibration tables to the original unflagged and spectrally unaveraged visibilities and split the [C I] spectral window for the following line imaging. We subtract the continuum and produce a preliminary datacube with Briggs robust = 0.5 and an elliptical mask that encloses the emission. In the preliminary line datacube, the [C I] emission shows a clear Keplerian pattern. As such, we construct a Keplerian mask to CLEAN again the continuum-subtracted visibilities. The Keplerian mask uses the disk inclination and position angle from a Gaussian fit to the continuum (49° and 145°, respectively), the mass of IM Lup (0.72 M_⊙, see Table <ref>), and an outer radius that is large enough to include all the [C I] emission seen in the preliminary datacube. The CLEANed spectral line cube achieves better image quality with smaller rms (Table <ref>) compared to the pipeline-generated, non-self-calibrated data. The self-calibrated continuum and moment zero maps are shown in the upper panels of Figure <ref>.
To characterize the continuum and emission we apply similar steps to those described for our ACA sample. The continuum flux is obtained by fitting a 2D Gaussian with the command. The spectrum is extracted using a maximum radius of 3”, which maximizes the flux and encompasses the emission from the moment zero map. Because a Gaussian profile is not a good representation of the extracted spectrum (see lower panel of Figure <ref>), the flux reported in Table <ref> is from direct integration under the emission line and the line width is also measured directly on the extracted profile. We also checked that the flux obtained via integration under the line is the same within the quoted uncertainty as that obtained from the stacked deprojected spectrum using a Gaussian fit.
§ RESULTS AND DISCUSSION
All of the 10 Lupus disks investigated here are firmly detected in Band 8 in the continuum with S/N ranging from 42 to 280 (see Table <ref>). The dust continuum emission is confined within the large ACA beam (3.2 ”× 2 ”) for all sources except for V1094 Sco for which we report a marginal extension (2D Gaussian of 4.1 ”× 2.6 ”). This is in line with previous 1 mm ALMA observations that find its continuum emission extending out to 300 au from the star, a radial extension only comparable to IM Lup and ∼ 5 times larger than other disks in Lupus <cit.>. The line is detected in 6 out of 10 disks with integrated fluxes that range from ∼ 5 (RY Lup) to ∼ 23 (IM Lup) times the reported uncertainties, see Table <ref>. First, we discuss empirical evidence that the line in our Lupus sample traces the gaseous disk surface (Sect. <ref>). Next, we compare the and CO isotopologue luminosities to the RGH22 theoretical predictions and find no need to invoke significant CO or carbon depletion for the large disks discussed in this paper (Sect. <ref>). Finally, we examine these findings in the context of published gas-to-dust and [C/H] ratios (Sect. <ref>).
§.§ emission as a probe of the gas disk surface
The first piece of evidence favoring disk emission for the line comes from the velocity centroids and profiles which are resolved even at the ACA spectral resolution of ∼ 0.3 km/s. The line centroid (v_ c, lsr in Table <ref>) of each source falls within one standard deviation of the median of the stars in its Lupus sub-group (see Table <ref> and ).
For RY Lup, Sz 111, and IM Lup, whose stellar radial velocities have been precisely measured via high-resolution optical spectra <cit.>, the centroids are within 2σ of the reported values.
None of the Lupus profiles exhibit signs of outflowing or infalling material, unlike the profiles of FM Cha and WW Cha observed in <cit.>. This difference is likely due to the fact that the Lupus sources have a lower visual extinction (A_V < 2) and are more evolved than those selected by <cit.>.
The median FWHM of 3 km/s suggests broadening beyond thermal effects. If we consider the temperature corresponding to the upper energy level of the transition (23.6 K), the line width would only be 0.3 km/s. However, a FWHM of 3 km/s is consistent with Keplerian broadening around a solar-mass star for a characteristic emitting radius of 100 au, which is the expected radius according to gas disk models (e.g., Fig. 4 in ). In fact, the profiles from IM Lup and J16083070, and to a lesser extent, Sz 91 and V1094 Sco, show double-peaked profiles as expected from gas in a Keplerian disk. In the case of IM Lup, we have also the advantage of deeper observations of its CO isotopologues through the ALMA MAPS program <cit.>. The lower panel of Figure <ref> demonstrates the remarkable similarity between the velocity profile from the ^13CO (2-1) line, which is tracing the disk surface <cit.>, and the line. This comparison suggests that the line probes the surface of a Keplerian disk.
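As a quick check of these numbers, the thermal and Keplerian line widths quoted above can be computed directly. The short sketch below is illustrative only; the atomic mass of 12 amu for carbon and the constants are our inputs, not values taken from this work.

import numpy as np

k_B = 1.380649e-23   # J/K
m_H = 1.6735575e-27  # kg
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
au = 1.496e11        # m

T = 23.6                                                       # K, upper-level temperature
m_C = 12.0 * m_H                                               # atomic carbon mass (approx.)
fwhm_thermal = np.sqrt(8 * np.log(2) * k_B * T / m_C) / 1e3    # ~0.3 km/s
v_kep = np.sqrt(G * M_sun / (100 * au)) / 1e3                  # ~3 km/s at 100 au, 1 M_sun
print(f"thermal FWHM ~ {fwhm_thermal:.2f} km/s, Keplerian speed ~ {v_kep:.1f} km/s")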
To further explore this inference, we search for correlations between line detections and upper limits with other star/disk properties collected in Tables <ref> and <ref>, scaling all values to a reference distance of 160 pc. The upper panels of Figure <ref> show relations with quantities tracing the dust continuum emission (F_ 0.6mm and F_ 1.3mm), the dust radial extent (R_ dust), and the stellar mass accretion rate (Ṁ_ acc). The lower panels summarize the relations with quantities probing the gas content (F_ C^18O, F_ ^13CO, and F_ ^12CO line fluxes for the 2-1 transition) and gas radial extent (R_ CO). Given the significant number of non-detections in our Lupus sample and the large errorbars for some of the quantities (e.g., R_ CO), we use the routine v0.2.5[At the time of submission, there was an error in that was patched locally. This edit can be seen at https://github.com/privong/pymccorrelation/compare/pymccorr...Bennett-Skinner:pymccorrelation:patch-1.] <cit.> to carry out the generalized non-parametric Kendall's τ test and investigate whether the aforementioned stellar/disk properties are correlated with the emission.
The Kendall’s τ for uncensored data is calculated from two matrices, a and b, where a_ij is -1 if X_i > X_j, 0 (or uncertain) if X_i = X_j, and 1 if X_i < X_j, where X_i is the ith value of the independent variable; b_ij is calculated similarly for the dependent variable. To include non-detections, the routine adopts the method of <cit.>: if X_j is an upper limit, it is considered less than X_i (a_ij = -1) only when X_i > X_j and X_i is a detection or lower limit; see <cit.> for a full description of the methodology. Measurement uncertainties are accounted for with a Monte Carlo approach that randomly draws every data point independently from a Gaussian with a mean and standard deviation of its reported value and error <cit.>. For each pair of variables, we ran 10,000 tests and report in Table <ref> the
median value of Kendall's τ, a value running from -1 to 1 indicating the direction of the correlation, and p, the percent probability that two quantities are uncorrelated.
The frequency distribution of Kendall's τ from these tests is not necessarily Gaussian, so the median value may differ from the value obtained when not accounting for the uncertainty in the data, hence our choice of also reporting the 16th and 84th percentile values of τ. The large uncertainties in the value of τ indicate the need for further observations; however, we stress that in every instance where the median value of τ indicates significance, barring the already marginal F_ C^18O correlation, the 16th and 84th percentile values do as well.
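A minimal sketch of the censored Kendall's τ statistic described above is given below. It follows the pairwise a_ij/b_ij construction from the text; the final normalization and the handling of ties are our assumptions, and the published analysis instead relies on the routine cited above.

import numpy as np

def censored_kendall_tau(x, y, x_upperlim, y_upperlim):
    # x, y: 1-D arrays of measurements; *_upperlim: boolean arrays, True where the value is an upper limit.
    n = len(x)

    def pair_sign(v, lim, i, j):
        # +1 if v[i] < v[j], -1 if v[i] > v[j], 0 if tied or uncertain.
        if v[i] == v[j]:
            return 0
        if v[i] > v[j]:
            # An upper limit on v[j] is still consistent with v[j] < v[i];
            # an upper limit on v[i] makes the ordering uncertain.
            return 0 if lim[i] else -1
        return 0 if lim[j] else 1

    s, na, nb = 0.0, 0.0, 0.0
    for i in range(n):
        for j in range(i + 1, n):
            a = pair_sign(x, x_upperlim, i, j)
            b = pair_sign(y, y_upperlim, i, j)
            s += a * b
            na += a * a
            nb += b * b
    return s / np.sqrt(na * nb)   # normalization assumed, tau-b-like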
Summary of the Kendall's τ tests (correlation of each quantity with F_ CI, both scaled to 160 pc).
Quantity | Lupus: τ(16th,84th)  p(16th,84th) | Lupus+lit.: τ(16th,84th)  p(16th,84th)
F_ 0.6mm 0.39(0.37,0.42) 12(9,13) - -
F_ 1.3mm 0.19(0.17,0.23) 45(35,51) 0.18(0.16,0.21) 32(26,37)
R_ dust 0.30(0.14,0.44) 23(8,52) 0.25(0.23,0.28) 17(13,22)
Ṁ_ acc 0.26(0.20,0.29) 34(28,45) 0.03(0,0.05) 88(80,96)
F_ C^18O 0.52(0.44,0.59) 3.7(1.8,7.6) 0.43(0.30,0.50) 2.1(0.7,11)
F_ ^13CO 0.62(0.57,0.67) 1.2(0.7,2.2) 0.51(0.47,0.54) 0.6(0.4,1.2)
F_ ^12CO 0.56(0.51,0.58) 2.4(2.0,3.9) 0.30(0.28,0.33) 10(7.5,13)
R_ CO 0.64(0.51,0.75) 1.0(0.3,3.9) 0.44(0.41,0.47) 2.2(1.5,3.2)
Median values, 16th, and 84th percentiles for τ and p. τ gives the direction of the correlation (positive for τ > 0) while p is the percent probability that two quantities are uncorrelated. Entries with p less than 5% indicate a likely correlation, hence are in boldface.
Restricting ourselves to the Lupus sample, we find that the emission is likely positively correlated with the gas outer radius (R_ CO) and with the ^13CO, C^18O, and ^12CO (2-1) emission (F_ ^13CO, F_ C^18O, and F_ ^12CO), hence similarly probing the disk surface. However, only the first three correlations persist when adding to our Lupus sample 6 more T Tauri sources from different star-forming regions that have detections likely tracing a disk (see Appendix <ref> for details on these sources, gray symbols in Figure <ref>, and the last columns of Table <ref>). The absence of a correlation with the ^12CO emission in the extended sample might be due to the main CO isotopologue being more affected by cloud absorption ( and Ansdell priv. comm.) and sometimes tracing extended structures unrelated to the circumstellar disk, e.g. envelopes and outflows <cit.>.
C^18O exhibits a weaker correlation with than ^13CO, likely because it probes gas closer to the disk midplane <cit.>. We also note that the F_ CI and F_ ^13CO follow very closely a one-to-one linear relation. Indeed, when
using <cit.> to account for upper limits and uncertainties on the
Lupus+literature sample[We exclude Sz 98 (ID 6) since it is not detected in either of the lines.] we find F_ CI= 1.07(± 0.33) × F_ ^13CO + 0.26(± 0.89) where fluxes are in Jy km/s (black line in the F_ CI-F_ ^13CO panel of Figure <ref>).
Finally, the lack of correlations with dust properties, in the Lupus sample as well as in the extended sample, demonstrates that the emission is not affected by the amount or radial extent of mm-sized grains which mostly trace icy pebbles in the disk midplane <cit.>. In conclusion, empirical evidence from the profiles and correlations with other disk tracers strongly suggest that the emission probes gas at the disk surface. We will further test this inference in the next sub-sections by comparing our observations more directly to theoretical predictions.
§.§ Comparison with the RGH22 thermochemical disk models
Recently, RGH22 carried out a grid of thermochemical disk models adopting ISM carbon and oxygen elemental abundances as input parameters. In addition to isotopologue-selective photodissociation, they added three-phase grain-surface chemistry with CO conversion into CO_2 ice being a major reaction and adopted vertical hydrostatic equilibrium
to derive a self-consistent gas density and temperature. The CO conversion into CO_2 ice shifts the CO snowline vertically away from the midplane, thus reducing the amount of CO on the disk surface.
Within a factor of a few, their predicted C^18O(3-2) luminosities match observations of Lupus and Chamaeleon I disks detected in this line. This result led RGH22 to argue that C^18O is a good tracer of the gas disk mass and that no severe elemental or CO depletion by other chemical or dynamical processes is necessary to reconcile theoretical predictions with observations.
Here, we take the comparison a step further and test whether these same models can explain the emission from three CO isotopologues as well as the line which is the focus of this study.
First, we use the RGH22 model with a disk outer radius of 300 au, a minimum mass solar nebula (MMSN) gas of 0.01 M_⊙, and Δ_ gd of 100 (see their Table 1) to compare the emitting surfaces of various carbon species. Figure <ref> shows that the line probes the uppermost surface of the disk down to z/r∼ 0.2, thus overlapping with the ^12CO and ^13CO (2-1) emitting surfaces, while the C^18O (2-1) emission is concentrated at lower altitudes (z/r∼ 0.1). We note that the predicted CO emitting surfaces agree with those empirically derived from the ALMA MAPS survey: in five disks observed at high sensitivity and spatial resolution, ^12CO (2-1) emission is found to be mostly at z/r >0.3 while ^13CO and C^18O (2-1) lie below, at z/r ≈ 0.1-0.2 <cit.>. In the context of this study, it is worth mentioning that the column density of carbon is set by photoionization of C into C^+ and photodissociation of CO to C and, in agreement with <cit.>, the line is found to be mostly optically thin. The correlation among fluxes reported in Sect. <ref> could be attributed to the overlapping emitting surfaces between the line and the ^12CO, ^13CO, and, to a lesser extent, C^18O (2-1) lines.
Next, we carry out a direct comparison of predicted and observed and CO luminosities vs. dust disk masses (M_ dust), see Figure <ref>. The grid models for an outer disk radius of 300 au (squares) are the same as presented in RGH22 and cover a large range in disk mass (from 3 × 10^-4 to 0.1 M_⊙) and three gas-to-dust mass ratios (Δ_ gd =10, 100, 1000).
To test whether the adopted outer radial cutoff captures most of the emission, we also run 4 models for Δ_ gd = 100 where we extend the radial grid to 600 au (grey diamonds connected by dashed lines in Figure <ref>). This test demonstrates that the ^13CO and C^18O (2-1) emission is confined within 300 au while only ∼ 50% of emission is contained within this radius. Note that in all models the primary input parameters are the dust surface density, the dust mass, and the gas-to-dust mass ratio while the gas structure is computed by solving for vertical hydrostatic pressure equilibrium. However, we compare predicted line luminosities vs. dust disk masses because the latter are constrained by observations. In carrying out this comparison, we also took into account that all models assume
a face-on disk inclination and thus maximum emission for optically thick lines. Since the ^12CO and ^13CO lines are expected to be optically thick we divide the observed fluxes by the cosine of the measured disk inclination before converting them into luminosities. The dust disk mass for the Lupus+literature sample is calculated from 1.3 mm fluxes (Tables <ref> and <ref>) assuming optically thin emission <cit.>, a dust temperature of 20 K, and a dust opacity at 1.3 mm of 1.5 cm^2/g, instead of the 2.3 cm^2/g typically adopted in observational papers <cit.>, to match the RGH22 dust properties. In summary, the panels shown in Figure <ref> constrain the dust mass (through the mm flux density), gas mass (through the C^18O line when detected or the ^13CO line otherwise), and carbon content (through the flux) of a disk. Furthermore, the ^12CO (2-1) and fluxes are sensitive to the gas outer radius.
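For reference, the optically thin dust-mass conversion used here can be written in a few lines of Python. This is an illustrative sketch under the stated assumptions (T_dust = 20 K and κ_1.3mm = 1.5 cm^2/g to match RGH22); the example flux and distance are arbitrary.

import numpy as np

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16   # cgs
pc, M_sun, Jy = 3.086e18, 1.989e33, 1.0e-23

def planck(nu, T):
    return 2 * h * nu**3 / c**2 / (np.exp(h * nu / (k_B * T)) - 1.0)

def dust_mass(F_mJy, d_pc, nu=230e9, T_dust=20.0, kappa=1.5):
    # Optically thin dust mass in M_sun: M = F_nu d^2 / (kappa_nu B_nu(T_dust))
    F_cgs = F_mJy * 1e-3 * Jy
    d_cm = d_pc * pc
    return F_cgs * d_cm**2 / (kappa * planck(nu, T_dust)) / M_sun

# Example: a 100 mJy disk at 160 pc gives a few 1e-4 M_sun of dust
print(dust_mass(100.0, 160.0))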
We start by commenting on the disk mass and Δ_ gd. The Lupus+literature sample covers more than an order of magnitude in M_ dust. All disks with a C^18O(2-1) detection lie above the RGH22 Δ_ gd=10 track, half of them are actually above Δ_ gd=100 (bottom right panel in Figure <ref>). For sources with a C^18O upper limit, perhaps indicative of a low gas mass, we can use the ^13CO luminosities (bottom left panel in Figure <ref>) to gauge their gas content.
With the exception of Sz 98 (ID 6), all the sources are at or above the Δ_ gd=10 track, with the Lupus disks being closer to or above Δ_ gd=100.
The ^13CO upper limit from Sz 98 is a factor of a few below the Δ_ gd=10 track, hence this disk might have experienced significant (more than a factor of 10) gas or CO depletion. Deeper observations of the rare CO isotopologues as well as other gas mass tracers (e.g., N_2H^+ ) would be useful to pin down the extent and origin of this depletion. Among the literature sources, DL Tau is the only one with an C^18O upper limit and its ^13CO (2-1) and fluxes point to a depletion in gas mass (or carbon) of a factor of 10. Still, its gas mass is > 0.003 M_⊙ which is about three times the mass of Jupiter. In summary, based on the data at hand and the RGH22 models, all of the disks investigated here, except Sz 98, have more than enough mass to form a Jupiter mass planet. Some of them, like J16083070 (ID 7), V1094 Sco (ID 8), and IM Lup (ID 10), have disks as massive as ∼0.1 M_⊙, i.e. ten times the MMSN. This result agrees with and expands upon what was inferred in RGH22 where the comparison was restricted to the Lupus and Chamaeleon I disks with C^18O (3-2) detections.
The upper two panels of Figure <ref> cover lines that trace the uppermost disk surface and are most sensitive to the gas disk outer radius. J16083070 (ID 7), V1094 Sco (ID 8), IM Lup (ID 10), and DM Tau are the largest disks and, indeed, among the strongest emitters in the ^12CO (2-1) and lines. On the opposite end, TW Hya and DR Tau have the smallest CO gas disk radii (∼ 180 au) and the lowest luminosities, a factor of ∼ 5 below the Δ_ gd=100 track for a gas disk radius of 300 au. Even considering their smaller gas disk radii, a depletion in carbon of a factor of a few may be needed to explain their low luminosities. A similar conclusion has been reached in RGH22 for TW Hya with a disk model tailored to this source that can also reproduce the ^13CO (2-1) flux with Δ_ gd=100, see their Fig. 8. This highlights the importance of target-specific modeling, see also Deng et al. 2023 in press (arXiv:4990967). The luminosities from RY Lup (ID 2), J16000236 (ID 3), Sz 133 (ID 4), and DO Tau also indicate a factor of a few to several depletion in carbon: for ID 3 and 4 there could also be an overall factor of a few depletion in CO or gas based on their C^18O fluxes (see Figure <ref> bottom right panel). In contrast, for Sz 91 (ID 5), J16083070 (ID 7), V1094Sco (ID 8), Sz 111 (ID 9), IM Lup (ID 10), and DM Tau the lines investigated here do not indicate any depletion in gas, CO, or carbon and, within a factor of a few, are consistent with the Δ_ gd≥ 100 tracks.
At this point it is useful to comment on which star and disk parameters might most affect the line, hence our inference of negligible carbon depletion. As mentioned in Sect. <ref>, atomic carbon forms above the CO photodissociation layer and below the C ionization front. In that layer, UV attenuation is determined by a combination of carbon absorption and absorption by dust. Indeed, we can see in Figure <ref> that, for a fixed gas mass, changing the gas/dust ratio by a factor of 100 changes the luminosity by a factor of ∼ 10. This means that the amount of dust, along with its degree of settling, affects the abundance of carbon at the disk surface. In contrast, there is only a modest dependence on gas mass: following one of the Δ_ gd tracks in Figure <ref>, one sees that changing the gas mass by a factor of 100 changes the luminosity only by a factor of a few. Results are also not sensitive to different cosmic-ray ionization rates <cit.> as the cosmic ray ionization rate is much lower than UV photorates at the surface. In addition, the emission is also not sensitive to the overall UV flux because it always arises from an approximately fixed column corresponding to a few UV optical depths <cit.>. This is why the line has been chosen in this and previous studies as a suitable probe for carbon depletion.
§.§ Comparison with results from the literature
Of the 16 Lupus+literature disks discussed in this paper, 11 have previously reported gas and dust disk masses, hence Δ_ gd, while for 5 there are literature constraints on their C/H elemental abundance ratio.
We start by discussing the first group of 11 disks where gas mass estimates have been obtained by matching observed to predicted CO isotopologue fluxes: a) for 7 sources using a grid of physical-chemical disk models obtained with DALI <cit.>, see <cit.>; b) for Sz 71 <cit.> and DM Tau using the grid of parametric disk models by <cit.>; c) for TW Hya and IM Lup (ID 10) by generating individual disk models <cit.>. It is worth mentioning that among these approaches only a) includes isotope-selective dissociation which, according to <cit.>, can decrease the optically thin emission of C^18O by an order of magnitude. In addition, approaches a), b), and the individual modeling of TW Hya by <cit.> do not include CO conversion to CO_2 ice which, according to <cit.> and <cit.>, can further decrease the C^18O flux by a factor of a few. Therefore, it is not surprising that the literature C^18O model fluxes are larger than observed and significant gas or CO depletion had to be invoked to reconcile models with observations. For instance, ID 2, 3, 5, 7, and 9 have literature Δ_ gd∼ 3-10 <cit.> while according to the RGH22 grid only ID 3 lies clearly below the Δ_ gd = 100 track and only by a factor of a few. Even lower Δ_ gd (≤ 1) have been reported for ID 1, 4, 6, 10, and TW Hya <cit.>. Among this group, only ID 6 (Sz 98) could be depleted according to RGH22 but, given the current ^13CO upper limit, only by a factor slightly larger than ∼ 10, significantly less than what is reported in the literature.
The most discrepant result is that for IM Lup (ID 10), a highly accreting star surrounded by the largest gaseous disk in Lupus. <cit.>
used RADMC3D <cit.> to fit the spectral energy distribution of IM Lup and constrain the disk structure, including the gas and dust density and dust temperature profiles. Next, they ran the chemical code RAC2D <cit.> for 1 Myr to obtain the gas temperature and chemical abundances and finally ran RADMC3D again to obtain ^13CO and C^18O (2-1) and (1-0) cubes to be compared with the MAPS ALMA datacubes <cit.>.
<cit.> can only reproduce the CO column density radial profile for IM Lup when reducing the CO gas abundance by two orders of magnitude with respect to the ISM value of ∼ 10^-4. However, as mentioned in <cit.>, such a large CO depletion cannot be reached for this young (∼ 1 Myr, ) disk even when combining disk chemical processes with turbulent mixing and sequestration of CO ice in the disk midplane <cit.>. We want to emphasize that, based on the RGH22 grid, a significant depletion of CO is not required to explain the integrated ^13CO and C^18O fluxes of IM Lup. Rather, the physical and chemical processes that are included in this grid of models (e.g., freeze-out, selective dissociation, CO conversion into CO_2 ice, and vertical hydrostatic equilibrium) are sufficient to reproduce the CO isotopologue fluxes as well as the high flux (Figure <ref>). According to these models, the disk of IM Lup can have an ISM gas-to-dust ratio of 100 and is more massive than the MMSN. Interestingly, a similarly high gas disk mass can be independently estimated from the right panel of Figure 7 in <cit.> without invoking any extra CO depletion beyond freeze-out and selective photodissociation. It is worth re-stating that RAC2D does not include isotope-selective photodissociation. In addition, it was used in <cit.> mostly to obtain a stable temperature profile and, when varying the CO gas abundance, the chemistry was not rerun. On the other hand, the comparison here is restricted to integrated line fluxes. Dedicated self-consistent gas and dust models of IM Lup would be extremely valuable to evaluate the extent of any radial CO depletion.
Fewer T Tauri stars have been observed in the line than in the main CO isotopologues and, before this study, only 6 sources had a reported detection likely arising from the disk, see Table <ref> in Appendix <ref>. Among these literature sources, the and CO isotopologue emission from DL Tau, DM Tau, DO Tau, DR Tau, and TW Hya were modeled using the DALI code <cit.>. These works report carbon depletion factors with respect to ISM values of ∼ 160, 5, 15, 5, and 100, respectively.
Caution should be taken for the Taurus sources, as flux losses of a factor of several, and up to an order of magnitude, affect the ^13CO and C^18O data used in <cit.> (Sturm priv. comm.). This is why we adopted literature values here (see Table <ref>). Based on these values, we find that the generic RGH22 models do not require orders-of-magnitude depletion in carbon. Even for DL Tau and TW Hya the RGH22 grid suggests carbon depletion much lower than 100, with factors of just ten and a few, respectively. These more modest depletions can be easily accounted for through chemical <cit.> and/or dynamical processes <cit.>.
§ SUMMARY AND OUTLOOK
We have acquired and analyzed ALMA/ACA Band 8 data covering the line at 492.161 GHz for 9 large gaseous disks (R_ CO≳ 200 au) around T Tauri stars in the ∼ 1-3 Myr-old Lupus star-forming region. We have also retrieved and analyzed archival ALMA/12-m Band 8 data for IM Lup whose disk has a CO radius of ∼ 400 au, the largest in the region. Our Lupus sample covers a factor of ∼ 20 in 1.3 mm flux density, hence likely dust disk mass. Finally, to place our Lupus sample into context, we have assembled literature source properties for T Tauri stars with a detection likely arising from a disk, an additional 6 sources. Our results can be summarized as follows:
* Band 8 continuum emission is detected towards all Lupus disks and it is confined within the large ACA beam (3.2 ”× 2 ”) for all sources except for V1094 Sco which is marginally resolved. The continuum emission from IM Lup is clearly resolved with the smaller beam (0.37 ”× 0.32 ”) of the archival 12-m data. These results are in line with already published 1 mm continuum observations and analysis.
* The line is detected in 6 out of 10 Lupus sources with centroids and FWHMs consistent with outer gas (≳ 100 au) in a Keplerian disk: the profiles from IM Lup and J16083070 are clearly double peaked. Thus, our work doubles the sample of detections from T Tauri disks. All six detections are from large CO disks, R_ CO≳ 250 au.
* The emission is not correlated with the dust emission (F_ 0.6 mm and F_ 1.3 mm) or its radial extent (R_ dust). Instead, it is correlated with the gas radial extent (R_ CO), the ^12CO, C^18O, and ^13CO emission, most tightly with the ^13CO (2-1) flux. The correlations with R_ CO and the rare CO isotopologue fluxes persist when adding to the Lupus sample the six additional T Tauri stars with detections from the literature.
* When comparing the inferred and the ^12CO, ^13CO, and C^18O (2-1) luminosities to those predicted by RGH22, we find no evidence for significant gas, CO, or carbon depletion in our Lupus sample except for Sz 98. This disk may be depleted in gas or CO by a factor ≳ 10; deeper observations are needed to place firm constraints. Importantly, the integrated line luminosities from IM Lup, a highly accreting star with the largest gaseous disk in the region, are fully consistent with a massive gaseous disk (∼ 0.1 M_⊙) without any CO or carbon depletion beyond what is set by freeze-out, CO conversion into CO_2 ice, and isotope-selective photodissociation.
Our conclusion applies to all literature sources with detections, including TW Hya, with the exception of DL Tau.
For DL Tau, it may be necessary to consider a depletion (or gas or CO or carbon) of up to a factor of 10.
In contrast to the conclusions drawn above, several past works have claimed large carbon and/or CO depletion in disks around T Tauri stars <cit.>. Specifically for IM Lup, a reduction in CO of a factor of 100 has been reported to explain its column density radial profile <cit.>.
Some of these inconsistencies appear to arise from inadequate millimeter observations, which are either too shallow or lack the necessary short baselines to detect the entire flux emitted by these large disks. This issue is exemplified by the case of V1094 Sco (Sect. <ref>) and the Taurus literature sources discussed in this paper (Sect. <ref>). Additionally, we have speculated that some of the discrepancies may stem from missing physics in the chemical models used to interpret the data (e.g., isotope-selective dissociation and CO conversion to CO_2 ice, see also and ), as well as a lack of self-consistent dust and gas modeling.
Efficient conversion of CO into CO_2 ice could be investigated via JWST/NIRSpec and MIRI-MRS spectroscopy of selected edge-on disks. Along with retrieving the relative column densities of CO and CO_2 ice, the shape of the CO_2 absorption features at ∼ 4.2 and 15 μm <cit.> may indicate formation on a water-ice-coated grain, as predicted by the RGH22 models.
Detailed benchmark tests should also be carried out to resolve any large discrepancies between model predictions. Additionally, dedicated self-consistent gas and dust models should be developed for disks with spatially resolved CO isotopologue profiles in order to evaluate the degree of any radial CO depletion.
Meanwhile, our analysis, which relies on integrated line fluxes,
indicates that large Myr-old disks may conform to the straightforward expectation that they are not substantially depleted in gas, CO, or carbon.
The authors thank N. Kurtovic, F. Long, J. A. Sturm, and S. van Terwisga for information shared on specific sources which were not readily available from published papers. The authors also thank the AGE-PRO calibration team for sharing the CO isotopologue fluxes of Sz 71 and V1094 Sco in advance of publication. I.P. thanks the NAASC Staff, in particular Sarah Wood, for help with the initial ACA data reduction. I.P., D.D., and U.G. acknowledge support from the NASA/XRP research grant 80NSSC20K0273. Support for M.R.’s research was provided by NASA’s Planetary Science Division Research Program, through ISFM work package ‘The Production of Astrobiologically Important Organics during Early Planetary System Formation and Evolution’ at NASA Ames Research Center.
This paper makes use of the following ALMA data: 2019.1.00927.S and 2015.1.01137.S. ALMA is a partnership of ESO (representing its member states), NSF (USA), and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ.
ALMA(ACA)
AstroPy <cit.>, CASA <cit.>, matplotlib <cit.>, Scipy (http://www.scipy.org), pymccorrelation <cit.>, specutils <cit.>
§ ALMA OBSERVING LOG
Our Band 8 ACA proposal (2019.1.00927.S, PI: I. Pascucci) was accepted in July 2019 with a priority grade of B. Unfortunately, the COVID-19 pandemic and subsequent shutdown of the ALMA facility prevented achieving the requested sensitivity. Despite these challenges, we are grateful for the dedicated efforts of the ALMA observatory, which enabled acquiring valuable data over the span of ∼ 1.5 years. Table <ref> summarizes the number of antennas and integration time per observing block. Although all of the targets were observed in each observing block, the total on-source integration times are not identical and vary from 37.30 min for Sz 91 to 56.11 min for V1094 Sco. The other on-source integration times are as follows: 50.74 min for Sz 71; 48.22 min for RY Lup; 51.41 min for J16000236; 45.70 min for Sz 133; 47.71 min for Sz 98; 45.70 min for J16083070; and 42.17 min for Sz111.
ALMA Observing Log
Execution Block (UTC Time) | N_ant | Calibrators | Integration Time (min:sec)
2020-2-28 9:42:07 10 J1604-4441, J1610-3958, J1924-2914 42:41
2021-6-13 3:47:11 8 J1514-4748, J1604-4441, J1626-2951, J1924-2914 10:05
2021-7-01 1:21:08 8 J1514-4748, J1517-2422, J1604-4441 43:21
2021-7-04 23:59:22 9 J1337-1257, J1514-4748, J1604-4441 42:21
2021-7-05 2:15:02 9 J1604-4441, J1650-5044, J1924-2914 42:21
2021-7-05 23:14:04 9 J1337-1257, J1514-4748, J1604-4441 42:21
2021-7-08 2:30:07 8 J1604-4441, J1650-5044, J1924-2914 43:21
2021-7-09 00:48:46 9 J1514-4748, J1517-2422, J1604-4441 43:21
2021-7-09 2:57:34 9 J1604-4441, J1650-5044, J1924-2914 42:21
2021-7-10 23:27:37 10 J1337-1257, J1514-4748, J1604-4441 42:21
2021-7-11 1:43:52 10 J1604-4441, J1610-3958, J1924-2914 42:21
2021-8-10 1:01:01 8 J1514-4748, J1604-4441, J1924-2914 43:51
2021-8-21 00:51:23 8 J1514-4748, J1604-4441, J1924-2914 43:21
All targets were observed in each execution block, but the on-source integration times differ; see the main text.
§ CONTINUUM EMISSION FOR THE ACA LUPUS SAMPLE
A gallery of the self-calibrated Band 8 continuum images for our ACA Lupus sample is provided in Figure <ref>. The best-fit 2D Gaussian is also shown as a black ellipse in each panel. Among this sample the emission from V1094 Sco is the brightest and most spatially extended.
§ ADDITIONAL T TAURI DISKS WITH DETECTIONS
We have searched the literature for additional T Tauri stars with detections. We excluded Herbig Ae/Be stars because they have a much larger FUV luminosity than T Tauri stars and FUV photons drive the dissociation of CO into atomic and ionized carbon, hence the detectable level of emission. We also excluded FM Cha, WW Cha, and FZ Tau because their profiles are not dominated by disk emission but rather by the cloud or an outflow, see <cit.> for details. Our search led to 6 additional T Tauri stars with detections likely arising from the disk <cit.>. Object properties used in this study are summarized in Table <ref>. In the following, we provide a few more details about the collected data.
In relation to dust and gas disk radii, we have preferred those containing 90% of the continuum and of the ^12CO (2-1) flux for consistency with the Lupus sample (see Table <ref>). However, a few systems do not have such estimates. The ^12CO emission from AS205 N is very complex and, by extending to the southern component, likely traces tidally-stripped gas (see Fig. 5 in ), hence a gas disk radius cannot be determined. The only observations available in the ALMA archive for ^13CO and C^18O are from <cit.> but are of too low angular resolution to obtain a proper estimate. For DO Tau the only R_ dust available in the literature is the one encompassing 68% of the millimeter flux <cit.> while R_ CO is the radius at half-maximum intensity <cit.>. Finally, the only gas disk radius available for DR Tau is the one inferred from modeling the ^13CO and C^18O emission <cit.>, hence it likely represents a lower limit for R_CO.
Although CO isotopologue fluxes for DL Tau, DO Tau, and DR Tau are also available from <cit.>, there are concerns that these measurements may underestimate the total flux by a significant factor (Sturm priv. comm.). In light of this, we have opted to utilize literature fluxes from observations that incorporate short baselines for our study.
Literature T Tauri stars with disk emission.
ID | Source | Region | Dist (pc) | M_* (M_⊙) | LogṀ_acc (M_⊙/yr) | F_[CI] (Jy km/s) | F_1.3mm (mJy) | R_dust (”) | i (deg) | F_C^18O (Jy km/s) | F_^13CO (Jy km/s) | F_^12CO (Jy km/s) | R_CO (”) | Ref
AS AS205 N Ophiuchus 142 0.9 -7.4 3.42 377 0.35 15 0.34 1.86 19.23 – 1,2,3,4
DL DL Tau Taurus 159.94 0.7 -7.2 1.36 170.72 0.91 45 <0.15 0.43 7.05 3.75 1,2,5,6
DM DM Tau Taurus 144.05 0.3 -8.0 7.35 89.4 1.23 36 1.12 6.84 15.21 6.04 1,7,5,8,9
DO DO Tau Taurus 141 0.5 -7.6 1.18 123.76 0.20a 37 0.26 1.18 63.7a 1.45a 1,2,10,6,8,9,11
DR DR Tau Taurus 141 0.6 -6.7 0.92 127.18 0.28 5.4 0.62 5.73 37.9b 1.26b 1,2,11,12,13
TW TW Hya TW Hydra 60 0.6 -8.7 4.08 580 0.99 5 0.82 2.72 41.8 3.07 14,15,16,5
F_ [CI] is the flux for the line while F_ C^18O, F_ ^13CO, and F_ ^12CO are for the (2-1) transition. Unless noted below, R_ dust is the dust disk radius encompassing 90% of the 1.3 mm flux density while R_ CO is the gas disk radius enclosing 90% of the ^12CO (2-1) flux. The ^12CO emission from AS205 N is complex, hence a gas disk radius cannot be estimated, see Appendix <ref> for more info.
aFor DO Tau R_ dust encompasses 68% of the mm flux density while R_ gas is from modeling the C^18O and ^13CO emission. The quoted F_ ^12CO flux is from SMA data with a beam size 1.2”×0.9” <cit.>: It is treated here as an upper limit because it likely includes outflow emission <cit.>.
bThe F_ ^12CO flux for DR Tau includes larger scale non-Keplerian emission <cit.>, hence it is treated as an upper limit to the disk emission in our analysis. R_ CO is from modeling the C^18O and ^13CO emission.
1. <cit.>; 2. <cit.>; 3. <cit.>; 4. <cit.>;
5. <cit.>; 6. <cit.>; 7. <cit.>;
8. <cit.>; 9. <cit.>; 10. <cit.>;
11. <cit.>; 12. <cit.>; 13. <cit.>;
14. <cit.>; 15. <cit.>; 16. <cit.>
aasjournal
|
http://arxiv.org/abs/2307.01804v1
|
20230704161759
|
Capturing Local Temperature Evolution during Additive Manufacturing through Fourier Neural Operators
|
[
"Jiangce Chen",
"Wenzhuo Xu",
"Martha Baldwin",
"Björn Nijhuis",
"Ton van den Boogaard",
"Noelia Grande Gutiérrez",
"Sneha Prabha Narra",
"Christopher McComb"
] |
cs.LG
|
[
"cs.LG",
"cs.CG"
] |
High-fidelity, data-driven models that can quickly simulate thermal behavior during additive manufacturing (AM) are crucial for improving the performance of AM technologies in multiple areas, such as part design, process planning, monitoring, and control. However, the complexities of part geometries make it challenging for current models to maintain high accuracy across a wide range of geometries.
Additionally, many models report a low mean square error (MSE) across the entire domain (part). However, in each time step, most areas of the domain do not experience significant changes in temperature, except for the heat-affected zones near recent depositions. Therefore, the MSE-based fidelity measurement of the models may be overestimated.
This paper presents a data-driven model that uses Fourier Neural Operator to capture the local temperature evolution during the additive manufacturing process. In addition, the authors propose to evaluate the model using the R^2 metric, which provides a relative measure of the model's performance compared to using mean temperature as a prediction.
The model was tested on numerical simulations based on the Discontinuous Galerkin Finite Element Method for the Direct Energy Deposition process, and the results demonstrate that the model achieves high fidelity as measured by R^2 and maintains generalizability to geometries that were not included in the training process.
§ INTRODUCTION
Additive manufacturing (AM) has become more than a niche laboratory technology and is becoming increasingly vital across various industries. This is largely due to its enormous potential in fabricating complex parts at a lower cost compared to traditional methods like machining or casting. Among the various AM technologies, Directed Energy Deposition (DED) processes have garnered considerable attention because they can build large-scale metal parts up to several meters in size at high deposition rates <cit.>. This paper therefore focuses on DED processes.
However, the adoption of AM is hindered by concerns regarding product quality and process efficiency <cit.>. For instance, Glerum et al. <cit.> identified inconsistencies in the AM process where the same process parameters can lead to varying material properties. Thus, establishing process-structure-property relationships for AM has become a critical research area in AM technologies <cit.>.
The temperature history resulting from the process parameters is a key determinant in several aspects, including melt pool characteristics <cit.>, formation of defects such as lack of fusion and hot cracking <cit.>, metal grain structure <cit.>, and residual stresses caused by high thermal gradients <cit.>. However, obtaining experimental data on temperature history is expensive and also challenging to capture at every point of a part. Therefore, numerical simulation is an attractive, less-expensive alternative for obtaining comprehensive temperature history data at scale.
While many high-fidelity, physics-based simulation models have been developed, such as the method based on Discontinuous Galerkin Finite Element Method (DGFEM) <cit.>, their time cost is still too high for many crucial applications. For example, iterative optimizations such as design for AM <cit.> and process parameter planning <cit.> require a massive number of thermal simulations. Additionally, real-time simulation is needed for in-situ process monitoring and control <cit.>.
Therefore, data-driven models have gained attention as a means to quickly simulate thermal behavior during AM. Mozaffar et al. <cit.> developed a data-driven machine learning (ML) model based on recurrent neural networks (RNN) to predict the thermal histories of arbitrary points in a part built by the DED process. For real-time temperature prediction, Paul et al. <cit.> proposed a framework based on extremely randomized trees (ERT), which is an ensemble of bagged decision trees that use temperatures of prior voxels and laser information as inputs to predict temperatures of subsequent voxels. Stathatos et al. <cit.> proposed an artificial neural network (ANN) that predicts in real-time the evolution of temperature and density for arbitrary long tracks. Roy et al. <cit.> observed that the AM process exhibits a high level of redundancy and periodicity and introduced a geometry representation that extracts features directly from the GCode for a ML model, such as local distances from heat sources and cooling surfaces. However, these models can only achieve good accuracy in predicting the thermal evolution of samples with geometries similar to those in the training dataset and may not perform well on more complex geometries unseen in the training process.
In order to improve the generalizability of ML models for thermal history prediction in complex geometries, Ness et al. <cit.> developed an ERT model that utilizes engineered features based on the underlying physics of the thermal process. Additionally, Mozaffar (2021) <cit.> proposed a geometry-agnostic data-driven model using Graph Neural Networks (GNNs) to capture spatiotemporal dependencies of thermal responses in AM processes. However, current ML models are typically mesh-dependent, which can hinder their adoption in applications with finer meshes than those used for training. For example, models trained on low-resolution meshes usually cannot be used to predict temperatures in the high-resolution meshes necessary for in-situ process control. Furthermore, although these models achieve low normalized mean squared error (MSE) over the built part, they may fail to accurately capture the dramatic temperature changes in the heat-affected zones (HAZ) of recent depositions. For instance, experiments in <cit.> have shown that the difference between the predicted temperature and ground truth in HAZ can be as large as 300^∘ C. Given the close relationship between HAZ temperature and melt-pool dimensions, defect formation, and microstructure formation, accurate HAZ temperature prediction is critical. However, commonly used metrics, such as MSE, may not accurately reflect a model's performance in predicting temperatures in these critical areas, as areas far from the heat source often have stable low temperatures without significant changes. Therefore, even if an ML model is inaccurate in the HAZ of recent depositions, it may still score well in MSE-based fidelity measurements.
This paper proposes a data-driven model that utilizes a Fourier Neural Operator to capture the local temperature evolution during additive manufacturing (AM). In addition to the current temperature, the model incorporates the heat source locations and local distances to the cooling surfaces to predict the temperature in the next time step. The contributions of this paper are summarized as follows:
* A mesh-independent ML framework is established for temperature prediction in the AM process.
* The ML model prioritizes capturing the temperature evolutions in the Heat Affected Zone (HAZ) near the recent deposition.
* An automatic pipeline is built to generate a dataset of geometric models using an autoregressive generative model with customized and existing tools, which are then meshed and have toolpaths generated using code developed by the authors.
A high-fidelity thermal simulation model based on Discontinuous Galerkin Finite Element Method is then applied to obtain temperature histories for use in ML training and testing. The physical coefficients and parameters used in the numerical method have been calibrated with experimental data, as demonstrated in <cit.>.
In addition, an R-squared (R^2) metric is proposed to measure the performance of ML models for temperature prediction in the AM process, as it provides a relative measure of the model's performance compared to using the mean temperature as a prediction. Results from numerical experiments demonstrate that the proposed model achieves high fidelity as measured by R^2 and maintains generalizability to geometries that were not included in the training process. The model and relevant dataset generation methods developed in this paper have the potential to be implemented in the creation of practical thermal simulation software for industrial applications.
Figure <ref> provides an overview of the proposed framework. The remainder of the paper is organized as follows. Section <ref> briefly introduces the concept of Fourier Neural Operator (FNO) and its application in approximating the solution of partial differential equations (PDEs). In Section <ref>, we detail the architecture of our ML model, the loss function and evaluation metrics, and provide a description of the heat-affected windows. Section <ref> explains the data generation and preprocessing process for training the ML model and outlines the experiment settings. We present and discuss the experimental results in Section <ref>, and conclude in Section <ref>.
§ THE BACKGROUND OF FOURIER NEURAL OPERATOR
A nonlinear operator is defined to be a mapping from a space of functions into another space of functions. Similar to the well-known universal approximation theorem that states neural networks can approximate any continuous function to arbitrary accuracy if no constraint is placed on the width and depth of the hidden layers <cit.>, there is another approximation theorem which states that neural networks can accurately approximate any nonlinear operator <cit.>. Such neural networks are called neural operators.
In the meantime, partial differential equations (PDEs) that describe physical phenomena can be viewed as nonlinear operators that map initial conditions (functions defined in Euclidean spaces) to solutions (functions in Euclidean space, with or without time). As a result, a line of research has emerged that utilizes neural operators to approximate the solutions of an entire family of PDEs <cit.>.
In the context of predicting temperatures during an AM process, simulation and experimental data may be available in different resolutions. Unlike methods based on Convolutional Neural Networks (CNNs) that approximate PDE solutions in discretized Euclidean spaces, which are dependent on the mesh, neural operators can learn solutions with super-resolution. Neural operators trained on a low-resolution mesh can therefore be used to evaluate on a high-resolution mesh. They can thus readily be used to overcome discrepancies between the resolution of simulation and experimental data.
Neural operators are therefore a suitable tool to model and predict the AM process.
The FNO <cit.> is a recently developed type of neural operator that utilizes the fast Fourier transform (FFT) to achieve nlog(n) time complexity, in contrast to the quadratic complexity of other neural operators, like the neural operator proposed in <cit.>. In addition, FNO also features a noise-filtering mechanism brought by spectral analysis. While a brief introduction is provided here, readers are referred to <cit.> for a more detailed review.
Let D be a bounded open subset of ℝ^d, where d is the dimension of the Euclidean space. Let 𝒜=𝒜(D; ℝ^d_a) and 𝒰=𝒰(D; ℝ^d_u) be separable Banach spaces of functions that take values in ℝ^d_a and ℝ^d_u, respectively. Typically, 𝒜 represents the space of input functions (such as initial conditions), while 𝒰 represents the space of output functions (i.e., solutions to PDEs).
Consider a non-linear operator G^†: 𝒜→𝒰 that maps input functions to output functions. Suppose we have a set of N input-output function pairs {a_j,u_j}_j=1^N observed from 𝒜 and 𝒰, respectively, where the set of a is selected as an independent and identically distributed sequence from the probability measure μ, and u = G^†(a). The goal is to approximate G^† by constructing a parametric non-linear operator G_θ: 𝒜→𝒰, where θ∈Θ denotes the set of parameters. To this end, we define a cost function C: 𝒰×𝒰→ℝ and seek to minimize the expected value of this cost function over the input function space, i.e.,
min_θ∈Θ𝔼_a ∼μ[C(G_θ(a),u)].
It is important to note that in practical applications, we often only have access to the point-wise evaluations of the functions a and u obtained from simulations or experiments. To simplify notation, in the following discussion, we will use a and u to refer to these numerical observations, where a ∈ℝ^n× d_a and u ∈ℝ^n× d_u, with n being the number of sample points used to discretize the domain D.
As a non-linear operator, G_θ is mesh-independent or super-resolution, which means that it can be used to evaluate functions at positions that are not in the sample points.
Directly parameterizing the non-linear operator G^† can be challenging. A potential solution is to borrow the idea behind constructing neural networks, which approximate any continuous function by breaking down the calculation into multiple layers and combining a linear transformation with a non-linear activation function in each layer. Similarly, a non-linear operator can be approximated using a series of iterative updates, where each update consists of a global linear operator and a local non-linear activation function.
The computational efficiency of neural operators is of great importance for their practical applications, as the complexities of global linear operators often lead to significant challenges in achieving efficient computation <cit.>. In recent years, the Fourier neural operator has emerged as a promising approach to address this issue. By implementing convolution as the global linear operator and leveraging the fast Fourier transform (FFT), this neural operator achieves nlog(n) time complexity.
Convolution between two functions g: D →ℝ^d_v and f: D →ℝ^d_v results in a new function g ∗ f : D →ℝ^d_v, which is defined as
g ∗ f = ∫_D g(x-y)f(y)dy.
This mathematical operation can be interpreted as a linear operator that transforms a function f by performing a global integral with another function g.
We can parameterize convolution by using a family of parametric kernel functions k_ϕ, where ϕ belongs to a given set of parameters Θ. With this parameterization, we can define a global linear operator as follows:
k_ϕ∗ f(x):= ∫_D k_ϕ(x-y)f(y)dy,
where f and k_ϕ are functions defined over a domain D. This operator transforms a function f using a global integral with the kernel function k_ϕ.
The FNO calculation process can be summarized as follows. First, the input a ∈𝒜 is lifted to a higher dimension using a local linear transformation P: ℝ^d_a→ℝ^d_v, such that v_0(x) = P(a(x)). Next, a series of iterative updates is applied, generating v_0 ↦ v_1 ... ↦ v_T, where each v_h takes value in ℝ^d_v. Finally, the output u(x) = Q(v_T(x)) is projected back by a local linear transformation Q: ℝ^d_v→ℝ^d_u.
The iterative update is defined as
v_t+1(x) := σ( W v_t(x) + k_ϕ∗ v_t(x) ), ∀ x ∈ D,
where W:ℝ^d_v→ℝ^d_v is a linear transformation, σ: ℝ→ℝ is a local non-linear activation function.
To enable efficient computation of the convolution operation in Equation <ref>, the Fourier transform is utilized. Specifically, the Fourier transform of a function f: D →ℝ^d_v is denoted as ℱ(f), and its inverse is denoted as ℱ^-1(f). By applying the convolution theorem, the Fourier transform of a convolution of two functions can be expressed as the component-wise product of their Fourier transforms. Thus, we have the following expression for the convolution operation in Fourier space:
k_ϕ∗ v_t(x) = ℱ^-1(ℱ(k_ϕ) ·ℱ(v_t))(x), ∀ x ∈ D,
where · denotes component-wise multiplication.
Since only finite point-wise evaluations of the function v_t are available, the modes of v_t in Fourier space are finite. In order to filter out noise in frequency, only low frequency modes are retained for the neural operator. Let ξ_max denote the number of the modes left after filtering.
Moreover, rather than constructing a family of parametric functions for k_ϕ, a more convenient approach is to directly parameterize k_ϕ in Fourier space. Consequently, the parameterized linear operator is given by Equation <ref>.
k_ϕ∗ v_t(x) = ℱ^-1(R ·ℱ(v_t))(x), ∀ x ∈ D,
where R is a complex-valued (ξ_max× d_v × d_v)-tensor whose components are the parameters of the linear operator.
When the domain D is discretized uniformly, the FFT algorithm can be applied to efficiently calculate Equation <ref> with a complexity of O(nlog n).
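For concreteness, a minimal 1-D PyTorch sketch of the Fourier layer and of the iterative update v_{t+1} = σ(W v_t + k_ϕ ∗ v_t) is given below. It is not the model used in this work (which operates on 3-D heat-affected windows); the initialization scale, the use of a 1×1 convolution for W, and ReLU as the activation σ are our choices.

import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    # One Fourier layer: FFT -> multiply the lowest xi_max modes by a learned complex tensor R -> inverse FFT.
    def __init__(self, channels, xi_max):
        super().__init__()
        self.xi_max = xi_max
        scale = 1.0 / (channels * channels)
        self.R = nn.Parameter(
            scale * torch.rand(xi_max, channels, channels, dtype=torch.cfloat))

    def forward(self, v):                      # v: (batch, channels, n)
        v_hat = torch.fft.rfft(v)              # (batch, channels, n//2 + 1)
        out_hat = torch.zeros_like(v_hat)
        # keep only the xi_max lowest-frequency modes (noise filtering)
        out_hat[..., :self.xi_max] = torch.einsum(
            "bci,icd->bdi", v_hat[..., :self.xi_max], self.R)
        return torch.fft.irfft(out_hat, n=v.shape[-1])

class FNOBlock(nn.Module):
    # Iterative update v_{t+1} = sigma(W v_t + k_phi * v_t).
    def __init__(self, channels, xi_max):
        super().__init__()
        self.spectral = SpectralConv1d(channels, xi_max)
        self.W = nn.Conv1d(channels, channels, kernel_size=1)  # local linear map

    def forward(self, v):
        return torch.relu(self.W(v) + self.spectral(v))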
§ METHOD
The architecture of the ML model is shown in Figure <ref>. Note that the ML model does not operate on the whole domain at once, but on the smaller regions, called heat-affected windows, introduced in Section <ref>.
§.§ Heat-Affected Windows
Thermal simulations of additive manufacturing processes involve the solution of transient heat transfer partial differential equations (PDEs). Often elements are activated sequentially according to a toolpath prescribed on a pre-defined mesh. For a model meshed with equally-sized elements, the time step Δ t between successive element activation is then determined by the element size and the tool's moving speed.
When a new element is activated, the PDEs are solved with the temperature prior to activation and boundary conditions, such as heat influx, convection, and fixed temperature, as input. The output is the temperature after Δ t. We have observed that over a short time period, the evolution of temperature is primarily confined in a small region. Thus, to predict the temperature at a specific position after Δ t, we only need the information of its local neighbor region, not the whole domain. In heat transfer analysis, the thermal diffusivity α_p characterizes a material's rate of heat transfer. For instance, for steel, α_p is approximately 12 mm^2/s. This means that for Δ t=0.1 s, the area of the neighborhood affecting the temperature of a given position is about 1.2 mm^2. Therefore, we define the heat-affected neighborhood's characteristic radius as
r_c = √(α_p Δ t).
We can choose a box region around a position whose size is about 10 times r_c to ensure that most of the necessary information is included in the box to predict the temperature for the position. These box regions are called heat-affected windows or windows in this paper.
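As a numerical illustration (the element size and tool travel speed below are assumed values, not parameters from our experiments):

import numpy as np

alpha_p = 12.0          # mm^2/s, thermal diffusivity of steel (from the text)
element_size = 1.0      # mm, assumed element edge length
speed = 10.0            # mm/s, assumed tool travel speed
dt = element_size / speed            # time between element activations: 0.1 s

r_c = np.sqrt(alpha_p * dt)          # characteristic radius, ~1.1 mm
window = 10 * r_c                    # heat-affected window size, ~11 mm
print(f"dt = {dt:.2f} s, r_c = {r_c:.2f} mm, window ~ {window:.0f} mm")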
In this study, the ML model is designed to operate solely on the heat-affected windows rather than the entire domain. This approach has two significant advantages. Firstly, the smaller input region substantially reduces the number of model parameters, so the model requires less training data. Secondly, this technique enhances the generalizability of the model for a wide variety of geometries, even with a limited training dataset. This is because two different geometries may share similar local features despite appearing vastly different. For example, Figure <ref> shows two different shapes, but their heat-affected windows around the boundary look similar, as both of their boundaries consist of plane surfaces.
§.§ Input Variables
In addition to the input temperature, the ML model is also provided with other relevant information related to the heat transfer process for each element.
Activation Indicator. Heat affected windows located near the edges of the part may include void space where no material is present. To handle this case, we introduce a variable ρ_act that indicates whether an element is activated. Here, a value of 1 means that there is material, while a value of 0 means that there is a void.
Heat Influx Conditions. The vector 𝐇 contains two pieces of information relevant to the heat influx condition for an element: the power of the input energy and the relative position from the center of the element to the heat influx position.
Boundary Impact Factors. The simulation considers two types of boundary conditions: convection-radiative and fixed-temperature conditions. The former is applied on the outer surface of the built part, while the latter is applied on the substrate bottom which the part is built on. The substrate bottom temperature is set to room temperature. To describe how each boundary condition affects local temperature evolution, boundary impact factors (BIF) 𝐁 are defined for each element. 𝐁 has two components corresponding to the two types of boundaries. Although there could be more complicated ways to define 𝐁, a simple distance-based approach is used in this work; namely, the distance between the element center and each type of boundary.
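A possible way to assemble these inputs for one heat-affected window is sketched below; the channel ordering and array shapes are our assumptions for illustration, not the exact layout of our implementation.

import numpy as np

def window_features(T, rho_act, heat_power, rel_pos, d_conv, d_fixed):
    # T          : (nx, ny, nz) current temperature
    # rho_act    : (nx, ny, nz) activation indicator (1 = material, 0 = void)
    # heat_power : (nx, ny, nz) input energy associated with each element
    # rel_pos    : (nx, ny, nz, 3) vector from element center to the heat influx position
    # d_conv     : (nx, ny, nz) distance to the convection-radiative boundary
    # d_fixed    : (nx, ny, nz) distance to the fixed-temperature boundary
    channels = [T, rho_act, heat_power,
                rel_pos[..., 0], rel_pos[..., 1], rel_pos[..., 2],
                d_conv, d_fixed]
    return np.stack(channels, axis=-1)   # (nx, ny, nz, 8) input to the neural operator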
§.§ Loss Function And Evaluation Metrics
We use the normalized L_2 error as the loss function for training our model, denoted as NL_2. Let u_pred be the predicted temperature over a window, and u be the ground truth temperature of the window. u_pred_i and u_i represent the predicted and ground truth temperature of an individual element, respectively. The window is uniformly discretized with n elements. NL_2 is defined as
NL_2 = ∑_i=1^n √((u_pred_i - u_i)^2)/|u_i|.
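A direct PyTorch sketch of this loss, implemented exactly as written in the equation above, is:

import torch

def nl2_loss(u_pred, u_true):
    # Element-wise absolute error normalized by |u_i|, summed over the window.
    return torch.sum(torch.abs(u_pred - u_true) / torch.abs(u_true))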
The mean squared error (MSE) is a commonly used metric for evaluating machine learning models. It is defined as the average of the squared differences between the predicted and true values, calculated as:
MSE = 1/n∑_i=1^n (u_pred_i - u_i)^2.
To assess the performance of ML models that make predictions on varying scales, the normalized root mean squared error (NRMSE) is often used. It is defined as:
NRMSE = √(1/n∑_i=1^n (u_pred_i - u_i/u_i )^2 ).
Here, the NRMSE takes into account the relative magnitude of the temperature values by normalizing the squared differences with respect to the ground truth temperature.
While MSE and NRMSE can reflect the overall accuracy of the ML model in many cases, they might overestimate the performance of ML models for temperature prediction of AM processes. This is because most of the domain does not experience significant temperature changes except in regions near recent deposition in a time step. Thus, a prediction that fails to capture temperature changes in these regions may still have a good MSE or NRMSE score. To address this issue, we propose to use the R^2 metric to evaluate the ML models, which measures the proportion of the variance in the ground truth that is explained by the model's predictions. It is defined as
R^2 = 1 - ∑_i=1^n (u_pred_i - u_i)^2/∑_i=1^n (u_mean - u_i)^2,
where u_mean is the average of u over the window.
R^2 takes values in (-∞,1]. A negative R^2 value indicates that the model's predictions have a higher mean squared error (MSE) than a simple baseline predictor that uses the mean temperature as the prediction. When R^2 is close to zero, it suggests that the model's predictions are similar to those of the baseline predictor. Conversely, as R^2 approaches 1, the accuracy of the model's predictions improves, indicating a better fit between the model and the observed data.
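The loss and evaluation metrics above can be written compactly as follows (a sketch of our own; the per-window reduction matches the definitions given here):

    import torch

    def nl2(u_pred, u):
        """Normalized L2 error used as the training loss."""
        return torch.norm(u_pred - u) / torch.norm(u)

    def mse(u_pred, u):
        return torch.mean((u_pred - u) ** 2)

    def nrmse(u_pred, u):
        return torch.sqrt(torch.mean(((u_pred - u) / u) ** 2))

    def r2(u_pred, u):
        """Coefficient of determination over one heat-affected window."""
        ss_res = torch.sum((u_pred - u) ** 2)
        ss_tot = torch.sum((u.mean() - u) ** 2)
        return 1.0 - ss_res / ss_tot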
§ DATASET GENERATION
We developed a dataset consisting of geometric models created by a generative ML model; a high-fidelity finite element simulation is then employed to obtain the temperature history of all geometric points.
§.§ Geometric Model Creation
SkexGen <cit.> is an autoregressive generative model based on transformers that encodes topological, geometric, and extrusion variations of CAD model construction sequences into disentangled codebooks <cit.>. With SkexGen, we can randomly generate geometric models with prescribed topological and extrusion features in various geometric details.
In practice, we can use this CAD generative model to augment a dataset in a limited amount of time by generating synthesized models that share specific similarities with the existing dataset.
Figure <ref> shows the ten models created by SkexGen. Each row is generated with the same extrusion code but with different topologies and geometries. The models in the first row all have carved features in their upper region, while those in the second row are either stacked structures or contain cylindrical holes.
While it is trivial to randomly generate hundreds of geometric models, the scale of the dataset used here is bottlenecked by the intensive computation required by the thermal simulation.
The dimensions of the geometric models are normalized to a similar scale before meshing. For all models, the largest dimension is set to 40 mm.
§.§ Thermal Simulation
Consider a domain Ω bounded by its boundary ∂Ω, of which ∂Ω_H represents the part at which heat is transferred to the surroundings with constant temperature T_∞, and ∂Ω_D the part at which the temperature is fixed at T_D. The temperature evolution within Ω is governed by the following set of PDEs:
ρ c_p Ṫ = ∇· (k_p ∇ T), ∀𝐱∈Ω
-𝐧· k_p ∇ T = h_c(T-T_∞), ∀𝐱∈∂Ω_H
T = T_D, ∀𝐱∈∂Ω_D.
Here, ρ, c_p and k_p are the temperature-dependent density, specific heat capacity and conductivity of the material, respectively. The vector 𝐧 is the unit outward normal of the boundary at coordinate 𝐱. In Equation <ref>, h_c is a temperature-dependent heat transfer coefficient that accounts for free convection with convection coefficient h_∞=15 W/(m^2K) and radiation with emissivity ϵ=0.35 as
h_c(T) = h_∞ + ϵσ_b( T^3+T^2T_∞ +T T_∞^2 +T_∞^3 ),
where σ_b is the Stefan-Boltzmann constant. The radiative part follows from factoring the radiative flux via T^4 - T_∞^4 = (T - T_∞)(T^3+T^2T_∞+T T_∞^2+T_∞^3), so that it can be expressed in the same convective form h(T)(T-T_∞).
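As a numerical sketch (our illustration; temperatures are assumed to be in kelvin, with the constants stated above):

    SIGMA_B = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def heat_transfer_coefficient(T, T_inf=298.15, h_inf=15.0, eps=0.35):
        """Combined convection-radiation coefficient h_c(T) in W/(m^2 K)."""
        h_rad = eps * SIGMA_B * (T**3 + T**2 * T_inf + T * T_inf**2 + T_inf**3)
        return h_inf + h_rad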
We utilized the thermal simulation algorithm developed in <cit.> to efficiently and accurately solve the partial differential equations described in Equation <ref> for the DED process. This algorithm uses the discontinuous Galerkin finite element method (DGFEM) to spatially discretize the problem and the explicit forward Euler timestepping scheme to advance the solution in time. The algorithm activates elements based on the predefined toolpath. Newly-deposited elements are initialised at elevated temperature, after which they are allowed to cool according to Equation <ref>. To ensure that the high process heat input is captured correctly, newly-deposited elements are assigned an enhanced heat capacity prior to their solidification.
Our simulations utilized S355 structural steel as the material, with material properties as given in <cit.>. The activation temperature of newly-added elements was set to 1750^∘ C and the enhanced specific heat capacity to 4537.9 J/(kgK). The temperature of the substrate’s bottom face is kept fixed at T_∞ =25^∘ C. On all other faces, convection and radiation to the surrounding air at T_∞ is modelled using Equation <ref>. We set the tool moving speed to 5 mm/s. All geometric models were discretized with a resolution of 20×20×20, with an element size of 2 mm.
§.§ Data Preprocess and Training Setting
Efficient software for preprocessing the thermal simulation data has been developed by the authors to generate hexahedral meshes from geometric models created by SkexGen. It can add a coarse mesh for the substrate. Additionally, a toolpath that specifies the locations of the deposition tool in a sequence of time steps can be constructed based on the process parameters. The code is publicly available on GitHub [https://github.com/Jiangce2017/hammer_chuizi.githttps://github.com/Jiangce2017/hammer_chuizi.git]. An example of the mesh and toolpath generated by the code is shown in Figure <ref>.
The simulation generates the temperatures of all activated elements at each time step. Let E = {e_0, e_1, ..., e_m} be the set of elements ordered by their activation sequence, where e_i is the element deposited at time step t_i. The temperature at time step t_i serves as the input to predict the temperature at time step t_i+1, which is calculated by the numerical simulation and serves as the ground truth for the prediction. To focus the ML model on the temperature evolution in the HAZ near the most recent deposition, heat-affected windows with dimensions 11× 11× 11 are constructed around the deposited elements. Specifically, windows around the elements e_i-9, e_i-8, ..., e_i are initially selected to train the ML model. Subsequently, more windows can be included to allow the ML model to predict the temperature of the entire domain. By doing so, we let the ML model prioritize the HAZ of the recent deposition.
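A minimal sketch of this pairing procedure, reusing the extract_window helper sketched earlier (the list-based bookkeeping is our own simplification):

    def make_training_pairs(temps, activation_order, half_size=5, n_recent=10):
        """Build (input, target) window pairs focused on the most recent depositions.

        temps            : list of 3D temperature fields, one per time step
        activation_order : list of (i, j, k) element indices ordered by deposition time
        For step t_i, the field at t_i is the input and the simulated field at
        t_{i+1} is the ground truth, cropped around e_{i-9}, ..., e_i.
        """
        pairs = []
        for i in range(len(temps) - 1):
            recent = activation_order[max(0, i - n_recent + 1): i + 1]
            for center in recent:
                x = extract_window(temps[i], center, half_size)
                y = extract_window(temps[i + 1], center, half_size)
                pairs.append((x, y))
        return pairs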
Table <ref> provides the number of windows for each geometric model. To maximize the use of our limited dataset, we performed a procedure similar to k-fold cross-validation to evaluate the generalizability of the ML model to unseen geometries during training. We conducted 10 rounds of training and validation, with each round using a different geometry for validation and the remaining nine for training and testing. The windows from the nine geometries were mixed and randomly divided into two sets: 90% for training and 10% for testing. We trained the model for 50 epochs using Adam as optimizer with a learning rate of 1× 10^-3 and weight decay of 1× 10^-4. After training, we used the ML model to predict the temperatures of the validation geometry.
§ RESULTS AND DISCUSSION
Table <ref> displays the training and test metrics after 50 training epochs of each cross-validation round. The MSE, NL_2, and R^2 values are the average quantities across all windows in the training, test, or validation dataset. The high degree of consistency between the training and testing metrics indicates that the ML model is capable of accurately capturing temperature evolution without overfitting. It should be noted that the ground truth and prediction temperatures used for evaluation are their original values without normalization, and the temperature unit is Celsius. The relatively large variance of MSE is likely due to the variance of temperature responses in the AM process across different geometries.
Figure <ref> shows the histories of these metrics during the training process. MSE and NL_2 decrease rapidly in all rounds, and R^2 quickly approaches 1, indicating that the optimizer converges successfully. MSE provides a straightforward way to describe the average squared error between the prediction and ground truth. For example, the test MSE of round No. 1 converges to about 100, indicating that the root-mean-square difference between prediction and ground truth is approximately 10^∘ C, which is an acceptable error as the highest temperature can reach 1500^∘ C in the simulation of the AM process.
NL_2 measures a comparative error that is divided by the value of the ground truth. This is why the cross-validation rounds approach similar values of NL_2 at the end. However, interpreting the ML model's performance in capturing the dramatic temperature evolution from NL_2 and MSE is challenging if most windows are far from the recent deposition and do not have significant temperature changes.
R^2 reveals the relative accuracy of the prediction compared with using the mean of ground truth as the prediction. In other words, it measures the proportion of the variance in the ground truth that the prediction explains. The fact that the test R^2 in all rounds is above 0.99 suggests that the model can capture the extreme local temperature variance.
Table <ref> lists the performance of the ML model on the validation geometric models, which are unseen in the training process.
The ML model shows good generalizability in 7 out of 10 geometries, but fails significantly in 3 of the 10. Figure <ref> visualizes the prediction results of the validation on geometric model 5. 10 windows are randomly selected for each row. At each row, the ground truth, the predicted temperature, the difference between the prediction and the temperature, and the error percent distribution over voxels of the window are shown. As we can see, the model can predict the temperature precisely for this validation geometric model, with errors ranging within 3% for most of the windows. The largest errors tend to appear near the recent deposition, with temperatures there being slightly overestimated by the ML model.
The ML model appears to fail completely at predicting the temperature for geometric models 6, 7, and 8, despite performing well in learning their local temperature evolution when included in the training process. To investigate how prediction errors are distributed over the windows, we analyzed the R^2 of the 10 windows with the worst predictions (lowest R^2) in each cross-validation round, as shown in Table <ref>. Among approximately 10 thousand windows, only a few poorly predicted windows can adversely affect the average prediction.
Figure <ref> depicts the predictions of 10 randomly selected windows from the cross-validation rounds on geometric model 6, which show a similar level of accuracy as those in Figure <ref>. Therefore, despite the three failed cross-validation rounds, most of the windows have reliable predictions. These observations align with the findings reported by <cit.>, which suggests that most points in the parts built by the AM process experience repetitive and similar temperature histories.
This also suggests that the three geometries possess unique local features that are not present in the other geometries used in training. Thus, the current dataset may be insufficient for covering the range of common geometric models used in practice. Additionally, the large errors in these cases indicate that the ML model may overfit to the training geometries if certain representative geometries are not included. To overcome these limitations, we suggest building a larger dataset that includes comprehensive geometric features of parts built by the AM process.
§ CONCLUSION
This paper presented a data-driven model that uses Fourier Neural Operator to capture the local temperature evolution during the additive manufacturing process. To prepare the training data, we employed an automatic pipeline that uses an autoregressive generative model, SkexGen, to randomly generate a diverse set of CAD models with variations in topological, geometric, and extrusion features. The toolpath and hexahedral mesh for finite element method (FEM) analysis were then generated using a code we developed. The resulting data was used to run high-fidelity, physics-based simulations using DGFEM. The simulations produced ground truth data which was then used to train the ML model.
Our experiments demonstrate that the proposed ML model can accurately capture local temperature changes, irrespective of the geometry. The R^2 metric reveals that the model can precisely predict the large variance of temperature distributions in heat-affected zones near recent depositions. However, our experiments also highlight certain limitations. Cross-validation experiments reveal that the model may fail on geometric models that are significantly different from those used in the training dataset, indicating overfitting to the training data. This may be due, in part, to the relatively small size of the current geometric model dataset. Future work will focus on creating a larger dataset to mitigate this issue. Additionally, our method currently uses only one type of toolpath and consistent tool moving speed. Inclusion of various toolpaths, power, and tool speeds is critical to improve model performance, and will be considered in future work. The architecture of the ML model would need to be modified if more external parameters are required as the dimensions of the input change. However, the total model does not need to be re-trained from scratch. With transfer learning methods <cit.>, the layers that connect with the input layer need to be replaced and be fine-tuned with other layers on the new data. In addition, to enhance the fidelity of the ML model and accelerate the learning process, the physics-informed neural networks <cit.> and model order reduction methods <cit.> could be included into our framework.
§ ACKNOWLEDGEMENTS
This material is based upon work supported by the National Science Foundation through Grant No. CMMI-1825535 and by Carnegie Mellon University’s Manufacturing Futures Institute through the MFI Postdoctoral Fellowship Program. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.
|
http://arxiv.org/abs/2307.03075v1
|
20230706154514
|
Categorified Path Calculus
|
[
"Simon Burton"
] |
quant-ph
|
[
"quant-ph",
"math.CT"
] |
Categorified Path Calculus
==========================
Path calculus, or
graphical linear algebra, is a string diagram calculus for
the category of matrices over a base ring.
It is the usual string diagram calculus for a symmetric monoidal
category, where the monoidal product is the direct sum of matrices.
We categorify this story to
develop a surface diagram calculus for
the bicategory of matrices over a base bimonoidal category.
This yields a surface diagram calculus for any bimonoidal category
by restricting to diagrams for 1× 1 matrices.
We show how additional structure on the base category,
such as biproducts, duals and the dagger, adds structure to the
resulting calculus.
Applied to categorical quantum mechanics this yields a
new graphical proof of the teleportation protocol.
§ INTRODUCTION: UNIFYING THE OLD AND NEW QUANTUM
What is an amplitude?
Feynman's account of quantum mechanics
epitomizes the particle physicist's conception of amplitudes:
“counting” the paths from a source to a detector (<cit.> 3.1).
The rules for such path counting are
(i) parallel paths add, and (ii) serial paths multiply:
2 + 3 =
-0.4
< g r a p h i c s >
= 5,
2 × 3 =
-0.4
< g r a p h i c s >
= 6.
This is a cartesian, or classical, viewpoint on amplitudes,
and would correspond to a classical dynamical
system with uncertainty such as the Galton board,
if it wasn't for the Born rule.
Path calculus, or
graphical linear algebra, neatly encapsulates this
correspondence between linear algebra and path counting
as a bimonoid in a monoidal category <cit.>.
Around the same time Feynman was giving his lectures,
John Bell was initiating the study of non-locality in quantum
physics, otherwise known as entanglement <cit.>.
This also has a categorical interpretation as
adjointness (duals) in a compact closed category <cit.>.
Perhaps this is why particle physicists (before Bell)
didn't notice entanglement: infinite dimensional
Hilbert spaces don't have adjoints (duals).
There is a fundamental incompatibility here between the
(bi-)cartesian monoidal structure of the old quantum,
and the multiplicative (tensor) monoidal structure of the new quantum.
In this work we resolve this conflict by categorification
or 2-linear algebra <cit.>.
We consider this work as
part of 2-categorical quantum mechanics initiated by
Vicary <cit.>.
In section <ref> we present a tour of the graphical
calculus as applied to categorified matrices.
This is meant to be as concrete as possible.
We show how additional structure on the base category,
such as biproducts <ref>, duals <ref> and the dagger <ref>,
is represented in the resulting calculus.
Applied to categorical quantum mechanics this yields a
new graphical proof of the teleportation protocol <ref>.
We also show how this graphical calculus relates to
similar calculi in the literature <ref>.
Finally, section <ref> is a recapitulation of
the main ideas, from a more abstract perspective.
§ A USER'S GUIDE TO CATEGORIFIED LINEAR ALGEBRA
§.§ Review of path calculus
As a perceptual warmup, we review path calculus, or graphical linear algebra,
using conventions that fit with the development below.
At an elementary level, the story is about the coincidence
between path-counting and matrices with natural number entries.
A matrix with m rows and n columns corresponds to
a path diagram with m outputs, and n inputs.
We draw diagrams algebraically, from right to left.
The basic generating diagrams and their matrices are: a single wire, [ 1 ]; one input splitting into two outputs, [ 1; 1 ]; two inputs merging into one output, [ 1 1 ]; two parallel wires, [ 1 0; 0 1 ]; and a crossing of two wires, [ 0 1; 1 0 ].
This last diagram we call a swap.
Horizontal composition of diagrams corresponds to matrix
multiplication:
nn_n_nn
[ 1; 1 ][ 1 1 ] = [ 1 1; 1 1 ]
n_nn_n
[ 1 1 ][ 1; 1 ] =[ 2 ]
We are drawing these diagrams skewed because we are saving the third dimension
for later. The coordinate system looks like this:
coords
Stacking diagrams into the page corresponds to direct sum of matrices:
nn nn_nan_nn
[ 1 ]⊕[ 1 ]=
[ 1 0; 0 1 ]
[ 1; 1 ]⊕[ 1 1 ]=
[ 1 0 0; 1 0 0; 0 1 1 ]
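As a concrete sanity check of this correspondence (our illustration, not part of the original text), the generators can be interpreted as natural-number matrices, with horizontal composition as matrix product and stacking as direct sum:

    import numpy as np

    def direct_sum(a, b):
        """Block-diagonal direct sum of two natural-number matrices."""
        out = np.zeros((a.shape[0] + b.shape[0], a.shape[1] + b.shape[1]), dtype=int)
        out[:a.shape[0], :a.shape[1]] = a
        out[a.shape[0]:, a.shape[1]:] = b
        return out

    wire  = np.array([[1]])           # a single wire
    split = np.array([[1], [1]])      # one input, two outputs
    merge = np.array([[1, 1]])        # two inputs, one output
    swap  = np.array([[0, 1], [1, 0]])

    # Horizontal composition = matrix product (diagrams are read right to left).
    assert np.array_equal(split @ merge, np.array([[1, 1], [1, 1]]))
    assert np.array_equal(merge @ split, np.array([[2]]))

    # Stacking into the page = direct sum.
    assert np.array_equal(direct_sum(wire, wire), np.array([[1, 0], [0, 1]]))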
Empty matrices come in different sizes, which we notate using subscripts:
_n n_
[ ]_0,1
[ ]_1,0
[ ]_0,0
The matrix with zero rows and zero columns has the empty diagram.
Using these building blocks, we have equations which we render using the vertical
dimension:
l-unit l-unit
r-unit r-unit
assoc assoc
comm comm
and the horizontal opposites:
l-counitl-counit
r-counitr-counit
coassoccoassoc
cocommcocomm
Finally the equations:
bimonoidbialgebra
comul-unitcomul-unit
counit-mulcounit-mul
counit-unitcounit-unit
The punchline of graphical linear algebra is that using this equational
diagrammatic theory we can recover the theory of matrices with natural number entries.
(Strictly speaking, we also need isotopy equations for the swaps.)
We can go a bit further with this correspondence.
Given any semi-ring, or rig, (R,+,×,0,1) we introduce the
diagrammatic generators:
hom
for each r∈ R.
These satisfy equations
unit-hom unit-hom
mul-hom mul-hom
counit-hom counit-hom
comul-hom comul-hom
add add
zero zero
mul mul
one one
as well as equations for interacting with the swaps.
As a first step in our categorification journey we notice that
the natural numbers, as a semi-ring or rig:
(, +, ×, 0, 1)
can be replaced, or categorified, by the category
of finite dimensional vector spaces over a field, .
This category has enough structure to mimic the
addition and multiplication of natural numbers:
(, ⊕, ⊗, O, I),
where O is a fixed zero dimensional vector space, and I is a fixed
one dimensional vector space.
The equational axioms above are replaced by specific isomorphisms,
which then satisfy further equations.
This leads to the theory of Kapranov-Voevodsky 2-vector spaces.
However, in the following section we will proceed in more generality,
replacing with an arbitrary bimonoidal category .
§.§ Categorified linear algebra
Given a semi-ring or rig, (R,+,×,0,1)
we can form the category of matrices (R)
over this rig, whose objects are natural numbers,
morphisms m← n are m× n matrices with entries in R
and composition is given by matrix product.
We now wish to replace the rig R with a category that has
enough structure to perform rig-like operations on the objects of .
A bimonoidal category is a category
equipped with an additive symmetric monoidal structure,
(, ⊕, O, α^⊕, λ^⊕, ρ^⊕, σ^⊕)
and a multiplicative monoidal structure,
(, , I, α^, λ^, ρ^)
such that distributes over ⊕
via natural isomorphisms:
δ^l_A,B,C : A(B⊕ C) → (A B)⊕(A C)
δ^r_A,B,C : (A⊕ B) C → (A C)⊕(B C)
λ_A : O A → O
ρ_A : A O → O
called respectively the left- and right-distributors, and
the left- and right-nullitors.
All these structures satisfy coherence equations <cit.>.
We denote _0 as the set of objects of , and _1 as
the set of morphisms of .
We now consider the bicategory of matrices over a bimonoidal category .
Important examples are, ={𝚝𝚛𝚞𝚎, 𝚏𝚊𝚕𝚜𝚎}
the poset of truth values considered as a bimonoidal category
(⊕ and are disjunction and conjunction), in which
case is the bicategory of finite sets and relations,
and =FdVec_𝕂 the bimonoidal category of finite dimensional vector spaces over
a field 𝕂. This case gives as the bicategory of
Kapranov-Voevodsky 2-vector spaces <cit.>.
In this section we dive into example calculations in , learning on-the-job.
See section <ref> for a more formal treatment, and
the reference <cit.> chapter 8, for an exquisitely detailed
definition of .
As a bicategory, has objects the natural numbers ={0,1,2...},
morphisms m← n are m× n matrices of objects of ,
and 2-morphisms
m n
are m× n matrices of morphisms of .
We call these the 0-cells, 1-cells and 2-cells
respectively.
We render n× m matrices in with m incoming
surfaces and n outgoing surfaces. Here are some 1-cells built
using the object I∈_0:
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
[I]
[I I]
[ I; I ]
Where the anonymous grey surface is the 0-cell 1∈.
These surface diagrams flow from right-to-left,
which matches the usual algebraic notation.
Horizontal composition is categorified matrix multiplication:
instead of using +,× of a rig R, we use
the bimonoidal structure ⊕, of .
Here we show the horizontal composition of some 1-cells:
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
[I I] [ I; I ] = [I I ⊕ I I]
≅ [I⊕ I]
[ I; I ] [I I] =
[ I I I I; I I I I ]≅[ I I; I I ]
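To make the arithmetic of these isomorphisms tangible (a decategorified sketch of our own, not part of the original text): when the base category is FdVec, objects are determined up to isomorphism by their dimensions, so the horizontal composition (AB)_ik = ⊕_j A_ij B_jk collapses to ordinary matrix multiplication of dimension matrices, and the isomorphisms above become equalities of numbers:

    import numpy as np

    def hcompose(A, B):
        """Horizontal composition (AB)_ik = direct sum over j of A_ij tensor B_jk,
        tracked only up to isomorphism by recording dimensions: direct sum -> +, tensor -> *."""
        return A @ B

    I_obj = np.array([[1]])        # the unit object I as a 1x1 dimension matrix
    row   = np.array([[1, 1]])     # [I I] : 1 <- 2
    col   = np.array([[1], [1]])   # [I; I] : 2 <- 1

    assert np.array_equal(hcompose(row, col), np.array([[2]]))             # [I ⊕ I]
    assert np.array_equal(hcompose(col, row), np.array([[1, 1], [1, 1]]))  # [I I; I I]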
We render the direct sum of matrices
by layering. Given objects A, B ∈_0,
-0.4
< g r a p h i c s >
[A] ⊞ [B] := [ A O; O B ]
The 2-cells of are rendered vertically in the upwards direction.
We horizontally compose 2-cells in the same way as 1-cells:
using the monoidal structure ,⊕ of .
For example, given morphisms f:A→ B and g:C→ D in
, we can build 1× 1 matrix 2-cells [f]:[A]→[B]
and [g]:[C]→[D], forming the horizontal composition and direct sum,
[f][g] =
-0.4
< g r a p h i c s >
[f]⊞[g] =
-0.4
< g r a p h i c s >
On the left we see that the grey surface supports the usual
string diagram calculus for the monoidal category
(, , I, α^, λ^, ρ^);
all we do is forget the distinction between a 1× 1
matrix 1-cell or 2-cell and its unique entry.
The 0× 0 matrix [ ]_0,0 has no entries and corresponds
to the empty surface diagram.
This is a strict unit for the direct sum:
M⊞ [ ]_0,0 = [ ]_0,0⊞ M = M.
The 1× 0 matrix [ ]_1,0 also has no entries, but
participates in the direct sum by inserting
a zero row, for example [A]⊞ [ ]_1,0 = [ A; O ].
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
≅-0.4
< g r a p h i c s >
[ ]_1,0
[I I] [ I; O ] = [I I ⊕ I O] ≅ [I]
We are being strict about the distinction between
equality on-the-nose “=” and isomorphism “≅”.
These isomorphisms are 2-cells, and are built by composing morphisms
in that come from the bimonoidal structure of .
In fact, there will be an isomorphism 2-cell corresponding
to each of the equations in the presentation
of the path calculus above <ref>.
The isomorphism 2-cells for (r-unit) and (mul-hom) are
rendered as
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
respectively.
These are examples of coning:
forming a higher dimensional cone over two given string diagrams.
Being isomorphisms, these 2-cells obey equations.
For (r-unit) we have
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
The inverse isomorphism is rendered as the
vertically opposite diagram.
The isomorphism equations for (mul-hom)
we call
pulling a string [A] onto a pair of surfaces:
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
And this isomorphism is 2-natural:
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
Similar pulling equations hold for the horizontal opposite.
The horizontal composition of 1-cells given by
-0.4
< g r a p h i c s >
[I I] [ A O; O B ][ I; I ]
is isomorphic to the 1-cell [A⊕ B].
Once again, we record this isomorphism as a 2-cell, which
is the categorified path calculus rule (add):
-0.4
< g r a p h i c s >
Notice we have recovered the “internal sum” ⊕ of by performing
“external sum” ⊞ and horizontal compositions in .
This is the key idea behind this calculus: we have enlarged the domain of
definition to , a form of categorification, but then looking down from
above, we see certain expressions evaluate to itself.
These are the microcosms discussed further in Section <ref>.
§.§ The symmetric monoidal structure
The direct sum 2-functor ⊞:× is a biproduct, and makes
into a symmetric monoidal bicategory (section <ref>).
The symmetry
swaps surfaces (0-cells) around using permutation matrices.
We render this using a dotted line:
-0.4
< g r a p h i c s >
This symmetry is natural on 1-cells and 2-cells.
The naturality on 1-cells is exhibited by 2-cells:
-0.4
< g r a p h i c s >
and 2-naturality is the following
equation between 2-cells:
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
Corresponding to the path calculus rule (comm)
we have isomorphism 2-cell
-0.4
< g r a p h i c s >
and the horizontal opposite for (cocomm).
These are built from the unitors and nullitors of .
§.§ The horizontal associator
So far we haven't
used the additive symmetry σ^⊕, or
the left or right distributors δ^l and δ^r of .
These show up when we need associativity of the horizontal
composition (Definition <ref>).
Consider the following diagram,
-0.4
< g r a p h i c s >
We have many ways of horizontally composing this, and these
are related by horizontal associator isomorphisms,
built using the bimonoidal structure of .
For example,
we use the left-distributor to exhibit the associator isomorphism:
[A]([B C] [ I; I ])
≅( [A][B C] )[ I; I ],
[A(B⊕ C)] [A B ⊕ A C].
To see where we need the additive symmetry,
consider the horizontal composition of 1-cells:
-0.4
< g r a p h i c s >
[ I I ](
[ O I; I O ][ A O; O B ])
[ I; I ]≅[ I I ][ O B; A O ][ I; I ].
This expression now has two ways to be evaluated,
related by the additive symmetry of :
(
[ I I ][ O B; A O ])
[ I; I ]≅[ A B ][ I; I ]≅
[A⊕ B],
[ I I ](
[ O B; A O ][ I; I ])
≅[ I I ][ B; A ]≅
[B⊕ A].
Note this isomorphism [A⊕ B]≅[B⊕ A]
is an isomorphism 2-cell of .
§.§ Additive biproducts in the base category
When the additive structure in the base category
is a biproduct, we acquire a matrix calculus for morphisms of .
We use round brackets for this, such as the morphism
[ 1 1 ] : I⊕ I → I.
Promoting this to a 1× 1 matrix
in our surface diagram calculus yields the cap and cup:
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
Strictly speaking, we should be writing [I I⊕ I I]
instead of just [I⊕ I].
However, for the sake of clarity we
make this notational simplification.
Using this cap and cup we can define a trace
which counts the number of layers, such as
Tr(
-0.4
< g r a p h i c s >
) :=
-0.4
< g r a p h i c s >
=
[ [ 1+1 ] ].
We also have another cap and cup
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
that uses the zero maps 0_O,I:I O and 0_I,O:O I in .
All these caps and cups exhibit
-0.4
< g r a p h i c s >
, -0.4
< g r a p h i c s >
as ambidextrous adjoints:
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
, -0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
,
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
, -0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
.
and we therefore have a Frobenius algebra structure on I⊕ I (See <cit.>).
It is straightforward to verify this is also a special symmetric Frobenius
algebra, or a classical bit <cit.>.
We also have morphisms such as
[ 1_A 1_A ] : A⊕ A → A
in . Once again,
promoting this to a 1× 1 matrix
in our surface diagram calculus yields the 2-cells:
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
Given morphisms f,g : A→ B, we can build the expression
[(f+g)] =
-0.4
< g r a p h i c s >
§.§ Duals in the base category
When has additive biproducts and duals (see <cit.> 3.3) then
acquires all adjoints, given by the dual transpose of matrices.
For example, given objects A,B,C,D∈_0
with duals A^*,B^*,C^*,D^*∈_0, exhibited by the counits and units
[ ϵ_A:A A^*→ I, η_A:I→ A^* A,; ϵ_B:B B^*→ I, η_B:I→ B^* B,; ϵ_C:C C^*→ I, η_C:I→ C^* C,; ϵ_D:D D^*→ I, η_D:I→ D^* D,; ]
we have the adjunction
[ A B; C D ]⊣[ A^* C^*; B^* D^* ]
in , with counit given by
-0.4
< g r a p h i c s >
and unit given by
-0.4
< g r a p h i c s >
The snake equations are rendered as
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
, -0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
§.§ Traces
When has multiplicative symmetry, additive biproducts, and duals
then we can build traces in .
Recall that the trace of an object A∈
is the composite
Tr(A) :=
I A^* A
A A^*
I
which is rendered as
Tr(
-0.4
< g r a p h i c s >
) :=
-0.4
< g r a p h i c s >
.
We could try to extend this diagram to arbitrary m× n
matrices over , but we immediately run into a problem
with the symmetry:
matrix products do not commute in general.
Already the products
[A B] [ C; D ] = [A C⊕ B D], [ C; D ] [A B] =
[ C A C B; D A D B ]
live on a different number of layers so we cannot
hope to swap them using a 2-cell isomorphism. [
Another solution is to use a different graphical calculus <cit.>]
The trick here is that
something like this does work “under the cap”.
To demonstrate this we prove the identity
Tr(A⊕ B) = Tr(A)+Tr(B):
(
-0.4
< g r a p h i c s >
)
=
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
=(
-0.4
< g r a p h i c s >
) + (
-0.4
< g r a p h i c s >
).
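In the same decategorified dimension-matrix model used in the sketches above (our illustration only), the trace reduces to summing the diagonal dimensions, and the identity just proved becomes additivity of that sum:

    import numpy as np

    def categorified_trace(A):
        """Trace of a square dimension matrix: the dimension of the direct sum of the diagonal entries."""
        return int(np.trace(A))

    A = np.array([[3]])              # a 1x1 matrix [A] with dim A = 3
    B = np.array([[5]])              # [B] with dim B = 5
    AB = direct_sum(A, B)            # [A] ⊞ [B], reusing the helper sketched earlier
    assert categorified_trace(AB) == categorified_trace(A) + categorified_trace(B)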
§.§ 2-categorical quantum mechanics
In this section we let be compact closed with biproducts
and a compatible dagger structure <cit.>.
In this case
inherits a dagger structure (section <ref>).
This dagger is the identity on 0-cells and 1-cells,
contravariant on 2-cells, and squares to the identity.
For example,
given f:A→ B and g:B→ D in :
(
-0.4
< g r a p h i c s >
)^†
:=
-0.4
< g r a p h i c s >
.
A unitary 2-cell μ in
is an isomorphism whose inverse is μ^†.
Following <cit.>,
we define the qubit measurement, qubit preparation and
projective measurement
on A∈_0, as unitary 2-cells
of the form:
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
qubit measurement qubit preparation projective measurement
A similar definition holds for n-dimensional systems, here we have
rendered the case n=2.
It is a straightforward exercise to
show graphically that if a system A has an n-dimensional
measurement then A itself is n-dimensional in the sense of Tr(A)=n.
The pulling equations are fundamental to 2-categorical quantum
mechanics. We see graphically how a
quantum system bifurcates onto classical surfaces:
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
We use these classical surfaces to define controlled operations
as follows.
Given unitaries U_1,U_2:A→ A in the controlled-U
operation is the 2-cell given by
-0.4
< g r a p h i c s >
A straightforward calculation using the pulling equations
shows that this is unitary.
A similar definition holds for higher arity control operations:
an n-ary control operation will have the A system pulled
onto n surfaces where the unitaries U_1,...,U_n can act.
Note that these unitary 2-cells are all primitive 2-cells in the graphical
calculus of <cit.>;
here we are seeing more of the internal structure involved in these constructions.
Recall that a teleportation protocol
on a system A∈_0
is a measurement μ on A A^* and a control
operation γ on A, such that:
-0.4
< g r a p h i c s >
15pt=1/√(n) -0.4
< g r a p h i c s >
Here we have labelled the blue region n to denote n copies of
the grey surface: n = ⊞_1^n 1,
and 1/√(n) denotes a morphism I→ I in .
A unitary error basis for a system A∈_0
is a sequence of unitaries U_1:A→ A,...,U_n:A→ A in _1
such that the 2-cell
μ =
1/√(n)-0.4
< g r a p h i c s >
is unitary.
(For clarity, we only show two surfaces, and also we “slide”
U_i onto the counit as U_i:A A^*→ I.)
This means that μ is
a measurement on A A^*.
If we further define γ as the controlled-(U_1^†,...,U_n^†) operation,
then we have the following teleportation protocol:
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
Our graphical calculus is fine-grained enough that
we can use it to construct unitaries themselves.
For example, we define the qubit Pauli X operation as the 2-cell
-0.4
< g r a p h i c s >
= -0.4
< g r a p h i c s >
.
This definition contains a crucial use
of the horizontal associator as discussed in <ref>.
§.§ Relation to other graphical calculi
We examine the sheet diagrams from <cit.>.
Given objects A, B, C, D and morphisms
f : A⊕ A B → C,
1_C : C→ C,
g : B D → A⊕ D
from a bimonoidal category ,
the morphism f 1_C g: is rendered:
< g r a p h i c s >
Using our surface diagram calculus the same morphism
f 1_C g is rendered as,
-0.4
< g r a p h i c s >
All the strings here are labelled with 1× 1 matrices
so we neglect the bracket notation, such as [A], etc.
Up to reversing the front-to-back ordering of layers,
the sheet diagram is seen to correspond to a normal form for
the more general surface diagrams. The general procedure
is to expand the bubbles, or pull strings onto the bubbles.
For example, we pull the string for the
domain of 1_C onto the domain of f:
-0.4
< g r a p h i c s >
and similarly for the B and D strings in the domain of g.
Then we pull the codomain of f and 1_C onto the codomain of g.
Another graphical calculus for bimonoidal categories
is given by tape diagrams <cit.>.
Also, there is the ZXW calculus which is a two dimensional string diagram
calculus that combines a path calculus (for direct sums) and the
monoidal tensor product structure of vector spaces <cit.>.
§.§ Where are we?
String diagrams for monoidal categories
are well understood <cit.>, and these readily
generalize (via folklore <cit.>) to string
diagrams for bicategories.
Our graphical calculus fits into the
framework of the surface diagram calculus for symmetric monoidal
bicategories, see <cit.> and discussion in <cit.> 4.1.
What we have is an additive monoidal bicategory
()^⊞, where the monoidal product ⊞
is the direct sum of matrices.
There is also a multiplicative monoidal bicategory
()^⊠ which uses a categorified Kronecker product ⊠ of matrices,
see <cit.> 8.12.
The surface diagram calculus for ()^⊠ is used in the context of
2-Hilbert spaces, see for example <cit.> 8.2, and <cit.>.
Putting these together we expect to have a
bimonoidal bicategory ^⊞,⊠.
We began this work with the question: what is an amplitude?
This kind of what is question
– what is a group, what is a space, etc. –
is answered by functorially representing into a semantic category.
For us, the semantic category is a category of modules;
we have been secretly dealing with the free bimodule
bicategory over the category , which directly categorifies
linear algebra over the field of complex numbers (amplitudes).
By taking microcosms we recover as the Hom category (1,1)
of the free bimodule bicategory .
§ THE BICATEGORY OF MATRICES OVER A BIMONOIDAL CATEGORY
In this section we start over from the beginning, with a more
rigorous abstract exposition. The notation is somewhat different,
although consistent with the above.
To be consistent with matrix notation, and
the horizontal direction of the graphical calculus,
the Hom categories are defined right-to-left:
(m,n) consists of 1-cells m← n.
(<cit.> 8.1)
Given a category
and natural numbers m,n=0,1,2..., we define the
matrix category _m,n():
Objects are m× n matrices A = [A_ij]
with entries A_ij∈_0.
We render such an object as
-0.4
< g r a p h i c s >
Morphisms f:A→ B
are matrices [f_ij:A_ij→ B_ij] of
morphisms of .
These are rendered as
-0.4
< g r a p h i c s >
Composition of morphisms f:A→ B and g:B→ C is
defined by componentwise composition in ,
(g∘ f)_ij := g_ij∘ f_ij.
This composition is rendered vertically as
-0.4
< g r a p h i c s >
The identity morphisms, 1_A:A→ A are defined as
the componentwise identities from ,
(1_A)_ij := 1_A_ij.
(<cit.> 8)
Given a bimonoidal category ,
we define the bicategory ((),α,λ,ρ)
as follows:
The objects, or 0-cells, are natural numbers n=0,1,2...
These are rendered as shaded regions:
-0.4
< g r a p h i c s >
For objects m and n of (),
the hom categories are
defined as (m,n) := _m,n().
Horizontal composition is a functor
(l,m) ×(m,n) →(l,n),
given by matrix multiplication.
For A∈(l,m), B∈(m,n),
this is defined using the bimonoidal structure of :
(AB)_ik := ⊕_j A_ij B_jk.
We render this horizontal composite as
-0.4
< g r a p h i c s >
Given object m, the unit 1-cell 1_m ∈(m,m) is
(1_m)_ij =
1^⊗ if i=j,
0^⊕ otherwise.
This is rendered as:
-0.4
< g r a p h i c s >
For objects m,n, the left unitor λ_m,n has component
at 1-cells A∈(m,n) the 2-cell λ_m,n(A): 1_m A→ A.
This is built by composing the left nullitors,
additive unitors and left multiplicative unitor of .
Similarly for the right unitor ρ_m,n, which is
built by composing the right nullitors,
additive unitors and right multiplicative unitors of .
For objects l,m,n,o, the associator α_l,m,n,o
has components at A∈(l,m), B∈(m,n), C∈(n,o)
given by the 2-cell α_l,m,n,o(A,B,C):(AB)C→ A(BC).
The (i,p) entry of this matrix is a morphism in :
⊕_k (⊕_j A_ij B_jk) C_kp→⊕_j A_ij(⊕_k B_jk C_kp)
built by composing the left and right distributors,
additive associativity, additive symmetry, and multiplicative associators of .
Given a bimonoidal category ,
we define the matrix 0_m,n∈_m,n()
as the matrix [A_ij] with A_ij=O,
and we define the matrix 1_m∈_m,m()
as the matrix [A_ij] with A_ii=I, and A_ij=O for i j.
§.§ Biproducts in
We define a 2-functor
⊞: ()×() →()
as follows:
On 0-cells m,n we have m ⊞ n := m+n.
On 1-cells A:m← n and B:m'← n',
the block-diagonal matrix
A⊞ B := [ A 0; 0 B ].
On 2-cells f:A→ B and g:C→ D,
the block-diagonal matrix
f⊞ g := [ f 0; 0 g ].
Given a bimonoidal category ,
the bicategory () has all finite biproducts
and a zero object.
We claim that 0 is a zero object.
To see that 0 is an initial object,
we find the category (n,0)=_n,0() is
terminal on-the-nose. It has one
object which is the unique n× 0 matrix [ ]_n,0
and one identity morphism, also an empty matrix:
-0.4
< g r a p h i c s >
The horizontally opposite argument shows that 0 is also a terminal object.
Given objects m,n in , we show that
m⊞ n is the product of m and n.
This means we must show a natural equivalence of
categories (m⊞ n, l) ≅(m,l)×(n,l)
for all objects l.
But this follows immediately from the definition of the categories
by concatenation of matrices,
_m+n,l() ≅_m,l()×_n,l().
The horizontally opposite argument shows that m⊞ n
is the coproduct of m and n, and so m⊞ n is a biproduct.
By iterating this we obtain all finite biproducts.
§.§ Symmetric monoidal structure in
Given a bimonoidal category , then
^⊞ := ((), ⊞, )
is a symmetric monoidal bicategory.
By the previous Theorem <ref>,
is a bicategory with binary product ⊞
and terminal object 0. The result follows
by <cit.> Theorem 2.15.
The monoidal product is strict. On 0-cells l,m,n of ()
(l⊞ m)⊞ n = l⊞ (m⊞ n),
0 ⊞ l = l,
l⊞ 0 = l.
And similarly for the 1- and 2-cells of ().
The monoidal product of 0,1,2-cells is rendered by layering cells:
-0.4
< g r a p h i c s >
⊞-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
⊞-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
-0.4
< g r a p h i c s >
⊞-0.4
< g r a p h i c s >
=
-0.4
< g r a p h i c s >
The components of the symmetry
2-natural isomorphism, σ_m,n^⊞: n⊞ m← m⊞ n
are given by the (n+m)× (m+n) block matrix:
σ_m,n^⊞ := [ 0_n,m 1_n; 1_m 0_m,n ].
and rendered as the 1-cell:
-0.4
< g r a p h i c s >
The syllepsis 2-cell
ν_m,n^⊞:
σ_n⊞ m^⊞σ_m⊞ n^⊞→ 1_m⊞ n
is defined by composing unitors and nullitors of .
We render this as
-0.4
< g r a p h i c s >
.
§.§ The category of scalars
An important idea here is given by taking microcosms:
the 1-categorical level is found in toto within the
2-categorical structure, rather than as a truncation (quotient) thereof.
These microcosms will be Homs of the 2-category.
The terminology is in relation to the microcosm principle
<cit.>.
Given a bimonoidal category , the
hom category (1,1) in () has a bimonoidal structure
such that (1,1) is bimonoidally equivalent to .
We outline this as follows:
The category
(1,1) = _1,1()
is isomorphic to after removing the matrix brackets.
The horizontal composition
(1,1)×(1,1)→(1,1)
gives
(1,1) a monoidal structure, which is
identical to the monoidal structure
(⊗, I, α^⊗, λ^⊗, ρ^⊗) of .
We define the additive monoidal structure
⊕:(1,1)×(1,1)→(1,1)
using the 1-cells given by the diagonal
[ I; I ]
and codiagonal [ I I ].
Similarly, the symmetry σ^⊞ gives a
symmetry σ^⊕ by using the diagonal and codiagonal 1-cells.
§.§ The dagger
(<cit.>)
A dagger category (, †) is a
category together with a
functor †:→^op
that is involutive and identity on objects:
f^††=f and A^†=A for morphisms f∈_1
and objects A∈_0.
In a dagger category (,†), a morphism f∈_1
is unitary when f^-1 = f^†.
A dagger monoidal category is
a monoidal category
(, , I, α^, λ^, ρ^)
and a dagger category (,†),
such that (f g)^† = f^† g^†
for all morphisms f,g∈_1, and
the natural isomorphisms
α^, λ^, ρ^
have unitary components.
Similarly, a dagger symmetric monoidal category,
has symmetry σ^ with unitary components.
A dagger bimonoidal category is
a bimonoidal category whose additive symmetric monoidal structure
is dagger symmetric monoidal
and whose multiplicative monoidal structure is dagger monoidal.
A dagger bicategory (,α,λ,ρ,†)
is a bicategory (,α,λ,ρ) such that:
The categories (m,n) are dagger categories:
for objects m,n∈ we have a
functor
†_m,n:(m,n)→(m,n)^op
that is the identity on objects
(1-cells of ) and
†_m,n^op†_m,n is the identity functor.
The natural isomorphisms α,λ,ρ have
unitary components:
α_l,m,n,o^-1 = (α_l,m,n,o)^†_l,o, λ_l,m^-1 = (λ_l,m)^†_l,m, ρ_l,m^-1 = (ρ_l,m)^†_l,m,
for objects l,m,n,o of .
Given a dagger bimonoidal category ,
((), α,λ,ρ,†)
is a dagger bicategory, where we define the functors
†_m,n:_m,n() →_m,n()^op
by applying †:→^op componentwise.
Componentwise application of †:→^op
is easily seen to give a dagger structure †_m,n on
the categories (m,n).
From Definition <ref>,
we see the associator and left and right unitor are built by
composing the bimonoidal structure morphisms of .
These are all unitary and so the
associator and left and right unitor of the bicategory (C)
will have unitary components.
A dagger symmetric monoidal bicategory
(, ⊞, , σ^⊞, ν^⊞, †)
is a dagger bicategory (,α,λ,ρ,†)
and a symmetric monoidal bicategory
(, ⊞,, σ^⊞, ν^⊞)
such that
(f⊞ g)^† = f^†⊞ g^†, for 2-cells f,g∈_2
and the components of the syllepsis ν^⊞ are unitary.
Given a dagger bimonoidal category ,
((), ⊞, , σ^⊞, ν^⊞, †)
is a dagger symmetric monoidal bicategory.
The dagger †_m,n acts componentwise on _m,n()
and this commutes with ⊞.
The components of ν^⊞ are compositions of unitary morphisms
of .
Acknowledgements.
I do wish to acknowledge some very useful discussions
with colleagues and friends; I believe it is appropriate
to do this in final versions of papers.
|
http://arxiv.org/abs/2307.00347v1
|
20230701135314
|
Spatial-Temporal Enhanced Transformer Towards Multi-Frame 3D Object Detection
|
[
"Yifan Zhang",
"Zhiyu Zhu",
"Junhui Hou"
] |
cs.CV
|
[
"cs.CV"
] |
The Detection Transformer (DETR) has revolutionized the design of CNN-based object detection systems, showcasing impressive performance. However, its potential in the domain of multi-frame 3D object detection remains largely unexplored. In this paper, we present STEMD, a novel end-to-end framework for multi-frame 3D object detection based on the DETR-like paradigm. STEMD treats multi-frame 3D object detection as a sequence-to-sequence task and effectively captures spatial-temporal dependencies at both the feature and query levels.
Specifically, to model the inter-object spatial interaction and complex temporal dependencies, we introduce the spatial-temporal graph attention network, which represents queries as nodes in a graph and enables effective modeling of object interactions within a social context.
To address the problem of hard cases missing from the proposals output by the encoder in the current frame, we incorporate the output of the previous frame to initialize the query input of the decoder.
Moreover,
to mitigate the issue of redundant detection results, where the model generates numerous overlapping boxes from similar queries,
we consider an IoU regularization term in the loss function,
which can distinguish between queries matched with the ground-truth box and queries that are similar but unmatched during the refinement process, leading to reduced redundancy and more accurate detections.
Through extensive experiments, we demonstrate the effectiveness of our approach in handling challenging scenarios, while incurring only a minor additional computational overhead. Our framework will potentially bring insights to this field.
The code will be available at <https://github.com/Eaphan/STEMD>.
Multi-Frame 3D Object Detection, Transformer, Graph Attention Network, Point Cloud, Autonomous Driving.
Spatial-Temporal Enhanced Transformer Towards Multi-Frame 3D Object Detection
Yifan Zhang, Zhiyu Zhu,
Junhui Hou
Y. Zhang, Z. Zhu, and J. Hou are with the Department of Computer Science, City University of Hong Kong, Hong Kong. E-mail: [email protected]; [email protected]; [email protected];
August 1, 2023
====================================================================================================================================================================================================================================================================
§ INTRODUCTION
Three-dimensional (3D) object detection is one of the fundamental tasks in the computer vision community that aims to identify and localize the oriented 3D bounding boxes of objects in specific classes.
It plays a critical role in broad applications, including autonomous driving, object manipulation, and augmented reality.
Recent years have witnessed the emergence of a large number of deep learning-based single-frame 3D detectors<cit.> with the advent of large-scale datasets<cit.>.
Nonetheless, given the intricacies of traffic environments, including long distances and inter-object occlusion, the object information encapsulated within point clouds may be inevitably subject to distortions of potential sparsity or incompleteness.
Consequently, these aspects typically engender a sub-optimal performance of the single-frame 3D detectors <cit.>.
As the point cloud sequence intrinsically provides multiple views of objects, it implies promising approaches to extract vital spatiotemporal information for facilitating more accurate detection, especially for objects that pose significant detection challenges.
By incorporating complementary information from other frames, a multi-frame 3D object detector exhibits improved performance compared to a single-frame 3D object detector (see Fig.<ref>).
The existing works in multi-frame 3D object detection have explored some feasible solutions.
A straightforward one is concatenating the observed points from multiple frames and using an additional dimension to indicate the timestamp <cit.>. However, this method lacks explicit modeling of cross-frame relations and is less effective for fast-moving objects given multi-frame point cloud input <cit.>.
Some previous works naturally apply the long short-term memory (LSTM) network or gated recurrent unit (GRU) to voxel-level or BEV-level features for temporal modeling<cit.>.
3D-MAN <cit.> stores the features of box proposals in a memory bank and performs attention across proposal features from multiple perspectives.
Recently, some high-performance methods <cit.> adopt a two-phase framework for multi-frame 3D object detection, where a baseline detector is employed to generate the boxes and speeds of objects in each frame, and the detected boxes across frames are associated to form trajectories;
then a specific region-based network refines the boxes based on the sequence of object points and boxes.
Despite the gratifying success of these approaches, such two-phase pipelines for multi-frame object detection are somewhat sophisticated, requiring many extra hand-crafted components, e.g., IoU-based proposal matching, per-frame feature encoding, and trajectory-level feature propagation <cit.>.
Thus, there is a pressing need to construct a novel framework for multi-frame 3D object detection that not only yields accurate results but also operates in a fully end-to-end fashion.
The recent advancements in sequential modeling <cit.> and cross-modal fusion <cit.> reveal the remarkable capabilities of the Transformer architecture as a powerful framework for effectively modeling the information interaction within sequential or cross-modal data. The intrinsic self-attention module in Transformer plays a critical role in this success, as it enables the effective encoding of mutual relationships within the data.
Especially, DETR <cit.> employs a transformer-based architecture to directly predict the bounding boxes and categories of objects, which replaces traditional convolutional neural network (CNN)-based detectors <cit.> and achieves state-of-the-art performance <cit.>.
The Transformer architecture is well-suited for multi-frame 3D object detection. First, consecutive frames contain valuable temporal information about the motion and behavior of objects. Transformer excels at modeling sequential data by capturing long-range dependencies in both spatial and temporal dimensions, which is crucial for understanding object dynamics across multiple frames <cit.>.
Second, DETR-like detectors operate in a fully end-to-end manner, enabling direct optimization of the entire multi-frame 3D object detection pipeline. This end-to-end learning manner avoids separate modules or handcrafted components, such as extra feature encoding networks or post-processing steps.
Given the above factors, we propose STEMD, a novel end-to-end multi-frame 3D object detection framework based on the DETR-like paradigm in this paper.
And we enhance the model from three aspects specifically tailored for multi-frame 3D object detection, i.e., graph-based spatial-temporal modeling, improved query initialization, and an effective regularization term, as detailed subsequently.
1) It has been widely acknowledged that scene understanding tasks, such as pedestrian trajectory prediction <cit.>, heavily rely on the effective modeling of spatial-temporal relationships between individual objects and their surroundings.
For example, in crowded environments, pedestrians exhibit diverse interaction patterns, such as avoiding collisions, following groups, or adapting to dynamic obstacles.
However, in the context of multi-frame 3D object detection, we argue that the self-attention mechanism employed in the decoder of DETR fails to fully exploit the relations among queries. This limitation arises due to the dense application of self-attention across all queries.
To address this issue, we propose a graph-based attention network that leverages the complex spatial dependencies among objects. In our approach, we represent single objects (queries) as nodes and model their interactions as edges in a graph structure (as shown in Fig. <ref>). By allowing nodes to dynamically attend to neighboring nodes in a context-aware manner, our graph-based attention network captures intricate interactions. This attention mechanism facilitates the learning of various social behaviors, enabling our model to adapt to complex scenarios and make accurate predictions. Moreover, we introduce the graph-to-graph attention network to effectively model temporal dependencies.
This enables our model to make predictions in the current frame that remain consistent with object trajectories over time by attending to relevant past position information.
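A minimal single-head sketch of this idea follows (our simplification; the actual STGA-Net uses learned edge attention and a graph-to-graph module across frames, which are not reproduced here):

    import torch

    def knn_graph_attention(queries, centers, k=16):
        """Socially-aware attention over object queries restricted to a k-NN graph.

        queries : (N, C) decoder query embeddings, one node per candidate object
        centers : (N, 3) predicted box centers used to build the graph
        """
        dist = torch.cdist(centers, centers)               # (N, N) pairwise distances
        knn_idx = dist.topk(k, largest=False).indices      # (N, k) neighbour indices
        neighbours = queries[knn_idx]                       # (N, k, C)
        attn = torch.einsum("nc,nkc->nk", queries, neighbours) / queries.shape[-1] ** 0.5
        attn = attn.softmax(dim=-1)
        return queries + torch.einsum("nk,nkc->nc", attn, neighbours)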
2) In existing DETR-like models <cit.>, the encoder generates a set of region proposals, and the decoder progressively refines the bounding box predictions based on those queries initialized with these proposals.
However, it is challenging to obtain accurate results for these cases through layer-by-layer refinement in the decoder if the initial queries inferred from the encoder miss some corner cases.
Since these cases may be more easily detectable in preceding frames, we propose to initialize additional input queries for the decoder in the present frame using selected final predicted boxes from the preceding frame.
Consequently, this initialization yields a higher recall rate of queries with respect to ground truth boxes and provides a strong starting point for refining the bounding box predictions.
This strategy, namely Temporal Query Recollection (TQR), helps in handling object occlusions, sudden disappearance, and other challenging scenarios commonly encountered in point cloud sequences.
Besides, initializing the queries with the final predicted boxes from the previous frame promotes temporal consistency in graph-based learning.
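A sketch of the TQR strategy (the score threshold and the number of recollected queries below are illustrative assumptions):

    import torch

    def temporal_query_recollection(encoder_proposals, prev_boxes, prev_scores,
                                    top_k=100, score_thresh=0.3):
        """Augment the decoder's initial queries of the current frame with
        confident final predictions from the previous frame.

        encoder_proposals : (M, 7) boxes proposed by the encoder for the current frame
        prev_boxes        : (P, 7) final predicted boxes from the previous frame
        prev_scores       : (P,)   their classification scores
        """
        keep = prev_scores > score_thresh
        order = prev_scores[keep].argsort(descending=True)[:top_k]
        recalled = prev_boxes[keep][order]
        return torch.cat([encoder_proposals, recalled], dim=0)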
3)
Existing DETR-based detectors <cit.> utilize the one-to-one Hungarian Matching method, which assigns each ground truth box to the query with the highest IoU and treats other highly similar queries as negative samples.
But the current loss supervision for individual queries lacks consideration for surrounding queries, which poses difficulties in distinguishing between the best match and other highly similar queries. Similar queries are insufficiently suppressed and turn into redundant prediction boxes.
Furthermore, the presence of duplicate query boxes without Non-Maximum Suppression (NMS) can negatively impact detection accuracy.
To tackle these issues, we introduce an IoU regularization term that penalizes query boxes in close proximity to one another. This regularization encourages unmatched queries to differentiate their predicted bounding boxes from the best match, resulting in distinct refinements and less redundant prediction boxes.
We conduct extensive experiments on the prevailing Waymo dataset <cit.>.
Our novel approach surpasses the performance of previous state-of-the-art single-frame and multi-frame 3D object detectors by a significant margin, while incurring only a tiny additional computation cost.
To summarize,
the principal contributions of this research can be encapsulated as follows:
*
we present a novel DETR-like framework for multi-frame 3D object detection, namely STEMD, which captures the spatial-temporal dependencies in the sequence at both feature and query levels;
*
we propose to effectively model the socially-aware inter-object spatial interaction and complex temporal dependencies with a spatial-temporal graph attention network (STGA-Net), which represents queries as nodes in a graph;
* we propose TQR, a simple yet effective training strategy that enhances the initial query input of the decoder in the current frame using final predicted boxes from the previous frame; and
* we introduce an IoU regularization term to penalize query boxes with large overlaps, encouraging similar queries to output fewer redundant boxes as the ground-truth boxes in BEV generally do not overlap.
The remainder of the paper is organized as follows.
Sec. <ref> reviews existing works most related to this work.
In Sec. <ref>, we introduce the overall architecture of STEMD and elaborate on its key components.
In Sec. <ref>, we validate the effectiveness of our proposed method on the Waymo dataset and conduct ablation studies to analyze the effect of different components.
Finally, Sec. <ref> concludes this paper.
§ RELATED WORK
In this section, we mainly review existing works on single-frame 3D object detection, multi-frame 3D object detection, graph structure learning, and vision transformer, which are closely aligned with the core objectives of our work.
Single-frame 3D Object Detection.
Early research on single-frame 3D object detection can be classified into voxel-based and point-based approaches. Typically, voxel-based 3D detectors turn point clouds into grid-structure forms with fixed sizes and employ sparse convolution networks for feature extraction <cit.>. PointPillar <cit.> deployed a novel encoder that learns features on pillars (vertical columns) of the point clouds, where only 2D convolutional layers are used in the network.
Point-based 3D detectors <cit.> consume the raw 3D point clouds directly and extract highly semantic features through a series of downsampling and set abstraction layers following PointNet++ <cit.>. To preserve foreground points as much as possible, <cit.> and <cit.> optimized the downsampling strategies with semantic information.
There are also approaches that leverage a hybrid representation by integrating the multi-scale voxel-based features and point-based features containing accurate location information, and thus achieve a balance between detection accuracy and efficiency <cit.>. PDV <cit.> efficiently localizes voxel features with voxel point centroids, which are then aggregated through a density-aware RoI grid pooling module using kernel density estimation and self-attention with point density positional encoding. Besides, some works convert point clouds into range view representations and process them with more efficient 2D CNN <cit.>.
Multi-frame 3D Object Detection.
Existing state-of-the-art works have shown that, given a short point cloud sequence as input, simple concatenation of multi-frame point clouds can significantly outperform single-frame detection <cit.>. However, this strategy lacks explicit modeling of cross-frame relations and is less effective for fast-moving objects given a long point cloud sequence <cit.>.
Naturally, some early works applied LSTM or GRU to voxel-level or BEV-level feature maps across different frames for temporal modeling <cit.>.
Recently, some high-performance methods <cit.> adopt a two-phase framework for multi-frame 3D object detection, where a baseline detector is employed to generate boxes and speeds of objects in each frame, and the detected boxes across frames are associated to form a trajectory;
then a specific region-based network is used to refine the boxes based on the sequence of object points and boxes.
Despite the gratifying success of these approaches, such two-phase pipelines for multi-frame object detection are somewhat sophisticated, requiring many extra hand-crafted components, e.g., IoU-based proposal matching, per-frame feature encoding, and trajectory-level feature propagation <cit.>.
Graph Structure Learning.
Graph neural networks (GNNs) have emerged as powerful tools for learning intrinsic representations of nodes and edges in graph-structured data <cit.>. GNNs excel at capturing rich relationships among nodes and enabling comprehensive analysis of graph data. Graph convolution networks (GCNs) have revolutionized graph analysis by extending the conventional convolution operator from regular domains to handle arbitrary and unordered graph-structured data <cit.>. GCNs encompass spatial-based methods <cit.> and spectral-based approaches <cit.>, employing a recursive message-passing scheme to project graph data into a continuous and low-dimensional space <cit.>. Building on this success, Velickovic et al. <cit.> introduced graph attention networks, which incorporate attention mechanisms to selectively attend to neighboring nodes during message passing. GATs have demonstrated improved performance in tasks such as node classification, link prediction, and graph classification <cit.>. GNNs have also been proven effective in various computer vision tasks. For example, Xu et al. <cit.> proposed SGRN, which learns a spatial-aware sparse graph to leverage semantic and spatial layout relationships in object detection. Yu et al. <cit.> presented a spatiotemporal graph transformer framework for modeling social interactions and complex temporal dependencies in pedestrian trajectory prediction. Additionally, GCNs have achieved remarkable performance in action recognition by modeling human body skeletons as spatiotemporal graphs <cit.>. Inspired by these works, we propose to use a spatial-temporal graph attention network to mine the relationships between queries in the decoder at the current frame and previous frames. We leverage learned local interactions and temporal information to comprehensively understand the scene and make precise predictions, even in challenging scenarios.
Vision Transformer.
Transformer-based models have gained significant popularity in recent years across various deep learning tasks. Initially utilized in Natural Language Processing (NLP) <cit.>, these models have witnessed success in computer vision tasks as well <cit.>.
The intrinsic self-attention module in Transformer plays a critical role in this success, as it enables the effective encoding of mutual relationships within the data.
Especially, DETR presented a novel paradigm shift by formulating object detection as a direct set prediction problem <cit.>.
The core idea behind DETR lies in its ability to leverage the self-attention mechanism for capturing global context and modeling the relationships between objects in an image. Subsequently, Deformable DETR <cit.> enhanced DETR by incorporating a deformable attention module that focuses on a small set of salient key elements in the feature map.
Recently, some methods have also applied the transformer to 3D object detection tasks <cit.>.
The ability of transformers to capture long-range dependencies and rich temporal information has made them a natural fit for multi-frame 3D object detection. By considering the motion and behavior of objects across frames, transformers can provide valuable insights into the dynamics of 3D objects.
§ PROPOSED METHOD
§.§ Overview
As illustrated in Figure <ref>, architecturally, our framework follows the DETR-like paradigm and comprises several integral components, including a CNN-based backbone, an encoder, a decoder, and multiple FFN (Feed-Forward Network) prediction heads.
Furthermore, our framework incorporates convolutional GRU (ConvGRU) and spatial-temporal graph attention network, which are adopted to model feature-level and query-level spatial-temporal dependencies, respectively.
Specifically, our proposed model takes a long-term point cloud sequence of T frames as input, denoted as {I_1,..., I_T}.
To eliminate the influence of ego-motion, we leverage the LiDAR pose information to align the point cloud of previous frames.
Then, the point cloud I_t at each time step t is voxelized and forwarded to a sparse 3D convolution network to obtain the BEV features X_t. We further perform local self-attention with BoxAttention <cit.> in the encoder module, followed by a ConvGRU block that models spatial-temporal dependencies and produces the enhanced encoder features H_t.
After that, a class-agnostic FFN is applied to generate object proposals based on the enhanced encoder features H_t, and the top N_p scored proposals
are selected to initialize the queries for the decoder.
In the decoder layers, we perform self-attention between queries and cross-attention between queries and the enhanced encoder features H_t <cit.>. Here, the output of each decoder layer is then passed through a detection head to conduct iterative box refinement.
Particularly, we propose a spatial-temporal graph attention network to capture both the spatial relationship between queries in each frame and temporal dependencies between queries of adjacent frames. The enhanced graph embeddings are further passed to another detection head to obtain final predictions.
Besides,
to reduce the impact of hard cases missed by the encoder in its proposal predictions, we select the top N_res scored prediction boxes of the last frame to supplement these proposals as extra query input to the decoder in the current frame (see details in Sec. <ref>).
In what follows, we will introduce the key components in detail.
§.§ ConvGRU-based Feature Enhancement for Encoder
The BEV features extracted by the CNN backbone, along with corresponding positional embeddings, are sent to the encoder. This allows for the capture of local structures and contextual information in the BEV representation using a local self-attention mechanism, resulting in the generation of X^E_t. In addition to the standard encoder, we incorporate ConvGRU to capture both spatial and temporal information from sequential 2D BEV feature maps {X^E_t}_t=1^T, resulting in enhanced encoder features denoted as H_t.
The ConvGRU model consists of two main components: the convolutional component and the GRU component <cit.>. The convolutional component is able to extract spatial features from each input feature map, while the GRU component is responsible for modeling the temporal dependencies between the feature maps at different time steps. The output H_t of ConvGRU at each time step t is obtained by:
R_t = σ(Conv(H_t-1, U_r) + Conv(X_t, W_r) + b_r),
Z_t = σ(Conv(H_t-1, U_z) + Conv(X_t, W_z) + b_z),
H̃_t = tanh(Conv(R_t ⊙ H_t-1, U_h) + Conv(X_t, W_h) + b_h),
H_t = Z_t ⊙ H_t-1 + (1-Z_t) ⊙ H̃_t,
where Conv(·, ·) denotes the convolution operation, X_t is the input feature map at time t, σ(·) represents the sigmoid activation function, ⊙ denotes element-wise multiplication, R_t and Z_t are the reset gate and update gate vectors, respectively, and H̃_t is the candidate hidden state at time t.
In these equations, H_t-1 is the hidden state at time t-1, U_r, W_r, U_z, W_z, U_h, W_h represent the GRU weights, and b_r, b_z, b_h are the biases.
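A minimal PyTorch sketch of this update is shown below. The per-gate convolutions mirror the equations above, and the 3x3 kernel size matches the implementation details reported later; the module itself is illustrative rather than the exact released code.

```python
import torch
import torch.nn as nn

def _conv(channels, kernel_size=3):
    return nn.Conv2d(channels, channels, kernel_size,
                     padding=kernel_size // 2, bias=False)

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell over BEV feature maps (sketch of the equations above)."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.U_r, self.W_r = _conv(channels, kernel_size), _conv(channels, kernel_size)
        self.U_z, self.W_z = _conv(channels, kernel_size), _conv(channels, kernel_size)
        self.U_h, self.W_h = _conv(channels, kernel_size), _conv(channels, kernel_size)
        self.b_r = nn.Parameter(torch.zeros(channels, 1, 1))
        self.b_z = nn.Parameter(torch.zeros(channels, 1, 1))
        self.b_h = nn.Parameter(torch.zeros(channels, 1, 1))

    def forward(self, x_t, h_prev):
        # x_t, h_prev: (B, C, H, W) input BEV features and previous hidden state.
        r_t = torch.sigmoid(self.U_r(h_prev) + self.W_r(x_t) + self.b_r)        # reset gate R_t
        z_t = torch.sigmoid(self.U_z(h_prev) + self.W_z(x_t) + self.b_z)        # update gate Z_t
        h_cand = torch.tanh(self.U_h(r_t * h_prev) + self.W_h(x_t) + self.b_h)  # candidate state
        return z_t * h_prev + (1.0 - z_t) * h_cand                               # new hidden state H_t
```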
The enhanced feature H_t is then utilized to generate initial object proposals ℬ^E_t through a feed-forward network (FFN) head.
With the N_q object queries 𝒬_t initialized from boxes that include the selected top N_p scored proposals, we perform self-attention among the queries 𝒬_t and cross-attention between the queries 𝒬_t and the enhanced encoder features H_t to update the queries layer by layer <cit.>.
We implement the encoder and decoder following existing work <cit.>, and describe them in this section to keep the paper self-contained.
§.§ Spatial-Temporal Graph Attention Network
Modeling spatial-temporal relationships between individual objects and their surroundings helps to understand complex scenarios and social interactions, thus yielding accurate detection results in the current frame.
As illustrated in Fig. <ref>, we first introduce how to eliminate redundant bounding boxes output by the upstream decoder and derive representative queries for constructing graphs.
Next, we use the graph attention network to capture both spatial relation between queries in a single frame and the temporal dependencies between queries of adjacent frames.
§.§.§ Graph Node Selection
Redundant or overlapping bounding boxes predicted by the decoder can hinder the learning of graph structure topology and deteriorate the overall performance. To solve this problem, we propose an efficient method in this section, outlined in Algorithm <ref>, to eliminate redundant bounding boxes and retain the filtered results as nodes in downstream graph attention modules.
By doing so, we ensure that the selected bounding boxes effectively cover distinctive and representative regions of interest in the input data.
This approach enhances the performance of graph-based learning and facilitates a more comprehensive understanding of the underlying relationships and structures within the scene.
Let ℬ^D_t={b^D_t,i}_i=1^N_q, 𝒮^D_t={s^D_t,i}_i=1^N_q, and 𝒬^D_t={q^D_t,i}_i=1^N_q denote the predicted boxes, confidence scores, and query embeddings of the decoder at time t, respectively.
First, given a bounding box b^D_t,i, we define its neighboring set 𝒩_b^D_t,i, where IoU(b^D_t,i, b^D_t,j) > θ for each j∈𝒩_b^D_t,i.
These neighboring sets contain boxes that overlap significantly with the given box. Second, we select the neighboring box with the highest confidence score, denoted as b^D_t,m, where m is the index of that box.
Next, if the neighboring set is not empty, we update the confidence of b^D_t,i as follows:
s̃^D_t,i =
s^D_t,i, if s^D_t,i ≥ s^D_t,m,
s^D_t,i × (1 - IoU(b^D_t,i, b^D_t,m)), if s^D_t,i < s^D_t,m,
where we suppress the scores of boxes that have more confident boxes around them <cit.>.
Finally, with the confidence scores of all boxes updated, we select bounding boxes with top N_g scores.
The filtered bounding boxes ℬ̂^D_t={b^D_t,i}_i=1^N_g and the corresponding query embedding 𝒬̂^D_t={q^D_t,i}_i=1^N_g are sent to the subsequent spatial self-attention module and temporal cross-attention module for further processing.
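A vectorized sketch of this selection step is given below. The pairwise_bev_iou helper is a hypothetical stand-in for a rotated BEV IoU routine and the tensor shapes are assumptions; the code only illustrates the score-decay rule and the top-N_g selection.

```python
import torch

def select_graph_nodes(boxes, scores, queries, pairwise_bev_iou, n_g=800, theta=0.5):
    """Suppress redundant decoder outputs and keep N_g representative nodes (sketch).

    boxes: (N_q, 7), scores: (N_q,), queries: (N_q, C).
    pairwise_bev_iou: assumed helper returning the (N_q, N_q) rotated BEV IoU matrix.
    """
    iou = pairwise_bev_iou(boxes, boxes)
    iou.fill_diagonal_(0.0)
    neighbor = iou > theta                                    # overlapping-neighbor mask
    # Score of each box's most confident overlapping neighbor.
    neighbor_scores = torch.where(neighbor, scores.unsqueeze(0).expand_as(iou),
                                  torch.full_like(iou, -1.0))
    best_score, best_idx = neighbor_scores.max(dim=1)
    # Decay the score only when a more confident neighbor exists (score-update rule above).
    decay = 1.0 - iou.gather(1, best_idx.unsqueeze(1)).squeeze(1)
    suppressed = torch.where(neighbor.any(dim=1) & (best_score > scores),
                             scores * decay, scores)
    keep = suppressed.topk(min(n_g, scores.numel())).indices  # top-N_g representative nodes
    return boxes[keep], queries[keep], suppressed[keep]
```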
Discussion.
Unlike the NMS-series methods that rely on confidence ranking and are challenging to parallelize, our proposed method is highly parallelizable. Since each candidate box is only influenced by its neighboring bounding boxes, we can create N_q threads to process them concurrently. This parallel processing significantly improves the efficiency of our algorithm.
Furthermore, our approach differs from traditional NMS-series methods in terms of the objective.
The NMS post-processing improves the accuracy and efficiency of detectors by selecting the single best box and eliminating other duplicate bounding boxes, whereas our method aims to suppress redundant boxes and select representative ones from the candidate boxes as graph nodes; excessive inhibition may result in the loss of essential node information.
§.§.§ Spatial Self-Attention Module
Given the selected output of the decoder for each time, we construct a spatial graph where each node corresponds to a query embedding. We define the spatial graph as G_t^s={V_t^s,E_t^s}, where V_t^s is the set of nodes and E_t^s is the set of edges.
Each node v^s_t,i∈ V_t^s corresponds to output query embedding q^D_t,i of decoder.
For each pair of nodes (v^s_t,i, v^s_t,j) in V_t^s, we calculate the distance between the centers of the corresponding query boxes b^D_t,i and b^D_t,j.
If the distance between the two bounding boxes is below the defined threshold d_s, we add an edge between them to E_t^s. The edge indicates a spatial relationship or proximity between the corresponding query embeddings.
We use the graph attention network to capture spatial dependencies among the query embeddings in the spatial graph <cit.>.
For a node v^s_t,i, we calculate the similarity coefficient between its neighbors
and itself:
e_ij = f([W_1^sv^s_t,i || W_1^sv^s_t,j]),
where W_1^s is a learnable weight matrix, || denotes concatenation, f is a single-layer feedforward neural network that maps the concatenated high-dimensional features to a real number, and 𝒩_v^s_t,i is the set of neighboring nodes of v^s_t,i in the spatial graph.
Then we can further obtain the attention coefficients between each pair of nodes:
α_ij = exp(LeakyReLU(e_ij)) / ∑_k'∈𝒩_v^s_t,i exp(LeakyReLU(e_ik')),
where LeakyReLU(·) is a leaky rectified linear activation function.
The attention coefficients are used to compute a weighted sum of the feature vectors of neighboring nodes. Finally, the output of the spatial self-attention module is the sum of the original query embedding v^s_t,i and the computed feature vector:
ṽ^s_t,i = v^s_t,i + σ(∑_j ∈𝒩_v^s_t,iα_ijW_2^sv^s_t,j),
where W_2^s is a learnable weight matrix, σ is a non-linear function.
Overall, the spatial self-attention module models the interactions between the nodes, i.e., the query embeddings within the same timestamp.
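The following is a minimal single-head PyTorch sketch of this spatial attention. The residual non-linearity σ is taken to be ReLU and the adjacency mask is assumed to come from the center-distance threshold d_s; these choices, like the module name, are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialGraphAttention(nn.Module):
    """Single-head graph attention over the query embeddings of one frame (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.W1 = nn.Linear(dim, dim, bias=False)   # W_1^s
        self.W2 = nn.Linear(dim, dim, bias=False)   # W_2^s
        self.f = nn.Linear(2 * dim, 1, bias=False)  # the map f(.) to a real number

    def forward(self, nodes, adj):
        # nodes: (N_g, C) query embeddings; adj: (N_g, N_g) bool, True when the box
        # centers of nodes i and j are closer than the distance threshold d_s.
        h = self.W1(nodes)
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.f(pair).squeeze(-1))         # similarity coefficients e_ij
        e = e.masked_fill(~adj, float("-inf"))             # attend only to graph neighbors
        alpha = torch.softmax(e, dim=-1).nan_to_num()      # attention coefficients alpha_ij
        return nodes + torch.relu(alpha @ self.W2(nodes))  # residual update of v~^s_{t,i}
```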
§.§.§ Temporal Cross-Attention Module
To capture temporal dependencies among the output of the decoder for multiple timestamps, we utilize the graph-to-graph attention mechanism.
Specifically, we use a GAT to model the temporal relationship between the source graph G^u_t-1 at the (t-1)-th frame and the target graph G^u_t at the t-th frame.
Let V^u_t={v^u_t,i}_i=1^N_g be the set of nodes in the target graph, where each node v^u_t,i corresponds to the query embedding q^D_t,i. Similarly, we use V^u_t-1={v^u_t-1,i}_i=1^N_g to denote the set of nodes in the source graph.
The graph-to-graph cross-attention modules perform message passing from source graph to target graph. For the node v^u_t,i, we obtain its output by:
ṽ^u_t,i = v^u_t,i + σ(∑_j ∈𝒩_v^u_t,iβ_ijW_1^uv^u_t-1,j),
where W^u_1 is the learnable transformation weight matrix for the source graph, and 𝒩_v^u_t,i is the set of neighboring nodes of v^u_t,i in the source graph.
β_ij is the attention coefficient between the i-th node of the target graph and the j-th node of the source graph, computed as:
β_ij = exp(LeakyReLU( f([W_tgt^u v^u_t,i || W_src^u v^u_t-1,j]) )) / ∑_k'∈𝒩_v^u_t,i exp(LeakyReLU( f([W_tgt^u v^u_t,i || W_src^u v^u_t-1,k']) )),
where W_src^u and W_tgt^u are the learnable transformation weight matrices for source and target graphs respectively.
Finally, we obtain the graph embedding by taking the sum of the output of the spatial self-attention module and temporal cross-attention module.
This informative graph embedding is further passed to another prediction head, which we call the graph head, to achieve refined bounding boxes.
In this way, we capture both spatial and temporal dependencies among the decoder outputs across multiple frames, which helps improve the accuracy of 3D object detection.
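A companion sketch of the graph-to-graph cross-attention is given below, structured analogously to the spatial module; again, the single-head form, the ReLU non-linearity, and the d_u-based adjacency are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalCrossAttention(nn.Module):
    """Graph-to-graph attention: frame-t nodes attend to frame-(t-1) nodes (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.W_tgt = nn.Linear(dim, dim, bias=False)  # W_tgt^u
        self.W_src = nn.Linear(dim, dim, bias=False)  # W_src^u
        self.W1 = nn.Linear(dim, dim, bias=False)     # W_1^u (message transform)
        self.f = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, tgt_nodes, src_nodes, adj):
        # tgt_nodes: (N_g, C) frame-t queries; src_nodes: (N_g, C) frame-(t-1) queries;
        # adj[i, j]: True when source node j lies within d_u of target node i.
        q, k = self.W_tgt(tgt_nodes), self.W_src(src_nodes)
        n_t, n_s = q.size(0), k.size(0)
        pair = torch.cat([q.unsqueeze(1).expand(n_t, n_s, -1),
                          k.unsqueeze(0).expand(n_t, n_s, -1)], dim=-1)
        e = F.leaky_relu(self.f(pair).squeeze(-1))
        beta = torch.softmax(e.masked_fill(~adj, float("-inf")), dim=-1).nan_to_num()
        return tgt_nodes + torch.relu(beta @ self.W1(src_nodes))  # residual update of v~^u_{t,i}
```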
§.§ Temporal Query Recollection
Motivation. Current models like DETR <cit.> generate region proposals in the encoder, and the decoder progressively refines the bounding box predictions based on these queries initialized with the proposals. However, if the initial queries inferred from the encoder miss some corner cases, it is challenging to obtain accurate results for these cases through layer-by-layer refinement in the decoder. Although we enhance the BEV features learned by the encoder with a ConvGRU block, feature-level enhancement alone is not sufficient.
To address this limitation, we propose a training strategy called temporal query recollection to enhance the initial queries of the decoder. Specifically, we incorporate the final predictions from the last frame as an additional query input for the decoder in the current frame. The reason behind employing this strategy is that challenging cases that were missed by the encoder at the current frame might be comparatively easier to detect in the preceding frame.
Let ℬ^G_t-1={b^G_t-1,i}_i=1^N_g and 𝒮^G_t-1={s^G_t-1,i}_i=1^N_g denote the prediction boxes and scores from the graph head at last timestamp t-1.
When initializing the query input for the decoder, we select the top N_res scored boxes ℬ̂^G_t-1 to supplement the encoder predictions ℬ^E_t={b^E_t,i}_i=1^N_p.
The total number of initial query embeddings N_q for the decoder is N_q=N_p + N_res.
The initial query embeddings 𝒬^0_t for the decoder are derived as follows:
𝒬^0_t={q^0_t,i = PE(b^0_t,i, s^0_t,i), i=1,...,N_q},
where each initial query embedding q^0_t,i is encoded from the bounding box b^0_t,i and confidence s^0_t,i through the position encoding (PE).
Consequently, this initialization strategy improves the recall rate of initial queries with respect to the ground-truth boxes and provides a strong starting point for refining the bounding box predictions in the current frame. Additionally, initializing the queries with extra predicted boxes from the previous frame promotes temporal consistency in graph-based learning. The TQR strategy helps handle challenging scenarios such as object occlusions, sudden disappearances, and other common difficulties encountered in 3D object detection.
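A compact sketch of this query initialization follows; pos_encode stands in for the position-encoding function PE(·) above, and the default values of N_p and N_res follow the implementation details reported later.

```python
import torch

def recollect_queries(enc_boxes, enc_scores, prev_boxes, prev_scores,
                      pos_encode, n_p=1000, n_res=300):
    """Temporal Query Recollection (sketch): initialize decoder queries from the
    top-N_p encoder proposals of frame t and the top-N_res final predictions of
    frame t-1; `pos_encode` is an assumed stand-in for PE(.)."""
    top_p = enc_scores.topk(min(n_p, enc_scores.numel())).indices
    top_r = prev_scores.topk(min(n_res, prev_scores.numel())).indices
    boxes = torch.cat([enc_boxes[top_p], prev_boxes[top_r]], dim=0)
    scores = torch.cat([enc_scores[top_p], prev_scores[top_r]], dim=0)
    return pos_encode(boxes, scores)   # (N_p + N_res, C) initial query embeddings Q^0_t
```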
§.§ IoU Regularization Term
The current DETR-like detectors employ a one-to-one Hungarian Matching method, where only the query with the highest IoU is matched to each ground truth box. Other queries that are close to the ground truth box but have smaller overlaps are considered negative samples.
Thus, it is necessary to take the surrounding similar queries into consideration when determining if the query is the best match and whether suppression is required.
However, the current detection loss supervision for each query does not take into account the surrounding queries.
It poses a challenge for the network to distinguish between the matched query and other highly similar queries that are not the best match. Similar queries are insufficiently suppressed and turn into redundant prediction boxes.
Furthermore, the presence of duplicate query boxes without Non-Maximum Suppression (NMS) can degrade detection accuracy.
To address these issues, we propose the inclusion of an IoU regularization term to encourage diversity among the duplicate queries and discriminate similar object queries in local regions. This regularization term penalizes query boxes that are in close proximity to each other. By doing so, it encourages the unmatched queries to differentiate their predicted bounding boxes from the matched query during the refinement in the decoder, even if they are highly similar. It is also worth noting that in 3D object detection tasks, unlike 2D detection, the bounding boxes in the Bird's Eye View generally do not overlap (see Fig. <ref>). Therefore, the IoU regularization term does not affect the relationship between the matched queries as they do not overlap.
Specifically, we consider a specific FFN prediction head in our model that outputs a set of query boxes denoted as ℬ={b_i}_i=1^N'. To promote diversity among the query boxes, we introduce a regularization loss term, denoted as ℛ_b, which is added to the overall loss function during model optimization. This term is computed by summing the IoU between each pair of bounding boxes, weighted by their corresponding confidence scores:
ℛ_b = ∑_i=1^N'∑_j=1,j≠ i^N' s_i × IoU(b_i, b_j),
where s_i is the corresponding confidence score of bounding box b_i.
By incorporating this penalization, the model is encouraged to generate predictions with lower degrees of overlap, leading to increased diversity among the query boxes and improved distinguishability.
The introduction of a confidence score aims to place more emphasis on bounding boxes that are close to the ground truth object with a high confidence score but are not the best match. These boxes are pushed further away from the best match, while boxes with lower confidence scores are considered less important and assigned lower weights.
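A short sketch of this regularization term is given below; pairwise_bev_iou is again a hypothetical rotated BEV IoU helper, and the exact box parameterization is an assumption.

```python
import torch

def iou_regularization(boxes, scores, pairwise_bev_iou):
    """IoU regularization term R_b (sketch): penalize overlapping query boxes,
    weighted by the confidence of the box being pushed away.

    boxes: (N', 7), scores: (N',); pairwise_bev_iou is an assumed IoU helper."""
    iou = pairwise_bev_iou(boxes, boxes)               # (N', N')
    iou = iou - torch.diag_embed(torch.diagonal(iou))  # drop the i == j terms
    return (scores.unsqueeze(1) * iou).sum()           # sum_i sum_{j != i} s_i * IoU(b_i, b_j)
```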
§.§ Loss Function and Inference
Loss function. During training, we leverage the Hungarian algorithm to assign ground truths to the object queries in all FFN prediction heads in the encoder, decoder, and graph head.
Following <cit.>, we define the loss of each FFN prediction head as:
L = λ_cls L_cls + λ_h L_Huber + λ_giou L_GIoU + λ_rℛ_b,
where we adopt the focal loss L_cls for classification, and the Huber loss L_Huber and the 3D GIoU loss L_GIoU for box regression;
λ_cls, λ_h, λ_giou, and λ_r are hyper-parameters that balance the penalty terms.
Inference.
For online multi-frame 3D object detection, we do not repeat all steps for the previous frames when performing detection at the current frame.
Instead, we preserve useful variables during the detection of the last frame t-1.
Specifically, when conducting detection at the current time t, the hidden state H_t-1 is used in the ConvGRU module to perform feature-level temporal enhancement, and the query embeddings inferred by the decoder in the last frame t-1 are preserved to serve as nodes V^u_t-1 in the source graph G^u_t-1, which performs spatial-temporal message passing to the target graph G^u_t.
Therefore, the inference speed is not much slower than the single-frame detector.
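A minimal sketch of the per-sequence cache used during streaming inference is shown below; the attribute names and update hook are assumptions that simply mirror the variables described above.

```python
class StreamingState:
    """Per-sequence cache reused across frames during online inference (sketch)."""

    def __init__(self):
        self.h_prev = None        # ConvGRU hidden state H_{t-1}
        self.prev_queries = None  # graph-head query embeddings of frame t-1 (source-graph nodes)
        self.prev_boxes = None    # final predicted boxes of frame t-1 (reused by TQR)
        self.prev_scores = None

    def update(self, h_t, queries_t, boxes_t, scores_t):
        # Detach so the cached tensors do not keep the autograd graph alive.
        self.h_prev = h_t.detach()
        self.prev_queries = queries_t.detach()
        self.prev_boxes = boxes_t.detach()
        self.prev_scores = scores_t.detach()
```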
§ EXPERIMENTS
In this section, we describe the datasets and evaluation metrics in Sec. <ref>, and introduce the implementation details in Sec. <ref>. Then we compare our method with previous state-of-the-art methods in Sec. <ref>. Afterward, thorough ablative studies are conducted to investigate the effectiveness of essential components of our framework in Sec. <ref>. Finally, we also provide the run-time efficiency analysis for online 3D object detection scenarios in Sec <ref>.
§.§ Dataset and Evaluation Metric
The Waymo Open Dataset <cit.> is a comprehensive and diverse dataset specifically designed for 3D object detection in autonomous driving scenarios. It consists of a large collection of 798 sequences for training, 202 sequences for validation, and 150 sequences for testing, resulting in a total of 198,438 LiDAR frames. Each sequence provides a rich set of data, including LiDAR point clouds, multi-view camera images, and object annotations spanning a full 360-degree field of view.
Each sequence contains approximately 200 frames spanning 20 seconds. The Waymo dataset employs a 64-beam LiDAR with a capture frequency of 10Hz, resulting in around 180,000 points per frame.
To facilitate the evaluation of 3D object detectors, the objects in the Waymo Open Dataset are categorized into two difficulty levels: LEVEL_1, which represents objects with more than five observed LiDAR points, and LEVEL_2, which includes objects with 1-5 points. The performance of 3D object detectors is commonly assessed using the mean Average Precision (mAP) and mean Average Precision weighted by heading accuracy (mAPH) metrics.
For our experiments, we assess the detection results of objects in both LEVEL_L1 and LEVEL_L2 difficulty levels.
We leverage the whole training set for supervision, and the evaluation is performed on the complete validation set, consisting of around 40,000 frames, using the official evaluation tool provided by Waymo.
By leveraging the richness and diversity of the Waymo Open Dataset, we aim to evaluate the performance of our approach for 3D object detection in autonomous driving scenarios, demonstrating its effectiveness in handling the challenges posed by real-world environments.
§.§ Implementation Details
Network architectures.
In this part, we mainly elaborate on the architectural details of the proposed model.
First, we use average pooling to obtain the voxel-wise feature map by encoding the point cloud in each voxel.
Then, our approach utilizes a ResNet18 <cit.> 3D backbone, which replaces the 2D convolution modules with 3D counterparts.
We utilize the submanifold sparse convolution layers in all residual blocks except for the down-sampling layers, to reduce computational costs.
To enhance the bird's eye view features, we employ an FPN structure <cit.>. The 8x down-sampled BEV feature maps are then passed to a 3-layer encoder, where we incorporate BoxAttention <cit.>, a variant of Deformable Attention <cit.>, for self-attention operations.
In the ConvGRU module after the encoder, we use a learnable kernel with a size of 3x3 for all convolution layers.
The output of the ConvGRU module maintains the same channel configuration as the input BEV features. Leveraging the enhanced BEV features from the ConvGRU, we employ a class-agnostic FFN head to generate object proposals <cit.>.
In the decoder, we perform standard multi-head attention for self-attention and BoxAttention for cross-attention between queries and BEV feature maps. We set the hidden size for encoder and decoder to 256, and the number of attention heads to 8.
During training, we adopt a strategy called temporal query recollection, as described in Sec. <ref>. This strategy involves selecting the top 300 highest-scoring final predicted results from the previous frame to initialize the object queries of the decoder. These queries act as a supplement to the top 1000 scored proposals generated by the encoder in the current frame.
In the graph node selection, we set the number of selected bounding boxes N_g to 800 and IoU threshold θ to 0.5.
When it comes to the spatial-temporal graph attention network, we adopt the distance threshold d_s=2m and d_u=2m when determining the neighborhood of the node in the spatial self-attention module and temporal graph-to-graph attention network, respectively.
In each FFN prediction head, we apply two separate 3-layer MLPs as prediction branches for classification and regression, respectively.
For the loss function, the values of λ_cls, λ_h, and λ_giou are set to 1, 4, and 2, respectively. The value of λ_r is set to 1 for the prediction heads of the decoder and the graph head, and to 0 for the proposal head in the encoder.
Training details.
For the model input, we consider points whose coordinates lie within [-75.2m, 75.2m], [-75.2m, 75.2m], and [-2m, 4m] along the x, y, and z axes, respectively.
The point clouds are voxelized with a (0.1m, 0.1m, 0.15m) grid size.
The max number of non-empty voxels is set to 15000 during training and inference.
We adopt the Adam optimizer with β_1=0.9 and β_2=0.99. The learning rate is initialized to 0.003 and updated with the one-cycle scheduler <cit.>, with a weight decay of 0.01.
We train the model for a total of 12 epochs in an end-to-end manner. The implementation is based on the PyTorch framework.
We set the score threshold to 0.1 to filter low-quality predictions during inference.
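The training setup above can be summarized in a single illustrative dictionary; the values are those reported in this subsection, while the dictionary layout and key names are assumptions rather than the released configuration file.

```python
# Illustrative training configuration (values as reported above; layout assumed).
train_cfg = dict(
    point_cloud_range=[-75.2, -75.2, -2.0, 75.2, 75.2, 4.0],  # x/y/z limits in meters
    voxel_size=[0.1, 0.1, 0.15],
    max_voxels=15000,
    optimizer=dict(type="Adam", lr=0.003, betas=(0.9, 0.99), weight_decay=0.01),
    lr_schedule="one-cycle",
    epochs=12,
    score_threshold=0.1,   # filter low-quality predictions at inference
)
```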
§.§ Comparison with State-of-the-Art Methods
In our study, we conduct a comparison between STEMD, our proposed method, and the current state-of-the-art methods for 3D object detection. It can be observed in Table <ref> that approaches employing multiple frames generally outperformed those using a single frame as input.
When comparing our method to the previous state-of-the-art single-frame method, PV-RCNN++, our proposed STEMD achieved a significant improvement in the overall 3D mAPH by utilizing an 8-frame point cloud sequence as input. Specifically, we observed remarkable enhancements in the detection performance for vehicles, pedestrians, and cyclists, with respective improvements of 1.5% mAPH, 6.1% mAPH, and 6.9% mAPH on LEVEL_2.
Comparing our approach to previous single-frame detectors, the progress achieved by STEMD validates its capability to successfully leverage the spatiotemporal dependencies, aiding in the accurate estimation of objects that are challenging to detect in the setting of single-frame input.
Moreover, our method demonstrated superiority even when compared to the previous best-performing multi-frame approach, CenterFormer <cit.>. By employing our method, we were able to achieve a notable improvement of 4.8% mAPH on LEVEL_2 for the cyclist category.
Additionally, our method exhibited improved performance when using shorter input sequences. Notably, even with a 4-frame input, our approach outperformed both the current single-frame and multi-frame methods. Specifically, we observed superior performance compared to PV-RCNN++ and CenterFormer, with improvements of 3.1% and 1.7% mAPH on LEVEL_1.
Furthermore, it is important to highlight that our method is an end-to-end solution, distinguishing it from certain high-accuracy but two-phase methods <cit.>.
Overall, the results in Table <ref> highlight the efficacy of STEMD in effectively integrating temporal information from point cloud sequences, leading to enhanced detection accuracy compared to both single-frame and multi-frame methods.
§.§ Ablation Studies
We conducted ablative analyses to verify the effectiveness and characteristics of our processing pipeline. We use 4-frame point cloud input unless otherwise specified in this part. We report the APH on LEVEL_2 for the vehicle, pedestrian, and cyclist categories and the mAPH metric for comparison.
Effect of key components.
In Table <ref>, we investigate the effect of each added component in our method on the setting of 4-frame sequence input.
* As can be seen from the 1^st row and 2^nd row in Table <ref>, after changing the processing of multi-frame point cloud input from direct concatenation to operating on it as a sequence, the result improves and reaches 70.1% LEVEL_2 mAPH. This indicates that compared with the simple concatenation, the feature-level enhancement by the ConvGRU block benefits the network in capturing the spatial-temporal dependencies. This also motivates us to take full advantage of the point cloud sequence and capture the spatial-temporal dependencies at not only the feature level but also the query level.
* When the spatial-temporal graph attention network is applied, we further improve the results for vehicle, pedestrian, and cyclist by 0.3%, 2.28%, and 3.58% LEVEL_2 mAPH, respectively.
The improvement on the pedestrian and cyclist categories is more significant.
We attribute this improvement to the interaction patterns exploited by the graph attention network, which effectively captures valuable information regarding the social behaviors exhibited by objects. This mechanism proves influential in complex scenarios, displaying a pronounced impact on categories such as pedestrians and cyclists.
* As reported in the fourth row of Table <ref>, the strategy of temporal query recollection yields an improvement of 0.72% mAPH, indicating the effectiveness of supplementing the queries initialized with boxes predicted by the encoder of the current frame with the final predictions from the last frame.
* As shown in the last row of Table <ref>, by introducing the IoU regularization term in the loss function of the FFN prediction head, the performance increases by 0.88% LEVEL_2 mAPH.
Effect of our multi-frame design.
Table <ref> shows the improvement of our multi-frame design over the baseline <cit.> that uses a simple point cloud concatenation strategy under different frame-length settings.
We can observe that the baseline does not perform well in modeling the long-term relations among frames as the performance slightly deteriorates when the number of concatenated frames increases from 8 to 16. In contrast, our proposed STEMD achieves consistent improvements when the frame length increases. Specifically, our proposed STEMD achieves 1.35%, 4.13%, 4.62%, and 4.91% higher mAPH than the multi-frame baseline using point concatenation with 2, 4, 8, and 16 frames. The results demonstrate the effectiveness of our method in leveraging long-term temporal dependencies across multiple frames.
Effect of the spatial-temporal graph attention network.
To showcase the effectiveness of the spatial-temporal graph attention network, we conducted additional experiments where we replaced it with conventional multi-head self-attention modules <cit.>. The results, as presented in Table <ref>, demonstrate that the normal multi-head self-attention mechanism performs 0.23%, 2.38%, and 3.09% worse in LEVEL_2 mAPH for the vehicle, pedestrian, and cyclist categories, respectively, compared to the proposed spatial-temporal graph attention network. This indicates that the complex spatial-temporal dependencies between objects are more effectively modeled by the graph structure, which dynamically and contextually captures the relationships, as opposed to the self-attention mechanism that is densely applied across all queries.
Effect of the IoU regularization term.
As shown in Fig. <ref>, we conducted a qualitative comparison between the results of the full model of STEMD (shown on the right side) and the baseline setting without the IoU regularization term (left side). Upon analysis, we observed that the predictions of the baseline model exhibited highly-overlapped bounding boxes, which are highlighted with red circles. In contrast, the full model produced a single matched box for each object.
This observation suggests that the introduction of the regularization term in the full model helps to push other close but unmatched queries away from the best-matched queries during the refinement in the decoder, and consequently leads to less-overlapped box predictions.
Hyper-parameters of graph node selection.
In our approach, we incorporate graph node selection to address the issue of overlapping bounding boxes and associated queries generated by the decoder. This process results in a more streamlined graph structure for downstream graph-based learning. However, it is crucial to determine the appropriate number of filtered bounding boxes, denoted as N_g, as an excessively large value can introduce redundant bounding boxes that impede the effectiveness of graph-based learning. Conversely, setting N_g too small may result in a lower recall rate of queries with respect to the ground-truth boxes. By analyzing Table <ref>, we observe a significant decrease in the recall rate at the 0.5 IoU threshold, dropping from 96.95% to 96.56% when N_g is reduced from 800 to 500. Thus, we find that setting N_g to 800 strikes a balance between recall and redundancy within the graph, thereby leading to the best detection accuracy of 73.75% mAPH.
Hyper-parameters of temporal query recollection.
As shown in Table <ref>, with extra N_res queries initialized with boxes predicted at the last frame, both the recall of all queries input to the decoder with respect to the ground-truth bounding boxes and the overall mAPH improve. Specifically, the recall increases from 68.89% to 69.75% and from 88.80% to 89.31% at 0.7 and 0.5 IoU thresholds, respectively. However, when we further increase N_res from 300 to 500 or 800, neither the recall nor the detection accuracy improves significantly. Therefore, we can infer that the recollected queries from the last frame effectively supplement the encoder output at the current timestamp as extra input queries for the decoder. The value of N_res matters because too many recollected queries bring no benefit and add unnecessary computational load.
Conditional analysis.
To better understand where our approach brings improvements, Fig. <ref> compares STEMD with the baseline that simply concatenates the multi-frame point clouds, broken down by object speed.
We can observe that the baseline model, based on the concatenation of 4 frames, achieves marginal improvement compared with the single-frame detector, and the performance on the slow category even deteriorates. Meanwhile, our proposed STEMD achieves consistent improvements throughout all categories. Especially, our method outperforms the single-frame baseline by 5.3% and 5.9% APH improvement on medium and fast objects, the boost is more significant than that on stationary and slow categories. This observation demonstrates our proposed STEMD effectively captures the temporal information about the motion and behavior of fast-moving objects across different frames.
Effect of the radius used to determine neighbors.
Table <ref> presents the results of an ablation study that investigates the effect of different radii for identifying neighbors of nodes in the spatial-temporal graph attention network.
We can observe that the performance varies with different radii values. For example, for the vehicle class, the accuracy increases from 70.36% (radius 1) to 70.83% (radius 2) and then slightly decreases for radius 3 (70.47%) and radius 4 (69.69%). Similarly, for pedestrians and cyclists, the highest performance is achieved at radius 2, followed by a slight decrease for larger radii.
Based on these findings, we can conclude that the choice of radius has an impact on the performance of the STGA-Net. It appears that a radius setting of 2 is more effective in learning spatial-temporal relations and yields the best overall results across all classes.
§.§ Visualization and Analysis
We present a qualitative analysis highlighting the advantages of our proposed method, STEMD, for the multi-frame 3D object detection task. Specifically, we compare STEMD with the baseline approach <cit.> using concatenated points as input, as shown in Fig. <ref>.
In the first row of Fig. <ref>, we encounter a scene featuring a row of parked vehicles in the upper left area. While the baseline method falls short in detecting these distant and highly-occluded vehicles, our STEMD model successfully localizes them with precision (highlighted by the red circles). Additionally, STEMD significantly reduces the number of false positives and redundant boxes (indicated by the orange circles).
Similarly, in the second scene collected on a city street with densely parked vehicles surrounded by buildings, STEMD demonstrates its superiority in detecting distant and highly-occluded objects. This serves as a testament to the robustness of our method in challenging environments.
Furthermore, in the third row of Fig. <ref>, STEMD accurately detects a vehicle on the right side, despite being highly occluded by other vehicles. This precision in estimation can be attributed to the learned relevant past location information and the coherence of object trajectories captured by our method.
Overall, these examples effectively illustrate the advantages of our proposed STEMD method in precisely locating challenging objects, such as distant and highly-occluded ones, while also reducing the number of false-positive boxes. These improvements are achieved by leveraging spatial-temporal dependencies and learning from past location information.
§.§ Efficiency Analysis
Memory Efficiency.
As mentioned in Sec. <ref>, our method only additionally employs a small set of features derived in the last frame, which means that the additional memory required for storing these features is low. Besides, while some methods directly deal with concatenated multi-frame point clouds, the proposed method only takes the point cloud of the current frame as input. This reduces the number of points that need to be processed for detection, resulting in lower memory requirements during inference.
Quantitatively, our method consumes 4170M of GPU memory during inference, which is only slightly higher than the single-frame method's usage of 3470M when the batch size is set to 1. By emphasizing these specific measurements, we can confidently conclude that the STEMD method remains memory efficient.
Computation Efficiency.
In the online multi-frame 3D object detection using STEMD, frames are processed sequentially in a streaming fashion. To avoid redundant computations, a small set of features computed in the last frame is reused. This results in a minor additional computational overhead, primarily from the STGA-Net and ConvGRU modules. Despite the additional computations, STEMD introduces only a small computational overhead. On a single NVIDIA A100 GPU, it achieves a latency of 88 ms, assuming that the features from preceding frames are already available in memory. Compared to the single-frame baseline latency of 70 ms, our method incurs a modest 25% increase in latency while significantly improving detection accuracy. It is worth noting that given the typical LiDAR scan frequency of 10 Hz, our network remains capable of operating in real-time scenarios within its 100 ms computation budget.
§.§ Further Discussions
While our experiments have demonstrated the significant benefits of leveraging spatial-temporal information for 3D object detection, it is important to acknowledge the limitations of our current approach.
One key limitation is that our method falls short in terms of performance compared to some state-of-the-art two-phase methods <cit.>.
It is worth highlighting that the DETR-like paradigm has already surpassed CNN-based detectors and achieved state-of-the-art performance in the 2D object detection task. However, in the context of 3D object detection, our proposed model, STEMD, should be considered as a baseline for future, more powerful DETR-like models. There is still tremendous potential to be explored, as our framework has not incorporated several advanced techniques that have shown promising results in 2D DETR-like models <cit.>.
Another limitation of our method is the lack of obvious improvement in detecting objects that are moving at high speeds. We attribute this result to the fact that we did not explicitly consider the motion of fast-moving objects in our spatial-temporal modeling. Even though the time interval between each frame is only 100 ms, objects moving at high speeds can cover significant distances within this short time period. As a result, STGA-Net, our proposed model, fails to effectively capture the spatial-temporal dependencies between these fast-moving objects and their neighboring objects. In future research, it would be valuable to explore methods that can better handle the detection of fast-moving objects. Techniques such as motion estimation could be incorporated into the spatial-temporal modeling process to capture the dynamics of these objects more accurately. Additionally, investigating the use of higher frame rates or adaptive frame sampling strategies may help alleviate the issue of fast-moving objects being poorly represented in the temporal context.
In all, while our study has demonstrated the potential of spatial-temporal information for 3D object detection, there is still work to be done to improve the performance of our approach. By drawing inspiration from recent advancements in 2D DETR and addressing the challenges posed by fast-moving objects, future iterations of DETR-like models hold great promise for achieving even higher accuracy and robustness in multi-frame 3D object detection tasks.
§ CONCLUSION
In this paper, we have presented STEMD, a novel end-to-end multi-frame 3D object detection framework based on the DETR-like paradigm. Our approach effectively models inter-object spatial interaction and complex temporal dependencies by introducing the spatial-temporal graph attention network, representing queries as nodes in a graph. Additionally, we improve the detection process by incorporating the detection results from the previous frame to enhance the query input of the decoder.
Furthermore, based on the characteristics of 3D detection tasks, we incorporate the IoU regularization term in the loss function to reduce redundancy in bounding box predictions.
Through extensive experiments conducted on the Waymo dataset, our framework has demonstrated superior performance in 3D object detection tasks.
The results validate the effectiveness of our proposed approach, showcasing the potential of modeling spatial-temporal relationships between objects in this domain.
§ NOTATIONS
The notations used in this paper are summarized in Table <ref>.
Exact Solution for the Rank-One Structured Singular Value with Repeated Complex Full-Block Uncertainty
Talha Mushtaq ([email protected]) and Maziar S. Hemati ([email protected]), Aerospace Engineering and Mechanics, University of Minnesota, Minneapolis, MN 55455, USA
Peter Seiler ([email protected]), Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
arXiv:2307.02069 (physics.flu-dyn, cs.SY, eess.SY, math.OC), 5 July 2023
In this note, we present an exact solution for the structured singular value (SSV) of rank-one complex matrices with repeated complex full-block uncertainty.
A key step in the proof is the use of Von Neumman's trace inequality.
Previous works provided exact solutions for rank-one SSV when the uncertainty contains repeated (real or complex) scalars and/or non-repeated complex full-block uncertainties.
Our result with repeated complex full-blocks contains, as special cases, the previous results for repeated complex scalars and/or non-repeated complex full-block uncertainties.
The repeated complex full-block uncertainty has recently gained attention in the context of incompressible fluid flows.
Specifically, it has been used to analyze the effect of the convective nonlinearity in the incompressible Navier-Stokes equation (NSE).
SSV analysis with repeated full-block uncertainty has led to an improved understanding of the underlying flow physics.
We demonstrate our method on a turbulent channel flow model as an example.
§ INTRODUCTION
This paper focuses on the computation of the structured singular value (SSV) given a feedback-interconnection between a rank-one complex matrix and a block-structured uncertainty.
The rank-one SSV is well-studied with some prominent results given in <cit.>.
A standard SSV upper-bound can be formulated as a convex optimization <cit.>.
This SSV upper-bound is equal to the true SSV for rank-one matrices when the uncertainty consists of repeated (real or complex) scalar blocks and non-repeated, complex full-blocks.
This yields an explicit expression for the rank-one SSV with these uncertainty structures (see Theorem 1 and 2 in <cit.>).
Similar results are given in <cit.>.
Our paper builds on this previous literature by providing an explicit solution to the rank-one SSV problem with repeated complex full-block uncertainty.
This explicit solution is the main result and is stated as Theorem <ref> in the paper.
A key step in the proof is the use of Von Neumann's trace inequality <cit.>.
The repeated complex full-block uncertainty structure contains, as special cases, repeated complex scalar blocks and non-repeated, complex full-blocks.
Hence our explicit solution encompasses prior results for these cases.
The repeated complex full-block uncertainty structure has physical relevance in systems such as fluid flows.
Specifically, this uncertainty structure has recently been used to provide consistent modeling of the nonlinear dynamics <cit.>.
In Section 4, we demonstrate our rank-one solution to analyze a turbulent channel flow model <cit.>.
Our explicit rank-one solution is compared against existing SSV upper and lower bound algorithms <cit.> that were developed for general (not-necessarily rank-one) systems.
§ BACKGROUND: STRUCTURED SINGULAR VALUE
Consider the standard SSV problem for square[We present the square complex matrix case to improve readability of the paper and minimize notation. The general rectangular complex matrix case can be handled by introducing some additional notation.] complex matrices M ∈ℂ^m × m given by the function μ : ℂ^m × m→ℝ as
<cit.>
μ(M) = ( min‖Δ‖: det(I_m - MΔ) = 0 )^-1
where Δ∈ℂ^m × m is the structured uncertainty, I_m is an m × m identity, det(·) is the determinant and ‖·‖ is the induced 2-norm, which is equal to the maximum singular value.
Then, μ(M) is the SSV of M.
For the trivial case where M = 0, the minimization in (<ref>) has no feasible point and μ(0) = 0.
In this paper, we will focus on the case where M is rank-one, i.e., M = u v^H for some u, v ∈ℂ^m.
Then, using the matrix determinant lemma, the minimization problem in (<ref>) can be equivalently written as <cit.>
μ (M) = ( min‖Δ‖: v^HΔ u = 1 )^-1.
Hence, for any structured Δ, the determinant constraint in (<ref>) can be converted into an equivalent scalar constraint when M is rank-one.
This scalar constraint is a special case of affine parameter variation problem for polynomials with perturbed coefficients <cit.>.
It is well-known that an explicit solution for μ(M) can be computed for rank-one systems when Δ∈Δ, where Δ is a set of structured uncertainties commonly found in the SSV literature <cit.>:
Δ := {Δ = diag(δ^r_1 I_k_1, …, δ^r_m_r I_k_m_r, δ^c_1 I_k_m_r + 1, …, δ^c_m_c I_k_m_c + m_r, Δ_1, …, Δ_m_C) : δ^r ∈ℝ, δ^c ∈ℂ, Δ_i ∈ℂ^k_m_c + m_r + i× k_m_c + m_r + i}⊂ℂ^m × m.
We will present a solution for (<ref>) when Δ∈Δ, where Δ is the set of repeated complex full-block uncertainties defined as
Δ := {Δ = diag(I_r_1⊗Δ_1, …, I_r_n⊗Δ_n) : Δ_i ∈ℂ^k_i × k_i}⊂ℂ^m × m.
This set is comprised of n blocks such that the i^th block, i.e., I_r_i⊗Δ_i, corresponds to a full k_i × k_i matrix repeated r_i times.
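To make the structure of Δ concrete, the following minimal Python/NumPy sketch (not part of the paper; the block sizes and repetition counts are hypothetical) assembles a member of Δ from its full blocks Δ_i using Kronecker products:

import numpy as np
from scipy.linalg import block_diag

def assemble_uncertainty(blocks, repeats):
    # Delta = diag(I_{r_1} kron Delta_1, ..., I_{r_n} kron Delta_n)
    parts = [np.kron(np.eye(r), D) for D, r in zip(blocks, repeats)]
    return block_diag(*parts)

rng = np.random.default_rng(0)
# Hypothetical structure: full blocks of sizes 2x2 and 3x3, repeated 3 and 2 times (m = 12).
blocks = [rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)) for k in (2, 3)]
Delta = assemble_uncertainty(blocks, repeats=(3, 2))
print(Delta.shape)  # (12, 12)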
Any uncertainty Δ∈Δ reduces to the complex uncertainties commonly found in the SSV literature:
* When k_i = 1 then Δ_i is a scalar, denoted as δ_i.
In this case, the i^th block in (<ref>) corresponds to a repeated complex scalar, i.e., I_r_i⊗Δ_i = δ_i I_r_i,
* When r_i = 1 then the i^th block in (<ref>) corresponds to a (non-repeated) complex full-block, i.e., I_r_i⊗Δ_i = Δ_i.
Explicit rank-one solutions of μ(M) for these special cases are well-known <cit.>.
However, the current SSV literature does not present an explicit rank-one solution of μ(M) for the repeated complex full-block case, i.e., for any Δ∈Δ, which is a more general set of complex uncertainties.
These uncertainty structures have physical importance in engineering systems such as fluid flows <cit.>, where they have been exploited to provide physically consistent approximations of the convective nonlinearity in the Navier-Stokes equations (NSE).
Therefore, in the next section, we will present an explicit rank-one solution of μ(M) for any Δ∈Δ.
It is important to note that the solutions presented in this paper are not limited to fluid problems and can be used for any other system that has Δ∈Δ.
§ REPEATED COMPLEX FULL-BLOCK UNCERTAINTY (MAIN RESULT)
Consider the problem in (<ref>) for any Δ∈Δ.
We can partition u, v ∈ℂ^m compatibly with the n blocks of Δ∈Δ:
u = [ u_1^H … u_n^H ]^H, v = [ v_1^H … v_n^H ]^H
where u_i, v_i ∈ℂ^k_i r_i.
Note that m = ∑_i = 1^n r_i k_i.
Since, the i^th block is I_r_i⊗Δ_i, we can further partition u_i, v_i based on the repeated structure:
u_i = [ u_i,1^H … u_i,r_i^H ]^H, v_i = [ v_i,1^H … v_i,r_i^H ]^H
where each u_i,j, v_i,j∈ℂ^k_i.
Based on this partitioning, define the following matrices (for i = 1, …, n):
Z_i = ∑_j = 1^r_i u_i,j v_i,j^H∈ℂ^k_i × k_i.
Let M = uv^H be given with u,v ∈ℂ^m and define Z_i as in (<ref>). Then, for any Δ∈Δ, we have
det( I_m - M Δ) = 1 - ∑_i = 1^n Tr( Z_i Δ_i ).
Using the matrix determinant lemma, we have
det(I_m - MΔ) = 1 - v^HΔ u.
Now, using the block-structure of Δ∈Δ and the corresponding partitioning of (u,v ), we can rewrite (<ref>) as
1 - v^HΔ u = 1 - ∑_i = 1^n v_i^H(I_r_i⊗Δ_i ) u_i
= 1 - ∑_i = 1^n [ ∑_j = 1^r_i v_i,j^HΔ_i u_i,j].
Note that the term in brackets is a scalar and hence equal to its trace.
Thus, use the cyclic property of the trace as
∑_j = 1^r_iTr[v_i,j^HΔ_i u_i,j] = ∑_j = 1^r_iTr[u_i,j v_i,j^HΔ_i ]
= Tr[ Z_i Δ_i].
Combine (<ref>), (<ref>) and (<ref>) to obtain the stated result.
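As a sanity check, the identity in the lemma can be verified numerically. The following sketch (assuming Python/NumPy and a hypothetical block structure) compares det(I_m - MΔ) with 1 - ∑_i Tr(Z_i Δ_i) for random data:

import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(1)
sizes, repeats = (2, 3), (3, 2)          # hypothetical k_i and r_i
m = sum(k * r for k, r in zip(sizes, repeats))
u = rng.standard_normal(m) + 1j * rng.standard_normal(m)
v = rng.standard_normal(m) + 1j * rng.standard_normal(m)
M = np.outer(u, v.conj())                # rank-one M = u v^H

blocks = [rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)) for k in sizes]
Delta = block_diag(*[np.kron(np.eye(r), D) for D, r in zip(blocks, repeats)])

rhs, offset = 1.0 + 0j, 0
for (k, r), D in zip(zip(sizes, repeats), blocks):
    Z = np.zeros((k, k), dtype=complex)
    for j in range(r):
        Z += np.outer(u[offset + j*k: offset + (j+1)*k],
                      v[offset + j*k: offset + (j+1)*k].conj())
    rhs -= np.trace(Z @ D)
    offset += k * r

lhs = np.linalg.det(np.eye(m) - M @ Delta)
print(np.isclose(lhs, rhs))              # expected: True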
Then, for any Δ∈Δ, the problem in (<ref>) can be equivalently written as
μ (M) = ( min‖Δ‖_2: ∑_i = 1^r_1 v_i^HΔ_1 u_i + … + ∑_i = r_1 + … + r_m_C - 1 + 1^r_1 + … + r_m_C v_i^HΔ_m_C u_i = 1)^-1
where Δ = diag(Δ_1, …, Δ_m_C) and u_i, v_i ∈ℂ^m_i are partitioned vectors for i = 1, …, (r_1 + … + r_m_C).
Now, using the scalar constraint in (<ref>), we can compute Δ by exploiting the properties of trace in the following series of steps:
Step 1. Apply trace on both sides of the equation:
Tr(∑_i = 1^r_1 v_i^HΔ_1 u_i + … + ∑_i = r_1 + … + r_m_C - 1 + 1^r_1 + … + r_m_C v_i^HΔ_m_C u_i) = 1.
Step 2. Use the cyclic and distributive property of trace:
Tr([∑_i = 1^r_1 u_i v_i^H] Δ_1 + … + [∑_i = r_1 + … + r_m_C - 1 + 1^r_1 + … + r_m_C u_i v_i^H] Δ_m_C) = 1.
Step 3. Collect the terms:
Tr(diag(∑_i = 1^r_1 u_i v_i^H, …, ∑_i = r_1 + … + r_m_C - 1 + 1^r_1 + … + r_m_C u_i v_i^H) diag(Δ_1, …, Δ_m_C)) = Tr(Z Δ) = 1.
Step 4. Finally, compute esvd(Z) = U Σ̂ V^H.
Then, Δ = 1/c V U^H and consequently, c = Tr(Σ̂)
to satisfy Tr(Z Δ) = 1.
Here, esvd(·) is the “economy-sized" singular value decomposition of a matrix, thus, U and V are matrices containing the orthonormal column bases such that U^H U = V^H V = I and Σ̂ is a diagonal matrix containing non-zero singular values.
Given the structure of Z, Tr(Σ̂) = ∑_i=1^m_CTr(Σ'_i), where each Σ'_i is a diagonal matrix containing non-zero singular values of each of the matrices ∑_i = 1^r_1 u_i v_i^H, …, ∑_i = r_1 + … + r_m_C - 1 + 1^r_1 + … + r_m_C u_i v_i^H.
Thus, the constructed Δ∈Δ satisfies
‖Δ‖ = 1/Tr(Σ̂) = 1/∑_i=1^m_CTr(Σ'_i).
Now, we must show that ‖Δ‖ = 1/Tr(Σ̂) is the minimum of the optimization problem in (<ref>) over all Δ∈Δ.
Lemma <ref> is used to provide an explicit solution for rank-one SSV with repeated complex full-blocks.
This is stated next as Theorem <ref>.
Let M = u v^H be given with u,v ∈ℂ^m and define Z_i as in (<ref>).
Then,
μ(M) = ∑_i = 1^n ∑_j = 1^k_iσ_j( Z_i ),
where σ_j( Z_i ) is the j^th singular value of Z_i.
Define c = ∑_i = 1^n ∑_j = 1^k_iσ_j( Z_i ) to simplify notation.
The proof consists of 2 directions: (i) μ(M) ≥ c and (ii) μ(M) ≤ c.
(i) μ(M) ≥ c: Let Z_i = U_i Σ_i V_i^H be the singular value decomposition (SVD) of Z_i.
Note that Σ_i = diag(σ_1(Z_i), …, σ_k_i(Z_i)).
Then, define Δ∈Δ with the blocks Δ_i = 1/c V_i U_i^H (i=1,…, n).
Thus, by Lemma <ref>, we have
det(I_m - M Δ) = 1 - ∑_i = 1^n Tr[Z_i Δ_i ].
Now, substitute the SVD of Z_i in (<ref>) and use the cyclic property of trace:
det(I - M Δ) = 1 - ∑_i = 1^n Tr[Σ_i V_i^HΔ_i U_i ]
= 1 - 1/c∑_i = 1^n Tr[Σ_i ] = 0.
Hence this Δ causes singularity and ‖Δ‖_2 = 1/c.
Thus, the minimum ‖Δ‖ in (<ref>) must satisfy ‖Δ‖≤1/c and consequently, μ(M) ≥ c.
(ii) μ(M) ≤ c: Let Δ∈Δ be given with ‖Δ‖ < 1/c.
Von Neumann's trace inequality <cit.> gives:
| Tr[Z_i Δ_i ] | ≤∑_j = 1^k_iσ_j(Z_i) σ_j(Δ_i)
where | · | is the absolute value.
Note that ‖Δ‖ < 1/c implies that each block satisfies the same bound: σ_j(Δ_i) < 1/c.
Hence, (<ref>) implies
| Tr[ Z_i Δ_i ] | < 1/c∑_j = 1^k_iσ_j(Z_i).
Next, using Lemma <ref> and the inequality in (<ref>), we get
det(I_m - M Δ) = 1 - ∑_i = 1^n Tr[ Z_i Δ_i]
> 1 - 1/c∑_i = 1^n [∑_j = 1^k_iσ_j(Z_i) ] = 0.
Hence, any Δ∈Δ with ‖Δ‖ < 1/c cannot cause (I_m - M Δ) to be singular.
Thus, the minimum ‖Δ‖ in (<ref>) must satisfy ‖Δ‖≥1/c and consequently, μ(M) ≤ c.
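A minimal numerical sketch of the formula in Theorem <ref>, assuming Python/NumPy (the helper below and its block-structure arguments are illustrative, not taken from the paper):

import numpy as np

def mu_rank_one(u, v, sizes, repeats):
    # Exact rank-one SSV for M = u v^H under repeated complex full-block uncertainty:
    # mu(M) = sum_i sum_j sigma_j(Z_i), with Z_i = sum_j u_{i,j} v_{i,j}^H.
    mu, offset = 0.0, 0
    for k, r in zip(sizes, repeats):
        Z = np.zeros((k, k), dtype=complex)
        for j in range(r):
            Z += np.outer(u[offset + j*k: offset + (j+1)*k],
                          v[offset + j*k: offset + (j+1)*k].conj())
        mu += np.linalg.svd(Z, compute_uv=False).sum()
        offset += k * r
    return mu

For instance, with the hypothetical structure sizes=(2, 3) and repeats=(3, 2) used in the earlier sketch, mu_rank_one(u, v, (2, 3), (3, 2)) returns the exact SSV without computing iterative upper or lower bounds.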
For the special cases r_i = 1 and k_i = 1, the solution (<ref>) yields μ(M) = ∑_i = 1^n‖u_i‖_2 ‖v_i‖_2 and μ(M) = ∑_i = 1^n |v_i^H u_i|, respectively.
These special cases correspond to solutions presented in previous works for non-repeated, complex full-block and repeated complex scalar uncertainties, respectively <cit.>.
The solution (<ref>) generalizes to the complex uncertainties given in Δ as the following:
* When r_i = 1 for Δ∈Δ.
Then, each Z_i = u_i v_i^H in (<ref>) is a rank-one matrix.
Thus, the SVD of Z_i is Z_i = (u_i/‖u_i‖_2) ‖u_i‖_2‖v_i‖_2 (v^H_i/‖v_i‖_2), where u_i/‖u_i‖_2 and v_i/‖v_i‖_2 are the left and right orthonormal column bases, and Σ_i = ‖u_i‖_2‖v_i‖_2 is the singular value of Z_i.
So, μ(M) can be computed as
μ(M) = ∑_i = 1^n ∑_j = 1^k_iσ_j( Z_i ) = ∑_i = 1^n Tr(Σ_i)
= ∑_i = 1^n ‖u_i‖_2 ‖v_i‖_2
which is the solution presented in <cit.> for (non-repeated) complex full-blocks.
* When k_i = 1 for Δ∈Δ.
Then, each
Z_i = v_i^H u_i ∈ℂ in (<ref>) is a scalar.
Thus, the SVD of Z_i is Z_i = (v_i^H u_i/|v_i^H u_i|) |v_i^H u_i|, where the left and right orthonormal column bases are v_i^H u_i/|v_i^H u_i| and 1, and Σ_i = |v_i^H u_i| is the singular value of Z_i.
So, μ(M) can be computed as
μ(M) = ∑_i = 1^n ∑_j = 1^k_iσ_j( Z_i ) = ∑_i = 1^n Tr(Σ_i)
= ∑_i = 1^n |v_i^H u_i|
which is the solution presented in <cit.> for repeated complex scalars.
§ RESULTS
In this section, we demonstrate our SSV solution method for repeated complex full-blocks using a rank-one approximation of the turbulent channel flow model.
As validation, we will compare our solutions against general upper and lower-bound algorithms that have been developed for (not necessarily rank-one) systems with repeated complex full-block uncertainties.
The upper and lower-bounds are computed using Algorithm 1 (Upper-Bounds) and Algorithm 3 (Lower-Bounds) in <cit.>, which are based on Method of Centers <cit.> and Power-Iteration <cit.>, respectively.
Generally, these algorithms can be used for higher rank problems (see for example <cit.> and <cit.>).
Additionally, we will compare the computational times between each of the methods to demonstrate the computational scaling of the rank-one SSV solution.
§.§ Example
The spatially-discretized turbulent channel flow model described in <cit.> has the following higher-order dynamical equation:
E(κ_x, κ_z) ϕ̇(y) = A(Re, κ_x, κ_z) ϕ(y) + B(κ_x, κ_z) f(y)
ζ(y) = C(κ_x, κ_z) ϕ(y)
f(y) = Δζ(y)
where Re is the Reynolds number, κ_x and κ_z are the streamwise (x) and spanwise (z) direction wavenumbers resulting from the discretization, and the wall-normal direction is given by y.
Here, the states ϕ(y) ∈ℂ^4N and outputs ζ(y) ∈ℂ^9N are given by the following:
ϕ(y) = [u(y)^T,v(y)^T,w(y)^T,p(y)^T]^T,
ζ(y) = [(∇ u(y))^T,(∇ v(y))^T,(∇ w(y))^T]^T
where u(y) ∈ℂ^N, v(y) ∈ℂ^N, w(y) ∈ℂ^N and p(y) ∈ℂ^N are streamwise, wall-normal and spanwise velocities, and pressure, respectively.
Also, N is the number of collocation points in y to evaluate the system, ∇∈ℂ^3N × N is the discrete gradient operator and E(κ_x, κ_z) ∈ℂ^4N × 4N, A(Re, κ_x, κ_z) ∈ℂ^4N × 4N, B(κ_x, κ_z) ∈ℂ^4N × 3N and C(κ_x, κ_z) ∈ℂ^9N × 4N are the matrix operators.
Readers are referred to the work in <cit.> for details on the construction of matrix operators.
It is important to note that Δ for this system has a repeated complex full-block structure that results from the approximate modeling of the quadratic convective nonlinearity as,
f(y) = [[ -u_ξ^T 0 0; 0 -u_ξ^T 0; 0 0 -u_ξ^T; ]] [[ ∇ u; ∇ v; ∇ w; ]] = (I_3 ⊗ -u_ξ^T) ζ(y)
where f(y) ∈ℂ^3N is the forcing signal and u_ξ∈ℂ^3N × N is the velocity gain matrix.
Thus, the last row of equations in (<ref>) describes the nonlinear forcing with Δ = I_3 ⊗ -u_ξ^T as the uncertainty matrix.
Further details are given in <cit.> about the Δ modeling.
The input-output map of the system in (<ref>) is given by,
H(y;Re, ω,κ_x,κ_z) = C(iω E - A)^-1B,
where ω is the temporal frequency.
H(y;Re, ω,κ_x,κ_z) in (<ref>) is, in general, not a rank-one matrix. However, for demonstration of our method, we will approximate H(y;Re, ω,κ_x,κ_z) as a rank-one input-output operator at each of the temporal frequencies ω for a fixed Re, κ_x and κ_z—as is commonly done for such analyses <cit.>:
M_ω_i = σ_i a_1_i b_1_i^H, i = 1, …,N_ω
where N_ω is the total number of frequency points, σ_i ∈ℝ_≥ 0 is the maximum singular value of H at the i^th frequency point, and a_1_i∈ℂ^9N and b_1_i∈ℂ^3N are the left and right singular vectors associated with σ_i, respectively.
Then, the rank-one SSV is given by μ_max = max_i μ(M_ω_i), where μ(M_ω_i) is computed using (<ref>).
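A sketch of this frequency sweep, assuming Python/NumPy, the square-matrix setting of Section 2, and the mu_rank_one helper from the previous sketch (the list of frequency responses H_list is a hypothetical input; the rectangular channel-flow operator requires the extra bookkeeping mentioned in the earlier footnote):

import numpy as np

def mu_max_over_frequencies(H_list, sizes, repeats):
    # Rank-one approximation M_omega = sigma_1 a_1 b_1^H at each frequency,
    # then the explicit rank-one SSV formula, and the maximum over frequencies.
    mu_vals = []
    for H in H_list:
        U, S, Vh = np.linalg.svd(H, full_matrices=False)
        u = S[0] * U[:, 0]          # absorb sigma_1 into the left vector
        v = Vh[0, :].conj()         # so that M = u v^H
        mu_vals.append(mu_rank_one(u, v, sizes, repeats))
    return max(mu_vals)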
§.§ Numerical Implementation
We will compute μ_max on an N_κ× N_κ× N_ω grid of space and temporal frequencies.
The spatial frequencies (wavenumbers) κ_x and κ_z are both defined on a log-spaced grid of N_κ = 50 points in the interval [10^-1.45, 10^2.55].
This grid is denoted G_κ. The temporal frequency ω is defined on a grid G_ω:={ c_p G_κ}, where c_p is the wave speed, i.e., speed of the moving base flow (see <cit.> for details).
Wave speeds are chosen as c_p ∈{5, 10, 15, 18, 22 } resulting in N_ω = 250 points in the temporal frequency grid.
Additionally, we will fix Re = 180 and N = 60 for all computations and use MATLAB's command to loop over temporal frequencies.
§.§ Discussion
We can see in figure <ref> that the μ_max values are qualitatively and quantitatively similar (within 5%) to the upper-bounds of μ_max obtained from Algorithm 1 in <cit.>.
In fact, the μ_max values are essentially identical to the lower-bound values of μ_max (not shown here), i.e., the values match to within 1%.
Thus, the algorithms converge to the optimal solutions obtained from our method.
Furthermore, computing μ_max is relatively fast as compared to obtaining its bounds (see figure <ref>).
Each point on the plot in figure <ref> represents the average[The CPU times are averaged over 10 data-points. We used an ASUS ROG M15 laptop with Intel 2.6 GHz i7-10750H CPU with 6 cores, 16 GB RAM, and an RTX 2070 Max-Q GPU for run time computations.] CPU time for a single data-point (ω,κ_x,κ_z) at each of the state dimensions.
All computational times include CPU time for SVD of H to obtain a rank-one approximation.
From the plot in figure <ref>, the upper-bound and lower-bound solutions have a time complexity of 𝒪(N^2.83) and 𝒪(N^1.525), respectively.
Meanwhile, computing μ_max from our method has a time complexity of 𝒪(N^1.28).
§ CONCLUSION
This work presents a method which gives an explicit solution of SSV for rank-one systems when Δ∈Δ.
Additionally, the solution obtained from this method generalizes to the repeated complex scalar and/or non-repeated complex full-block uncertainties, which yields SSV solutions found in some of the previous rank-one studies <cit.>.
In future work, we would like to explore similar arguments to the ones presented here for rank-one systems to compute SSV for higher-order systems, especially when Δ∈Δ.
This work presents an exact solution of SSV for rank-one complex matrices with repeated, complex full-block uncertainties.
The solution obtained from this method generalizes previous exact solutions for the repeated complex scalar and/or non-repeated complex full-block uncertainties <cit.>.
We illustrated the proposed method on a turbulent channel flow model.
In future work, we would like to explore similar arguments to the ones presented here for rank-one complex matrices to compute SSV for general (not necessarily rank-one) complex matrices, especially when Δ∈Δ.
§ ACKNOWLEDGEMENTS
This material is based upon work supported by the ARO under grant number W911NF-20-1-0156.
MSH acknowledges support from the AFOSR under award number FA 9550-19-1-0034,
the NSF under grant
number CBET-1943988 and ONR under award number N000140-22-1-2029.
|
http://arxiv.org/abs/2307.01187v1
|
20230703175244
|
SAMAug: Point Prompt Augmentation for Segment Anything Model
|
[
"Haixing Dai",
"Chong Ma",
"Zhengliang Liu",
"Yiwei Li",
"Peng Shu",
"Xiaozheng Wei",
"Lin Zhao",
"Zihao Wu",
"Dajiang Zhu",
"Wei Liu",
"Quanzheng Li",
"Tianming Liu",
"Xiang Li"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
This paper introduces SAMAug, a novel visual point augmentation method for the Segment Anything Model (SAM) that enhances interactive image segmentation performance. SAMAug generates augmented point prompts to provide more information to SAM. From the initial point prompt, SAM produces the initial mask, which is then fed into our proposed SAMAug to generate augmented point prompts. By incorporating these extra points, SAM can generate augmented segmentation masks based on the augmented point prompts and the initial prompt, resulting in improved segmentation performance. We evaluate four point augmentation techniques: random selection, maximum difference entropy, maximum distance, and a saliency model. Experiments on the COCO, Fundus, and Chest X-ray datasets demonstrate that SAMAug can boost SAM's segmentation results, especially using the maximum distance and saliency model methods. SAMAug underscores the potential of visual prompt engineering to advance interactive computer vision models.
Segment Anything Model, Visual Prompt, Prompt Augmentation, Foundation Models.
SAMAug: Point Prompt Augmentation for Segment Anything Model
Haixing Dai†1,
Chong Ma†2,
Zhengliang Liu†1,
Yiwei Li1,
Peng Shu1,
Xiaozheng Wei2
Lin Zhao1,
Zihao Wu1,
Dajiang Zhu4,
Wei Liu5,
Quanzheng Li3,
Tianming Liu1,
and Xiang Li3
1School of Computing, University of Georgia, Athens, GA, USA
2School of Automation, Northwestern Polytechnical University, Xi’an, China
3Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.
4Department of Computer Science and Engineering, University of Texas at Arlington, TX, USA
5Department of Radiation Oncology, Mayo Clinic, USA
† These authors contributed equally to this paper.
Corresponding author: Xiang Li (email: [email protected]).
August 1, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Progress in large language models <cit.> is inspiring a significant focus on the development of foundational models in computer vision <cit.>. Among these, the Segment Anything Model (SAM) <cit.> stands out as a novel interactive model specifically designed for image segmentation tasks and subsequent downstream applications.
SAM represents a paradigm shift towards more flexible, versatile segmentation models. The SAM model ushers in a new approach to image segmentation by allowing interactive, point-based user inputs to guide the segmentation process. Although this strategy presents a remarkable shift from more traditional image segmentation methods, it also poses challenges in terms of segmentation accuracy and quality due to the inherent limitation of single-point inputs. The precision and richness of the input point can significantly influence the quality of the resulting segmentation.
To address this constraint, we propose SAMAug, a novel visual point augmentation method for generating additional segmentation masks using SAM. SAMAug prompts SAM to produce new masks by providing initial points and augmented points selected using one of several proposed point augmentation techniques. By incorporating these additional points, SAM generates augmented segmentation masks that expand upon the initial mask. We evaluate four point augmentation approaches: random selection, maximum difference entropy, maximum distance, and a saliency model.
Through extensive experiments on the COCO, Fundus, and COVID datasets, we demonstrate that SAMAug can improve SAM's performance, especially when using the maximum distance and saliency model point selection approaches. Our results showcase the potential of using visual prompts and pretrained models for data augmentation. SAMAug represents an important step towards prompt-based augmentation methods for computer vision that can reduce data requirements and improve model performance.
This study makes the following key contributions:
* We develop a novel visual point augmentation framework, SAMAug, for generating additional point prompts without extra manual operations for SAM.
* We propose a visual prompt augmentation theory based on the invariance in prompt selection.
* We tested four techniques for augmentation with experiments on three different datasets and identified the most effective augmentation techniques (maximum distance and saliency).
Through this research, we aim to contribute to the ongoing efforts to refine and enhance interactive image segmentation models in the era of large vision models.
§ RELATED WORK
§.§ Segment Anything Model
The task of image segmentation typically involves identifying which pixels in an image belong to a specific object, thereby enabling applications ranging from photo editing to scientific image analysis. Recently, the introduction of the Segment Anything Model (SAM) has revolutionized the approach to image segmentation. SAM represents a significant departure from traditional segmentation models. It is a general, promptable model designed to adapt to specific tasks, much like the prompting systems used in natural language processing models.
Segmentation models have traditionally been categorized into two broad classes <cit.>: interactive segmentation, which required user input to iteratively refine a mask, and automatic segmentation, which required a significant amount of manually annotated objects for training. The latter approach enabled the segmentation of specific object categories defined in advance, such as cats or chairs.
SAM <cit.> unifies these two classes of approaches. It is a single model capable of performing both interactive and automatic segmentation. Its interface is designed to handle a wide array of segmentation tasks, facilitated by an appropriate prompt for the model. One of SAM's most distinguishing features is its training on an unprecedentedly large dataset of over 1 billion masks, collected as part of the Segment Anything project. This diverse, high-quality dataset enables SAM to generalize to new types of objects and images beyond what it observed during training.
The Segment Anything project also introduced the Segment Anything 1-Billion mask dataset (SA-1B) <cit.> , which represents the largest ever segmentation dataset. The dataset was developed iteratively, employing SAM to interactively annotate images and then using the newly annotated data to update SAM, thereby improving both the model and the dataset. SAM represents a paradigm shift in image segmentation, moving from task-specific models towards a more flexible, generalizable model that reduces the need for specialized knowledge and resources.
§.§ Research Work on SAM Model
SAM is a very powerful foundation model for image segmentation that can achieve zero-shot transfer, allowing users to reach segmentation goals on a variety of images without additional training. Therefore, many studies have tried to apply SAM to different types of images.
In medical imaging, segmentation is difficult because the distinction between foreground and background is often unclear, the image resolution is not always high enough, and the boundaries of target objects are blurred. Conventional segmentation models often require a large amount of task-specific labeling and training to achieve good results. Several works have verified the performance of SAM on medical image datasets <cit.>, <cit.>. The conclusion is that SAM segments certain tissues and organs very well and its overall accuracy is good, but it tends to fail when the segmentation targets are small, dense, or winding. Experiments also show that the segmentation quality of SAM can be improved by adjusting the prompts <cit.>, <cit.>. Therefore, exploring prompt tuning may be a solution to the problems SAM faces in medical image segmentation.
SAM can also fail in other domains. In agriculture, crop segmentation sometimes confuses crops with soil, and animal segmentation often produces masks that do not include the entire body of the animal <cit.>. In remote sensing, SAM can segment objects with regular shapes, but it fails to identify smaller or indistinct targets. Therefore, at the current stage, SAM is not yet able to segment all objects, and plenty of work is needed to optimize and improve its performance.
§.§ Prompt Augmentation for Large Foundational Models
Prompt-based learning is a strategic approach in the realm of machine learning, targeting the extraction of valuable insights from large pre-trained models <cit.>. This technique revolves around optimizing either a sequence of tokens (discrete prompts) <cit.> or a sequence of vectors (continuous prompts) <cit.>, depending on the application. The primary advantage of prompt-based learning lies in its efficiency: it allows researchers and practitioners to effectively exploit the potential of large models without the need for exhaustive fine-tuning, thereby saving substantial computational resources.
It is possible and potentially beneficial to augment prompts <cit.>. This strategy, distinct from data augmentation, involves the generation and optimization of prompts rather than data, thereby further harnessing the capabilities of these models. The rationale behind this approach is to direct the model's focus and improve its performance in specific tasks, making it a promising direction for enhancing large-scale models in various domains.
AutomateCoT <cit.> is an NLP method that automatically augments and selects prompts to enhance the reasoning ability of large language models. It tackles the problem of manually writing chain-of-thought exemplars, which requires significant human effort. AutomateCoT generates pseudo-chains for each input question and then prunes incorrect ones based on the correctness of the predicted answers. It finally selects an optimal combination of exemplars using a variance-reduced policy gradient strategy.
The proposed SAMAug shares a similar motivation with AutomateCoT. However, SAMAug focuses on the computer vision domain by generating visual prompts for SAM, a large vision model, to produce augmented segmentation masks. In contrast, AutomateCoT generates additional textual prompts for language models to solve NLP tasks such as arithmetic and symbolic reasoning. Despite the differences in modality and tasks, both SAMAug and AutomateCoT demonstrate the value of harnessing model capabilities through refined prompt design.
§.§ Visual Prompt
Prompts are widely applied in natural language processing (NLP) in the form of additional textual content. Pre-trained models leverage prompts to enhance performance on a variety of tasks, thereby achieving efficacy comparable to that of full-parameter fine-tuning.
Inspired by the impressive success of prompts in NLP, researchers have explored applying visual prompts in computer vision. Visual prompts come in various forms, including key points, bounding boxes, segmentation masks, etc. For example, VPT <cit.> introduces a small number of learnable parameters into the input space while keeping the entire pre-trained Transformer backbone frozen. By training only these parameters, which act as visual prompts, VPT achieves better performance on diverse visual tasks. Carefully designing prompts and feeding them into various pre-trained models is known as prompt engineering, which not only boosts performance efficiently but also helps overcome many difficulties. Convpass <cit.> combines a vision transformer (ViT) with convolutional bypass prompts to mitigate computational cost. ViPT <cit.> applies prompt tuning to inject prior knowledge into the baseline model so that it can learn more information and structure from limited large-scale data.
SAM is a typical large-scale computer vision model that can benefit from visual prompts, which enhance the model's ability to understand objects and components in images. By employing trainable prompts, SAM has been used to segment chickens in agricultural settings <cit.>. Recent research also applies SAM with abundant visual point or box prompts to abdominal organ segmentation <cit.>, a common challenge in the medical domain. In summary, visual prompts enhance the ability of pre-trained models to analyze visual data and have become a powerful mechanism for many vision models across a variety of tasks.
§.§ Sampling Methods
Sampling has been a fundamental problem in statistics <cit.> and deep learning <cit.>, and sampling methods are classified into two main categories: probability sampling and non-probability sampling. Since each sample has an equal chance of being selected, probability sampling is generally used and is statistically more likely to yield a sample that is representative of the population. Probability sampling is further divided into four subtypes: (1) simple random sampling, (2) stratified sampling, (3) cluster sampling, and (4) systematic sampling. Specifically, simple random sampling is the most commonly employed technique; it gathers a random selection from the entire population, with each unit having an equal chance of selection. The other techniques, namely stratified, systematic, and cluster sampling, involve dividing the population into subsets or clusters based on the attributes of the samples, followed by sampling from these subsets. These sampling strategies have significantly improved the understanding of the overall population.
Within the field of computer vision, a single pixel or a small region of an image can be regarded as a sample unit. Sampling strategies that are based on the properties of these sample units are widely employed in various topics, including image alignment <cit.>, image segmentation <cit.>, and saliency prediction <cit.>. Specifically, a common approach for image alignment involves the initial division of multiple images into distinct subsets based on pixel attributes, followed by the alignment of key points that have been sampled from different images. Similarly, the attributes of individual pixels can be directly sampled for binary classification which can be used to determine whether a given pixel belongs to the foreground or background category. This method is generally used for image segmentation. By sampling pixels based on the importance of the semantic information they contain, salient information in an image can be identified, forming the basis of saliency detection. Furthermore, pixels that exhibit high levels of saliency in an image are closely associated with their semantic information, and thus saliency is often considered an essential criterion when selecting similar sample points in various applications.
§ METHODOLOGY
§.§ SAM Framework and Premise for Point Prompt Augmentation
The basic framework of SAM is shown in Figure <ref>. First, SAM uses ViT <cit.> to encode the incoming image and the visual prompts. Then the encoded image and prompts are passed on to the mask decoder to predict the segmentation masks. The decoder employs prompt-based self-attention and cross-attention, allowing attention to flow both from the encoded prompt to the image and vice versa to update the encoded image and prompt features. If the prompts are point(s) or bounding box(es), they will be represented as positional encodings by SAM, where the positional information of the prompt will be converted into a 256-dimensional vector and a flag indicating foreground/background. The positional encoding performs sinusoidal mapping for input coordinates, so that the mapped vector can be used to train coordinate-based MLPs in a transformer-based structure <cit.>. If the prompts are texts, SAM will use the CLIP model <cit.> to encode the prompt. If the prompts are dense, such as masks, they will be directly convolved with the image embeddings and
summed element-wise.
The mechanism of the prompting process of SAM makes it potentially sensitive to the location and number of point prompts, as demonstrated in the right panel of Figure <ref>. Firstly, the bi-directional cross-attention module within the decoding path heavily relies on the coordinates of the point prompt(s) represented by the positional encoding(s). As the image embeddings will also be updated accordingly, point prompts from different locations, even with similar semantic contexts, will possibly lead to differences in the final segmentation masks. Secondly, as SAM is trained for performing general segmentation rather than any specific tasks, it cannot accurately deal with (whether suppressing or enhancing) segmentation boundaries, especially when the prompt information is limited. Thirdly, as pointed out in the SAM documentation <cit.>, a single prompt, such as only one point prompt, will cause the segmentation ambiguity issue, where the prompt can correspond to multiple valid masks and the SAM model cannot differentiate which mask the prompt is actually referring to. While the SAM model has adopted an ambiguity-resolving module to generate multiple segmentation masks and rank them based on confidence scores, using multiple prompts will certainly address the issue.
Thus, in this work, we propose the point prompt augmentation scheme based on the premise that: 1) There exists invariance in the point prompt selection process by the human user, where the selected point prompt is from only one of the many possible coordinates of the user's prior knowledge about the image. Specifically, similar to the rotation- or shift-invariance which is expected in a classic image processing setting <cit.>, we also expect SAM to produce the same segmentation results based on the manifestation of our target in the form of prompts, regardless of where exactly the point prompt is located. 2) As experimental results have demonstrated that SAM cannot achieve such invariance, we will need to perform prompt augmentations to guide the model to better understand our target for the segmentation. The most intuitive approach is to automatically generate additional points sampled from the initial segmentation masks by SAM (i.e., segmentation results using a single human-provided point prompt). The sampling-from-mask approach essentially regards the initial segmentation mask as a trustworthy yet potentially incomplete result compared with our target, and aims to improve the results by leveraging the prompt selection invariance and adding the extra point prompts. These automatically-generated point prompts can be sampled via specific strategies, which are described below.
§.§ Point Prompt Augmentation by Random Sampling
In the random selection method, we aim to add one point to the initial mask. To accomplish this, we randomly select one point from the available candidate points. These candidate points are determined based on the initial mask, which represents the current state of the mask.
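A minimal sketch of the random selection step, assuming Python/NumPy and a binary mask array; the (x, y) coordinate ordering follows SAM's point-prompt convention, which is an assumption here:

import numpy as np

def random_point_from_mask(mask, rng=None):
    # Sample one augmented point uniformly from the foreground of a binary mask.
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(mask)
    idx = rng.integers(len(xs))
    return int(xs[idx]), int(ys[idx])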
§.§ Point Prompt Augmentation by Max Entropy Criterion
In the max entropy point method, we aim to select a point that maximizes the difference in entropy with respect to the initial point. To calculate the entropy, we use a 9x9 grid centered at each candidate point within the initial mask. The entropy of each candidate point is computed based on the distribution of pixel intensities within this grid. The point with the maximum difference in entropy, compared to the initial point, is chosen for addition to the mask.
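A sketch of one possible implementation of this criterion, assuming Python/NumPy, a grayscale image with intensities in [0, 255], and a 16-bin histogram for the patch entropy (the bin count is an assumption, not specified in the text):

import numpy as np

def patch_entropy(gray, x, y, half=4, bins=16):
    # Shannon entropy of the intensity histogram in a (2*half+1) x (2*half+1) patch.
    patch = gray[max(0, y - half): y + half + 1, max(0, x - half): x + half + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def max_entropy_point(gray, mask, init_xy):
    # Candidate inside the mask whose patch entropy differs most from the initial point's.
    e0 = patch_entropy(gray, *init_xy)
    ys, xs = np.nonzero(mask)
    diffs = [abs(patch_entropy(gray, x, y) - e0) for x, y in zip(xs, ys)]
    i = int(np.argmax(diffs))
    return int(xs[i]), int(ys[i])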
§.§ Point Prompt Augmentation by Max Distance Criterion
In the distance-based method, we search for a point that is sufficiently far from the initial point. The distance between points is measured using a suitable metric, such as Euclidean distance. Using this metric, we calculate the distance between each candidate point and the initial point within the mask. The point that maximizes this distance, while also satisfying certain criteria or constraints, is selected for inclusion in the mask. In many cases, better segmentation results require providing SAM with enough point prompts so that the decoder can capture more complete morphological features. This process is analogous to image augmentation in segmentation tasks, where the amount of training data is increased to improve performance, which is why our method is called SAMAug.
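A sketch of this criterion, assuming Python/NumPy and Euclidean distance; it selects the in-mask point farthest from the initial prompt, consistent with the maximum distance naming:

import numpy as np

def max_distance_point(mask, init_xy):
    # Foreground point of the mask with the largest Euclidean distance from the initial prompt.
    ys, xs = np.nonzero(mask)
    d2 = (xs - init_xy[0]) ** 2 + (ys - init_xy[1]) ** 2
    i = int(np.argmax(d2))
    return int(xs[i]), int(ys[i])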
§.§ Point Prompt Augmentation by Saliency Map
Visual Saliency Transformer (VST) <cit.> is implemented for salient object detection (SOD), which helps select the second point from the SAM masks. This transformer-based model receives images and extracts a saliency map and a boundary map for objects that are visually prominent. In this study, we only focus on VST for RGB-SOD. The input SAM mask is expanded by 10 pixels in each direction, or it remains constant if a direction reaches the edge of the image. This process generates the new input image for VST. Based on the output saliency map from VST, the second point is selected randomly within the range of the detected object. Then, SAM produces a saliency augmentation result with the help of a dual-point prompt.
§.§ Dataset
§.§.§ COCO Dataset
The COCO dataset <cit.> is notable for its size and diversity. It contains more than 200,000 images and over 80 object categories, including common objects like people, animals, vehicles, and household items. The dataset is designed to capture objects in realistic contexts, making it suitable for training and evaluating models that need to understand objects in complex scenes.
In addition to object labels, the COCO dataset also includes pixel-level segmentation masks for object instances, which makes it useful for tasks like instance segmentation. Moreover, each image in the dataset is accompanied by multiple human-generated captions, enabling research on image captioning and language understanding.
§.§.§ Fundus Dataset
Fundus dataset <cit.> refers to a collection of images of the human fundus, which is the interior surface of the eye opposite the lens. These images are typically obtained through a procedure called fundus photography, where a specialized camera captures detailed images of the retina, blood vessels, and other structures within the eye.
Fundus datasets are often used in medical research and computer vision applications for tasks such as retinal disease diagnosis, automated screening, and image analysis. These datasets may include images from various sources, such as healthy individuals or patients with specific eye conditions or diseases like diabetic retinopathy, macular degeneration, or glaucoma.
The availability of large-scale fundus datasets has facilitated the development and evaluation of machine learning and deep learning models for automatic detection, classification, and segmentation of retinal abnormalities. These models aim to assist healthcare professionals in the early detection and management of eye diseases, potentially improving patient outcomes and reducing the burden on healthcare systems.
§.§.§ Chest X-Ray Dataset
The COVID CXR dataset <cit.> is a carefully selected compilation of chest X-ray images, meticulously gathered for COVID-19 research and analysis. This dataset comprises a significant number of chest X-ray images acquired from individuals suspected or confirmed to have COVID-19. The dataset showcases a diverse array of images, illustrating distinct manifestations and abnormalities linked to the disease, including lung opacities, infiltrates, and other distinctive findings. For our experiments, we utilize a subset of this dataset consisting of 3616 COVID-19 CXR images. These images are accompanied by their respective ground truth lung masks, providing precise delineation of lung regions for further analysis and evaluation.
§.§ Implementation Details
The goal of this work is to perform visual point augmentation in a segmentation task. The input to our method consists of the initial point and the original image. We adopt the Segmentation Anything Model (SAM) as our base model to obtain the initial mask.
After obtaining the initial mask from SAM, we proceed to generate additional points using the methods described in the Method section. Specifically, we apply four different methods: random selection, maximum difference entropy, max distance method and saliency point augmentation. These methods allow us to augment the initial point by selecting relevant points for further inclusion in the mask.
For each method, we generate an augmented point based on the given initial point and the original image. Once the augmented points are generated, we feed both the initial point and the augmented points as point prompts to SAM. By incorporating these points into the segmentation process, SAM generates an augmented mask that accounts for the additional points. This augmented mask provides a refined and expanded segmentation output, incorporating the relevant information from the initial point and the augmented points.
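A hedged usage sketch of this pipeline, assuming the public segment-anything Python API (SamPredictor with point_coords and point_labels arguments); the image path, checkpoint path, initial point, and the max_distance_point helper from the sketch above are illustrative assumptions:

import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

image = np.asarray(Image.open("example.jpg").convert("RGB"))            # placeholder image
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")    # placeholder checkpoint
predictor = SamPredictor(sam)
predictor.set_image(image)

init_xy = (100, 150)                                                     # hypothetical user-provided point
masks, scores, _ = predictor.predict(point_coords=np.array([init_xy]),
                                     point_labels=np.array([1]),
                                     multimask_output=True)
initial_mask = masks[np.argmax(scores)]

aug_xy = max_distance_point(initial_mask, init_xy)                       # augmented point prompt
aug_masks, aug_scores, _ = predictor.predict(point_coords=np.array([init_xy, aug_xy]),
                                             point_labels=np.array([1, 1]),
                                             multimask_output=True)
augmented_mask = aug_masks[np.argmax(aug_scores)]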
The visual point augmentation experiments were conducted on an A100 graphics card within the PyTorch environment. The COCO dataset took approximately 4 hours to complete, while the Fundus and Chest X-ray datasets required approximately 20 minutes each. These varying runtimes reflect the dataset sizes and complexities, with the COCO dataset being the largest and most time-consuming, followed by the Fundus and Covid datasets.
§ RESULTS
In this section, we provide a detailed discussion of the performance evaluation of our proposed methodologies in comparison with SAM. We analyze the results across different datasets (COCO, Fundus, and Covid CXR) and categories, focusing on the impact of varying the number of points used in segmentation.
§.§ Comparison of the Proposed Strategies with SAM
Table <ref> presents an exhaustive comparison between our proposed methods and the base SAM model across various categories within the COCO, Fundus, and Covid CXR datasets.
In the COCO dataset, specifically in the "person" category, the base SAM model yields a Dice score of 0.4552. When our augmentation methods are employed, improvements across the board are observed. The Random method pushes the score slightly to 0.4594, while Max Entropy gives a more noticeable lift to 0.4677. The Max Distance method significantly boosts the score to 0.51, and the Saliency method further extends the improvement, achieving the highest score of 0.5535. Similar trends of improvement are observed across all other categories in the COCO dataset. For instance, in the "all" category, the base SAM model's Dice score of 0.6005 is exceeded by all our methods, with the Max Distance method achieving the highest score of 0.6514.
For the Fundus dataset, our proposed methods also outperform the base SAM model's Dice score of 0.7662, with the Max Distance method once again leading the pack with a score of 0.8022.
In the Covid CXR dataset, while the base SAM model achieves a Dice score of 0.5047, all our methods improve on this score. The Random method, interestingly, turns out to be the most effective, achieving a score of 0.5242, a trend which is unique to this dataset.
§.§ Ablation Study: Experiments with the Extra Point Prompt
Table <ref> presents the results of our ablation study, where we specifically look at the cases where two points were used for segmentation across the COCO, Fundus, and Covid CXR datasets.
In the COCO dataset, using two points, all our augmentation techniques improve on the base SAM's Dice score of 0.6005. The Random method takes it to 0.6137, Max Entropy to 0.6212, Max Distance to 0.6514 (the highest), and the Saliency model to 0.6314.
The Fundus dataset shows a similar trend. The Random method improves the base SAM's Dice score of 0.7662 to 0.7939, while the Max Distance method achieves the highest score of 0.8022.
In the Covid CXR dataset, the base SAM model's Dice score of 0.5047 sees improvements with all our methods, with the Random method producing the highest score of 0.5242.
These results underline the benefit of adding an extra point in the prompting process. In all cases (COCO, Fundus, Covid), adding an extra point has improved the performance over the base SAM method, regardless of the specific augmentation method used.
§.§ Ablation Study: Experiments with Multiple Point Prompts
We further delve into the impact of using more points, specifically three and five points, as input prompts for segmentation, as detailed in Table <ref>.
In the COCO dataset, using three points, all our methods improve on the base SAM's Dice score of 0.5852, with the Max Distance method achieving the highest score of 0.6473. Using five points, a similar trend is observed, with the Max Distance method achieving the top score of 0.6501.
The same pattern follows in the Fundus dataset, with our methods improving the base SAM's scores both for three and five points, and the Max Distance method providing the highest scores.
However, in the Covid CXR dataset, while the use of three points sees an improvement over the base SAM model's score (0.5014), the use of five points results in a slight decrease in performance for our methods compared to the base SAM's score of 0.502.
Adding more points generally improves performance but with diminishing returns. In the COCO dataset, performance improves from two to three points but then plateaus from three to five points. In contrast, for the Covid dataset, adding more points from two to three and five points has led to a decrease in performance, suggesting a trade-off between the number of points and the performance gain. Despite these nuances, the maximum distance method has often delivered among the highest Dice scores across these datasets and point counts, although it performs worse than the Random strategy on the Covid dataset.
§ DISCUSSION AND FUTURE WORK
The introduction of SAMAug represents a significant advancement in the utilization of large foundational models for interactive image segmentation. By ingeniously generating and leveraging augmented point prompts, our approach succeeds in harnessing the potential of the Segment Anything Model (SAM) to a greater extent, enhancing its segmentation performance on a variety of datasets.
Despite its efficacy, there remains considerable room for improvement and extension. An interesting direction for future research would be to develop more sophisticated point augmentation strategies, beyond the four techniques proposed here. This could involve exploring the use of other metrics or creating hybrid approaches that combine the strengths of multiple techniques. It might also be beneficial to investigate adaptive point augmentation methods that can adjust the strategy based on the specific characteristics of each image or segmentation task.
A promising direction is the integration of SAMAug with active learning frameworks. The interactive nature of SAM makes it a suitable candidate for active learning, where the model can iteratively select the most informative points for augmentation. This approach could potentially accelerate the training process and further improve the model's performance.
In conclusion, SAMAug showcases the potential of prompt augmentation for enhancing the capabilities of large foundational models. It opens up a wealth of possibilities for future research, from devising advanced augmentation techniques to exploring cross-domain applications and integrating with other machine learning paradigms. The exploration of these directions promises to drive further advancements in the field of interactive image segmentation and beyond.
§ CONCLUSION
In this work, we have proposed and evaluated SAMAug, a novel visual point augmentation method designed to enhance the segmentation performance of SAM. This method, which includes the application of four point augmentation techniques, significantly boosts the information SAM receives, resulting in richer and more detailed segmentation masks.
Our work underscores the value of input augmentation for advancing interactive image segmentation. SAMAug enriches the information provided to SAM, enabling more intricate segmentation details and enhanced model performance overall. However, SAMAug does have some limitations. The performance gains vary across datasets and categories, highlighting the need for further research to determine optimal augmentation approaches for different applications. Moreover, adding too many input points can decrease accuracy, suggesting that prompt engineering requires balancing informativeness and noise.
In the future, we plan to expand SAMAug's capabilities, explore other ViT backbones <cit.>, explore other prompt-based augmentation methods, and further delve into user studies evaluating the quality and usefulness of SAMAug's augmented segmentations. This work marks a significant step towards transforming computer vision through advanced interactive image segmentation techniques and refined prompt engineering for vision foundation models.
|
http://arxiv.org/abs/2307.03278v1
|
20230706202834
|
Opinion formation by belief propagation: A heuristic to identify low-credible sources of information
|
[
"Enrico Maria Fenoaltea",
"Alejandro Lage-Castellanos"
] |
physics.soc-ph
|
[
"physics.soc-ph",
"cs.SI",
"stat.AP"
] |
Neural network decoder for near-term surface-code experiments
Barbara M. Terhal
August 1, 2023
=============================================================
^1Physics Department, University of Fribourg, Chemin du Musée 3, 1700 Fribourg, Switzerland
^2Group of Complex Systems and Statistical Physics, Physics Faculty, Havana University, La Habana, CP 10400, Cuba
With social media, the flow of uncertified information is constantly increasing, with the risk that more people will trust low-credible information sources. To design effective strategies against this phenomenon, it is of paramount importance to understand how people end up believing one source rather than another. To this end, we propose a realistic and cognitively affordable heuristic mechanism for opinion formation inspired by the well-known belief propagation algorithm. In our model, an individual observing a network of information sources must infer which of them are reliable and which are not. We study how the individual's ability to identify credible sources, and hence to form correct opinions, is affected by the noise in the system, intended as the amount of disorder in the relationships between the information sources in the network. We find numerically and analytically that there is a critical noise level above which it is impossible for the individual to detect the nature of the sources. Moreover, by comparing our opinion formation model with existing ones in the literature, we show under what conditions people's opinions can be reliable. Overall, our findings imply that the increasing complexity of the information environment is a catalyst for misinformation channels.
keywords: Belief propagation, Signed network, learning, Heuristic, node classification, Phase transition
§ INTRODUCTION
Social media have revolutionized how people communicate and inform themselves, becoming the primary source of information for most users. However, content in social media is often published without the intermediation of experts, thus increasing the risk of spreading unreliable news <cit.>.
With the advance of the research on how opinions propagate in networks <cit.>, how news goes viral <cit.>, and how polarization affects public opinion <cit.>, methods have been proposed to limit the spread of incorrect information and help users distinguish true news from fake news <cit.>. Despite these efforts, misinformation still threatens society. Indeed, it is not trivial for the general public to distinguish between reliable and unreliable content. A pertinent example was the recent media circus around the war in Ukraine: people have faced an overwhelming amount of both reliable news and news whose only purpose was to make propaganda. This has generated widespread confusion with dangerous implications for society. Therefore, it is crucial to understand how people form their opinions and provide them with strategies for properly getting informed in a world full of often conflicting information channels.
In addition to studies in psychology <cit.> and sociology <cit.>, opinion formation is widely investigated by quantitative methods, such as network science <cit.>, game theory <cit.>, or statistical physics <cit.>. In particular, models have been proposed to study opinion contagion in social networks <cit.>, and how such contagion leads to the emergence of cohesion <cit.> or fragmentation <cit.> in society. Recently, a model has been proposed in <cit.> that differs from the models mentioned above by including potential relationships (both positive and negative) between the subjects on which an individual, in the absence of social influence, tries to form an opinion. In fact, such connections are a fundamental component of the world of social media and information channels <cit.>. The authors in <cit.> discuss a cognitively and computationally simple heuristic rule by which an individual navigates the network of subjects and gradually forms an opinion about each of them. Then, based on an a priori hidden truth, the proportion of correct and incorrect opinions is measured. Other heuristic rules were then investigated in <cit.>. Such works do not consider the probabilistic and uncertain nature of human decisions <cit.>, but assume that an individual's opinions have a binary character (e.g., trust or distrust of an information channel). However, people's beliefs and opinion systems can be more nuanced.
Within the same framework of these models (which we explain in more detail below), here we allow for uncertainties in the opinion formation process by studying a new heuristic rule whose mechanism is based on the well-known belief propagation algorithm <cit.>. In particular, we consider an individual who wants to learn something about a topic (e.g., the motivations for the Ukrainian war) and observes a network of information sources- some reliable, others unreliable- discussing that topic. While it is immediate to understand that two sources promote conflicting content, it is more difficult for the layman to choose which one to trust. These sources of information can be websites, Youtube channels, or newspapers whose interconnections represent their mutual relationship of trust or distrust. For example, two Youtube channels with the same view on a subject will tend to collaborate; conversely, if they have an opposite opinion, they will diss each other. In the literature, such a relationship system is formalized as a signed network with positive and negative links <cit.>. Moreover, the relationships between sources of information should follow the logic of Heilder's balance rule <cit.>: if two sources have a link of the same sign with a third source, they are connected to each other by a positive link; conversely, if they have links of the opposite sign with a third source, they are connected by a negative link (this is a formalization of the proverb “my enemy's friend is my friend"). While this is certainly an oversimplification of the more complex existing spectrum of relationships between different sources of information, this structure has been observed in data sets of various political and social systems <cit.>. Nevertheless, there may be several deviations: for example, for some reason, two sources of information that generally agree may disagree on a minor issue, and the individual is misled because he/she observes a negative connection between them; or the topic addressed is so complex that it is difficult to determine which is the true relationship between the sources, increasing the risk that the individual will observe incorrect link signs. To account for this, we assume that the link signs in the network of information sources are at odds with the balance rule with some small probability that we refer to as noise.
Assuming that, initially, the individual has information about the reliability of only one source of information in the network, his/her goal is to identify which of the sources are reliable by observing the (noisy) relationships between them. This task can be cognitively very expensive, so humans must rely on simple heuristics when dealing with complex issues <cit.>. Hence, we study a probabilistic and local heuristic rule (i.e., where only local knowledge of the source network is required) inspired by the belief propagation algorithm. This algorithm is useful to efficiently solve inference problems by passing local messages <cit.>. It has important applications in statistical physics <cit.>, error-correcting coding theory <cit.>, computer vision <cit.>, and artificial intelligence <cit.>. In this paper, we recast the belief propagation algorithm in the framework of opinion formation.
Specifically, we investigate how the noise affects the performance of our belief propagation rule, measured as the number of information sources correctly labeled as reliable or unreliable, and show that, in addition to realistically describing how humans reason, it outperforms existing heuristics in the literature. However, we find analytically that there is a critical noise level beyond which, even with our belief propagation approach, one cannot extract information from the source network.
Then we show numerically that, independently of the noise, the performance of any local rule inevitably decreases as the size of the source network increases. This adverse outcome can be limited if the individual forms his/her opinions by aggregating information from multiple sources. We also extend our model and the models in <cit.> to study the effect of repeated opinion updating, showing that it is beneficial only for local heuristic rules involving the above-mentioned aggregation from multiple sources.
Our findings suggest that, because we have limited information capabilities, our opinions can be highly sensitive to the complexity of the information environment, thus calling for a policy to provide less cluttered communication to the general public. Moreover, by giving insights into what conditions increase the risk of people relying on low-credible sources, our work can help design new tools or improve existing ones to prevent the proliferation of false beliefs.
§ OPINION FORMATION PROCESSES
§.§ The basic framework
Consider a system composed of N interconnected sources of information. For simplicity, we assume that each source can be of two types, reliable or unreliable, with equal probability. We represent these two types with positive and negative signs, respectively. If two information sources of the same sign are connected, they have a positive relationship (mutual trust); if they are of the opposite sign, they have a negative relationship (mutual distrust). In this way, the information sources define an undirected signed network with N nodes V={i, i ∈ [1,N]}. Each node i can be positive or negative, i.e., s_i ∈{-1,1}, and the relationships between nodes are defined by a weighted graph where, whenever there is an edge between two nodes i and j, this edge has an associated weight J_ij=s_is_j. This can be equivalently encoded into a weighted matrix J, whose elements J_ij are zero if the sources are non-interacting, while +1 if they are of the same type and -1 if they are of a different type.
In the following, we consider only the random network topology with average degree k ∈𝐍, which means that two nodes have a link with probability p=k/(N-1). Therefore, the average number of links E is given by E=pN(N-1)/2. We will refer to this network as the ground truth network G(V,E,J).
Now, consider an external observer who, having only partial knowledge of the nodes in G, tries to figure out which sources of information are reliable and which are not, i.e., forms opinions about them. Formally, the opinion o_i on a node i is a two-dimensional vector such that:
o_i := [[ P(s_i=+1); P(s_i=-1) ]]
where the first element represents the degree of confidence (or the probability), according to the external observer, that the information source i is reliable, while the second element is its complement, i.e. P(s_i=-1)=1-P(s_i=+1). We assume that the observer initially knows only one node i^*, such that o_i^*=(1,0) if s_i^*=+1 and o_i^*=(0,1) if s_i^*=-1. We refer to this node as the seed node.
Then, we introduce a parameter r ∈ [0,1/2] that represents the noise on the relation matrix J. Specifically, the observer is not able to "see" the real J but observes a relation matrix J^o such that
J^o_ij = J_ij with probability 1-r, and J^o_ij = -J_ij with probability r, for every pair (i,j).
In this way, r serves to include the structural noise present in any real system in which it is not straightforward to determine the type of relationship between two sources. When r=0, the matrix J and the observed matrix J^o coincide, so the observer knows with certainty which are the true relationships between the information sources. When r=1/2, however, J^o is not informative about the links' signs in G, as J and J^o are not correlated.
Given a ground truth network G(V,E,J), a seed node i^*, and a noise level r generating J^o, the observer's goal is to exploit the information from i^* and J^o to form opinions that reflect as accurately as possible the true types of the nodes in G (see Fig.<ref>a). At the two extremes, the formation of an accurate opinion is either impossible (J^o uninformative with r=1/2) or trivial (J^o=J with r=0). Indeed, starting from the seed node, the observer can assign each node its type according to the product of link signs ∏_i J_π_i-1π_i along π, where π is any path connecting the seed node to the target node, and π_i is the i-th node of the path π.
In the following, we introduce different opinion formation processes and study their performance as r varies. As a measure of process performance, we use the resulting average overlap q between the expected value (from the observer's point of view) of the sign of node i and its true value s_i:
q := 1/N∑_i=1^N ⟨ o_i⟩ s_i,
where ⟨ o_i⟩:= P(s_i=+1)-P(s_i=-1). The overlap is one when the observer infers with certainty the correct sign of each node of G and is zero in the extreme case where the observer fails to extract any information about G.
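As a concrete illustration of this setup (ours, not part of the original model specification; all function names are hypothetical), the following Python sketch builds a ground-truth signed random network, the noisy observed relation matrix J^o, and the overlap measure defined above.

    import numpy as np

    def ground_truth_network(N, k, rng):
        """Random signed network: hidden types s_i = +/-1 and balanced link signs J_ij = s_i * s_j."""
        s = rng.choice([-1, 1], size=N)
        p = k / (N - 1)                              # link probability giving average degree k
        upper = np.triu(rng.random((N, N)) < p, 1)
        adj = upper | upper.T
        J = np.where(adj, np.outer(s, s), 0)
        return s, J

    def noisy_observation(J, r, rng):
        """Flip the sign of each existing link independently with probability r (symmetrically)."""
        flips = np.triu(rng.random(J.shape) < r, 1)
        flips = flips | flips.T
        return np.where(flips, -J, J)

    def overlap(expected_opinions, s):
        """q = (1/N) sum_i <o_i> s_i, with <o_i> = P(s_i=+1) - P(s_i=-1)."""
        return float(np.mean(expected_opinions * s))

    rng = np.random.default_rng(0)
    s, J = ground_truth_network(N=200, k=6, rng=rng)
    J_obs = noisy_observation(J, r=0.1, rng=rng)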
§.§ The optimal Bayesian approach
Before introducing the opinion formation processes of an external observer able to handle only local information, we briefly describe the Bayesian approach, already examined in <cit.>. This will be useful as a conceptual introduction to the belief propagation algorithm.
Let us denote by {s} the set of all possible configurations of node signs in G, and by s a specific configuration. The conditional probability P(J^o|s) that, given a configuration s, a specific J^o is observed, is
P(J^o|s)=∏_(i,j)∈ EΨ(J^o_ij|J_ij),
where the product is over all the connected nodes in G and Ψ is the probability of observing J^o_ij given J_ij. From Eq. <ref>, Ψ(J^o_ij|J_ij)=1-r if J^o_ij=J_ij, and Ψ(J^o_ij|J_ij)=r otherwise. The Bayes rule gives the conditional probability P(s|J^o) that, given observations J^o, the true configuration of the nodes' signs in G is s:
P(s|J^o)=P(J^o|s) P(s)/∑_s∈{s}P(J^o|s) P(s),
where P(s) is the a priori distribution of the nodes' signs in G, and the normalization is given by the sum over all possible configurations.
From Eq. <ref>, the opinion o_i about node i is obtained from the probability P(s_i=+1) that node i is positive. Denoting by {s^i} the set of all configurations in which s_i=+1, we can write:
P(s_i=+1)=∑_s∈{s^i}P(s|J^o).
With this probability, we can also compute the average overlap as in Eq. <ref>.
The Bayesian approach described in this section is the optimal way to process available information <cit.>. Nevertheless, its computation is hard and its application is unfeasible from a cognitive point of view. Indeed, it requires the knowledge of all the configurations in {s}: since each node can be in two states (positive or negative), the number of configurations is 2^N. When N is large enough, this is a computationally prohibitive number even for computers.
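For very small networks the optimal Bayesian marginals can nevertheless be computed by brute force; the sketch below (ours, purely illustrative) enumerates all 2^N sign configurations, which makes explicit why the approach is infeasible beyond a handful of nodes.

    import itertools
    import numpy as np

    def bayesian_marginals(J_obs, r, seed_node, seed_sign):
        """Exact posterior P(s_i=+1 | J^o) by summing over all 2^N configurations (tiny N only)."""
        N = J_obs.shape[0]
        edges = [(i, j) for i in range(N) for j in range(i + 1, N) if J_obs[i, j] != 0]
        post = np.zeros(N)
        norm = 0.0
        for signs in itertools.product([-1, 1], repeat=N):
            if signs[seed_node] != seed_sign:        # the seed node's type is known
                continue
            like = 1.0
            for i, j in edges:                       # independent flip noise on each link
                like *= (1 - r) if J_obs[i, j] == signs[i] * signs[j] else r
            norm += like                             # the flat prior P(s) cancels out
            post += like * (np.array(signs) == 1)
        return post / norm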
In the following, we propose a heuristic opinion formation process where, contrary to the Bayesian approach, the amount of information to be processed is local and reasonable for a human observer.
§.§ Belief Propagation and opinion formation
We now introduce an opinion formation process, based on the well-known belief propagation (BP) algorithm <cit.>, where the observer walks through the network starting from i^* and gradually forms an opinion on each of the remaining nodes i i^*. The BP algorithm is based on the so-called message passing. We first introduce the message m_i → j from node i to node j as a two-component vector
m_i→ j = ([ m_i → j^+; m_i → j^- ]),
where the first element can intuitively be understood as the observer's confidence stemming from the information about node i that node j is positive; similarly, the second element indicates the observer's confidence that node j is negative. The message from node i to node j is defined recursively as the aggregation of all messages toward i without the one coming from j (see Fig.<ref>b):
m_i→ j=ϕ_J_ij(i,j) ∏_z∈∂ i ∖ jm_z → i.
Here, ∂ i∖ j is the set of nodes, excluding j, connected to i in G. Notice that the product of the messages is a Hadamard product <cit.>, i.e., a component-wise product. ϕ_J_ij(i,j) is called the compatibility matrix between node i and node j and is defined as
ϕ_J_ij(i,j) = [ 1-r r; r 1-r ] if J^o_ij=+1,
ϕ_J_ij(i,j) = [ r 1-r; 1-r r ] if J^o_ij=-1,
and the product between this matrix and the vectors in Eq.<ref> is the standard matrix product.
The compatibility matrix must be interpreted as the Bayesian information about the sign of the link between node i and node j and is the local analogue of Eq.<ref>. Specifically, the terms in the diagonal of ϕ(i,j) represent the probability that nodes i and j have the same sign given the observed value of J^o_ij; similarly, the terms in the anti-diagonal represent the conditional probability that i and j have opposite signs.
Finally, the observer forms its opinion o_i on node i by multiplying all its incoming messages:
o_i= ∏_z ∈∂ im_z→ i/Z,
where ∂ i is the complete set of nodes connected to i in G, and Z is the normalization given by
Z:=(∏_z ∈∂ im_z→ i^+)+(∏_z ∈∂ im_z→ i^-).
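A direct transcription of the compatibility matrices, the message update and the opinion rule could look as follows (a sketch under our own naming conventions; messages are stored per directed edge and normalized for numerical stability).

    import numpy as np

    def compatibility(J_obs_ij, r):
        """phi_{J_ij}(i,j): posterior on 'same sign' vs 'opposite sign' given the observed link."""
        if J_obs_ij == +1:
            return np.array([[1 - r, r], [r, 1 - r]])
        return np.array([[r, 1 - r], [1 - r, r]])

    def message(i, j, msgs, neighbors, J_obs, r):
        """m_{i->j}: Hadamard product of all messages into i except the one from j, passed through phi."""
        prod = np.ones(2)
        for z in neighbors[i]:
            if z != j:
                prod *= msgs[(z, i)]
        m = compatibility(J_obs[i, j], r) @ prod
        return m / m.sum()

    def opinion(i, msgs, neighbors):
        """o_i: normalized product of all messages incoming to i."""
        prod = np.ones(2)
        for z in neighbors[i]:
            prod *= msgs[(z, i)]
        return prod / prod.sum()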
Now, in the line of thought of <cit.>, we can define our BP process of opinion formation. Since at the beginning of the process the observer knows no nodes except the seed node i^*, at time t=0 we set m_i→ j=(1/2,1/2) for any i,j ≠ i^*, and m_i→ i^*=o_i^* for any i. The process continues as follows:
* at each time step t, choose a random node i with o_i=(1/2,1/2) that has at least one neighbor on which the observer has an opinion different than (1/2,1/2);
* compute the messages toward i, i.e. m_j→ i for any j∈∂ i, as in Eq. <ref>;
* the observer's opinion o_i on i is computed as in Eq. <ref>;
* return to step 1.
This process is repeated until t=N-1, when the observer has an opinion on each node. In reality, however, people occasionally re-evaluate their beliefs. We can include this re-evaluation in the model by running the opinion formation process even for t ≥ N so that the observer evaluates the same node multiple times. For t ≥ N, the first step of the process is modified as follows:
* at each time step t ≥ N, choose a random node i ≠ i^*.
This process is repeated until a desired time t is reached.
In this way, opinion formation undergoes two different phases: when t<N, the observer covers the entire information source network by exploring only the unknown nodes; when t≥ N, the observer can jump randomly to any node, as all nodes have already been explored in the past. When the observer passes over node i several times, it means that he/she is updating his/her opinion about i. Such an updated opinion is based on more information than the opinion formed for the first time, as in the former case the messages coming from all i's neighbors have already been computed in previous steps. The value τ=t/(N-1) thus represents the average number of times an opinion is updated, and we shall refer to it as the observer's thinking time.
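Putting the pieces together, a minimal (unoptimised) sketch of the whole BP opinion-formation process with thinking time τ is given below; it reuses the helper functions sketched above, and the bookkeeping choices (e.g., tracking visited nodes) are ours.

    import numpy as np

    def bp_process(J_obs, r, seed_node, seed_sign, tau, rng):
        N = J_obs.shape[0]
        neighbors = [np.flatnonzero(J_obs[i]).tolist() for i in range(N)]
        # uninformative initial messages, except those flowing into the seed node,
        # which are pinned to its known type (so messages *out of* the seed are informative)
        msgs = {(i, j): np.array([0.5, 0.5]) for i in range(N) for j in neighbors[i]}
        seed_op = np.array([1.0, 0.0]) if seed_sign == +1 else np.array([0.0, 1.0])
        for i in neighbors[seed_node]:
            msgs[(i, seed_node)] = seed_op.copy()
        opinions = np.tile([0.5, 0.5], (N, 1))
        opinions[seed_node] = seed_op
        visited = {seed_node}
        for t in range(int(tau * (N - 1))):
            if len(visited) < N:
                # phase 1: a still-unevaluated node with at least one evaluated neighbour
                frontier = [i for i in range(N) if i not in visited
                            and any(z in visited for z in neighbors[i])]
                if not frontier:
                    break
                i = int(rng.choice(frontier))
                visited.add(i)
            else:
                # phase 2 (re-evaluation): any node except the seed
                i = int(rng.choice([v for v in range(N) if v != seed_node]))
            for j in neighbors[i]:                   # recompute the messages toward i
                msgs[(j, i)] = message(j, i, msgs, neighbors, J_obs, r)
            opinions[i] = opinion(i, msgs, neighbors)
        return opinions

    # expected opinions <o_i> = P(s_i=+1) - P(s_i=-1); feed them to overlap() defined earlier
    ops = bp_process(J_obs, r=0.1, seed_node=0, seed_sign=s[0], tau=3, rng=rng)
    print(overlap(ops[:, 0] - ops[:, 1], s))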
We want to study the average overlap q as a function of noise r, network size N, and thinking time τ. A related model, involving the classification of nodes in a network into distinct groups based on network topology observation, is known in the literature as the stochastic block model. Its equilibrium state, corresponding to the infinite time case, has been extensively studied using the cavity method from statistical mechanics <cit.>. It is worth mentioning that, for finite-size networks, the cavity method is equivalent to belief propagation. Nevertheless, although our approach bears similarities to the stochastic block model, it incorporates the relationship between groups through the sign of the links, whereas the stochastic block model considers the different probability of link presence between elements of different groups: for an individual observer browsing a network of information sources, the two are very different problems.
It is now worth dwelling on the interpretation of our BP opinion formation mechanism. In fact, at first glance, one might think that the computations described in this section are too intricate to be performed in our everyday reasoning. Here are some remarks to explain the connection between the belief propagation algorithm and human reasoning. First, let us elucidate the role of the compatibility matrix in Eq.<ref>. When we observe the current relationship between two sources of information, we are aware, based on our experience, that it gives insights into the true relationship between the two parties but, at the same time, also that such insights can be misleading.
For example, consider the case of anti-vax people: how likely is it that two individuals, both anti-vax (thus with J^o_ij=+1), also have the same general worldview (i.e., J_ij=+1)? The compatibility matrix answers this question by incorporating the observer's awareness of the reliability of the observed relations.
In this case, it is well known that supporting conspiracy theories about vaccines is strongly correlated with supporting populist political beliefs and distrusting of elites and experts <cit.>.
Naturally, this is not always the case: assuming that the noise is r=0.1, this is true 90% of the time. In Bayesian reasoning, this data represents the likelihood of J_ij=+1 or J_ij=-1 given a fixed J^o_ij. In turn, this is used by the observer to infer the posterior probability that J_ij=+1 or J_ij=-1 given the observed J^o_ij, corresponding to the elements of ϕ_J_ij(i,j).
Once the posterior distribution of the link's sign connecting a node i to a node j is known, Eq.<ref> corresponds to aggregating all the information on node i from its neighbors. Thus, by combining the information on i with the probabilities in ϕ_J_ij(i,j), the observer can extract information about j, with the support of the balance rule postulate (i.e., two nodes have a positive relationship if they have the same sign and a negative relationship otherwise). The final opinion on a node is then the aggregation of all messages directed to that node.
Unlike in the optimal Bayesian approach, in the BP approach, the observer needs only local information. Indeed, to form an opinion on a node, the observer must only know its neighbors. Hence, our BP heuristic is a feasible strategy for a human observer with limited computation capabilities. Moreover, there is strong behavioral and physiological evidence that the brain represents probability distributions and performs local Bayesian inference <cit.>. Therefore, it is a plausible hypothesis that our BP mechanism may occur even spontaneously during individuals' opinion formation process.
§.§ Other heuristic processes
As a benchmark, we also study and extend other processes proposed in the literature. In particular, we focus on the so-called majority rule (MR) and random neighbor (RN) processes <cit.>.
§.§.§ Majority rule process
With the basic framework described above, the MR process is defined as follows <cit.>. At time t=0, the opinion on the seed node is o_i^*; while for each i i^*, the initial opinion is o_i=(0,0). The following steps are iterated:
* at each time step t, choose a random node i with o_i=(0,0) that has at least one neighbor on which the observer has an opinion different than (0,0);
* for any j ∈∂ i such that o_j ≠ (0,0), compute its vote v_j:
v_j = +J_ij^o if o_j=(1,0),
v_j = -J_ij^o if o_j=(0,1);
* if ∑_jv_j>0, then o_i=(1,0). If ∑_jv_j<0, then o_i=(0,1). If ∑_jv_j=0, then o_i=(1,0) or o_i=(0,1) with equal probability;
* return to step 1.
These steps are repeated until all opinions are different from (0,0), that is, until t=N-1.
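A corresponding sketch of the MR process (ours, in the same style as the listings above) is:

    import numpy as np

    def mr_process(J_obs, seed_node, seed_sign, rng):
        """Majority rule: a new node takes the sign of the majority vote of its evaluated neighbours."""
        N = J_obs.shape[0]
        neighbors = [np.flatnonzero(J_obs[i]).tolist() for i in range(N)]
        est = np.zeros(N, dtype=int)                 # 0 = no opinion, +1 = (1,0), -1 = (0,1)
        est[seed_node] = seed_sign
        for _ in range(N - 1):
            frontier = [i for i in range(N) if est[i] == 0
                        and any(est[z] != 0 for z in neighbors[i])]
            if not frontier:
                break                                # disconnected remainder of the network
            i = int(rng.choice(frontier))
            votes = sum(J_obs[i, j] * est[j] for j in neighbors[i])   # v_j = J^o_ij * est_j (0 if unknown)
            est[i] = int(np.sign(votes)) if votes != 0 else int(rng.choice([-1, 1]))
        return est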
§.§.§ Random Neighbor process
At time t=0, the RN process has the same initial settings as the MR process. Then, it proceeds as follows <cit.>:
* at each time step t, choose a random node i with o_i=(0,0) that has at least one neighbor on which the observer has an opinion different than (0,0);
* choose a random node j ∈∂ i such that o_j ≠ (0,0).
* if J^o_ij=+1, then set o_i=o_j. If J^o_ij=-1, then set o_i=(1,1)-o_j;
* return to step 1.
Again, these steps are repeated until t=N-1.
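And the RN process, in the same style (ours):

    import numpy as np

    def rn_process(J_obs, seed_node, seed_sign, rng):
        """Random neighbour rule: copy (or flip) the opinion of one random already-evaluated neighbour."""
        N = J_obs.shape[0]
        neighbors = [np.flatnonzero(J_obs[i]).tolist() for i in range(N)]
        est = np.zeros(N, dtype=int)
        est[seed_node] = seed_sign
        for _ in range(N - 1):
            frontier = [i for i in range(N) if est[i] == 0
                        and any(est[z] != 0 for z in neighbors[i])]
            if not frontier:
                break
            i = int(rng.choice(frontier))
            j = int(rng.choice([z for z in neighbors[i] if est[z] != 0]))
            est[i] = J_obs[i, j] * est[j]            # same sign if J^o_ij = +1, opposite if -1
        return est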
We introduce the thinking time τ also for MR and RN processes. When τ>1, the first step is modified for both as follows:
* at each time step τ≥ 1, choose a random node i ≠ i^*.
The next steps are the same as before, and the processes continue until the desired τ is reached.
§.§ Interpretations and differences between the proposed opinion formation processes
To elucidate the structural differences between the three heuristics presented in this paper, some remarks are in order.
First, in the RN process, the observer forms his/her opinions on a node i by using the information from only one of its neighbors, whereas in both the MR and BP processes the observer aggregates the information from all i's neighbors. Hence, for the RN process, we expect a lower overlap q (given the same amount of noise). On the other hand, the RN strategy demands of the observer the least cognitive and computational effort, while the cost of the BP and MR strategies grows with the network's average degree ⟨ k ⟩. Then, the difference between the aggregate information of the MR and that of the BP processes lies in the weight assigned to the information from each i's neighbor. Specifically, in the MR process, the observer gives the same weight to all i's neighbors' votes; in our BP process, instead, a message from one of i's neighbors with |⟨ o⟩| close to one (i.e., with small uncertainty) is more informative and thus has a larger weight than a message from a neighbor with |⟨ o⟩| close to zero (i.e., with higher uncertainty). Therefore, the BP process should outperform the MR process.
Second, in the MR and RN process, the opinion on a node can only be in two states: either (1,0) or (0,1). So, in this case, the overlap in Eq.<ref> is simply the difference between the average number of nodes where the observer’s opinion matches the ground truth and that where it is opposite. Instead, in the BP process, the opinion is probabilistic (see Eq. <ref>) and admits a wider spectrum of possibilities. However, the interpretation of the overlap q is the same as in the case of the MR or RN mechanism provided that, at the end of each realization of the process, the observer is forced to fix the value of each node using the marginal values as the decision criteria. In practice, for any i, the observer chooses o_i=(1,0) with probability P(s_i=+1), and o_i=(0,1) otherwise.
Last but not least, in both the MR and RN cases, opinion formation occurs with the observer unaware of the noise in the system. As we discussed in section <ref>, instead, the observer in the BP process exploits the knowledge of the noise to compute the posterior distributions in the compatibility matrix ϕ_J_ij(i,j). To obtain this knowledge, the observer has to undertake more extensive searches, and it is reasonable to assume that this involves more cognitive effort.
In summary, all three processes proposed in this paper require having only local information while exploring the network of information sources. However, the RN process is the least expensive but also the least accurate of the three; in contrast, the BP process requires the most effort but we expect it to pay off with the best performance; the MR process lies in between.
§ RESULTS AND DISCUSSION
To perform our analysis, it is necessary to consider the a priori distribution P(s) of the nodes' signs in G. Since in each realization of the source network, s_i is +1 or -1 with equal probability, we have P(s)=1/2^N. This induces the wrong idea that we are facing a highly complex model (NP-complete optimization problem, or spin glass like <cit.>). However, in our analytical calculations, we use the fact that it is computationally and statistically equivalent to a simpler model that, using the physics terminology, belongs to the category of ferromagnetic models <cit.>[The relationship between the noise r and the temperature T=1/β is given by r(β)=e^-β/2cosh(β). It is a particularity of the current model that the parameter r defines both the disorder of the system (through Eq.<ref>) and its temperature. Disordered models can be hard to treat, especially if the disorder is high and the temperature is low. However, in our case, r=0 produces a T=1/β = 0 temperature but in a fully ferromagnetic model, which is easy because it has no disorder. In the other extreme, r=0.5 produces a fully disordered model but at T=∞, which is again trivial. Furthermore, the relationship between disorder and temperature in this model puts it on top of the so-called Nishimori line <cit.>, for any r. This means that the model passes from an uninformative phase (with q∼ 0) to a ferromagnetic phase (with q>0) without crossing any spin-glass phase.]. In particular, with the following transformation
for every node i with s_i = -1: set s_i = +1 and flip the sign of all its links, J_ij → -J_ij for j ∈∂ i,
we redefine all nodes to be of type +1 while changing the J_ij to have the same statistic as in Eq.<ref>. Hence, the transformed ground truth network G has s_i=1 for any i, and the fraction of misleading links observed by the individual is r. In this way, the average overlap in Eq.<ref> is the sum of the opinion's expected values divided by N. In the physics literature, this is often called a gauge transformation <cit.>.
In the following sections, we start by studying the equilibrium solution (i.e., for τ→∞) of the three heuristic rules (in the RN case, we obtain an analytical solution even for a finite thinking time). Finally, with numerical simulations, we compare their performances for any τ.
§.§ Equilibrium belief propagation process
We first study analytically the properties of the BP process at the equilibrium, i.e., when τ→∞.
Let us rewrite the message updating equation (<ref>) in the following normalized form:
m_i→ j^+=1/Z^*∑_s=1,2ϕ^+(s)∏_z∈∂ i ∖ jm_z → i(s),
where ϕ^+(s) is the s-th element of the first row of matrix ϕ_J_ij(i,j) in Eq.<ref> (we denote the second row as ϕ^-), m_z→ i(s) is the s-th element of the m_z→ i vector, and Z^* is the normalization constant given by
Z^*:=∑_(a=±)∑_s=1,2ϕ^a(s)∏_z∈∂ i ∖ jm_z → i(s).
In this way, we have m^-_i→ j=1-m^+_i→ j. Now, let us denote the message in Eq.<ref> as a function of ϕ^+ and all messages m_z→ i for z∈∂ i ∖ j, i.e. m^+_i→ j≡ f(ϕ^+, m_z→ i, z∈∂ i ∖ j). The distribution of m_i→ j^+ can be written recursively as
P(m_i→ j^+)=𝐄_J^o[ ∫∏_z∈∂ i ∖ j dm^+_z→ i P(m_z→ i^+) δ(m_i→ j^+-f)],
where the expected value is over the observed relation matrix J^o. At the equilibrium, for N sufficiently large, messages for every neighbour of node i are equal, i.e. m^+_z → i = m^+_z' → i≡ m^+ for z,z' ∈∂ i. Assuming that the number of neighbors of i is exactly k, the expected value of m^+_i→ j can be written as
∫ dm^+ P(m^+) m^+ = 𝐄_J^o[ ∫∏_z=1^k-1 dm^+_z P(m^+_z) f(ϕ^+, m^+_1,…,m^+_k-1) ].
This equation is obtained by multiplying Eq.<ref> by m^+, integrating over all its possible values, and then applying the Dirac delta on the right-hand side. Since, for t→∞, the overlap is q=2m^+-1, Eq.<ref> can be solved numerically to obtain the equilibrium overlap q as a function of noise r. A trivial solution is when no information on nodes' signs can be retrieved from the messages interactions, i.e., when all messages are 1/2. By performing a stability analysis of this trivial solution, we find that there exists a phase transition at a critical noise value r_c below which the infinite-time overlap is non-zero. To show this, let us expand Eq.<ref> around m^+=1/2. In particular, for some small ϵ, we set m^+=1/2+ϵ and m^-=1/2-ϵ. In this way, using ∫ dm^+P(m^+)=1 and writing explicitly the function f, Eq.<ref> becomes
1/2 + ϵ = (1/Z^*) 𝐄_J^o[ (1/2)^(k-1) + (1/2)^(k-2) (k-1) [ϕ^+(1)-ϕ^+(2)] ϵ ].
Noting that the partition function is Z^* = (1/2)^(k-2) (as ϕ^+(1)+ϕ^+(2)=1) and 𝐄_J^o[ϕ^+(1)-ϕ^+(2)]=(1-2r)^2, the condition for which Eq.<ref> has exactly one fixed point is
(1-2r)^2 < 1/(k-1), i.e., r > r_c = 1/2 - 1/(2√(k-1)).
This stability analysis is equivalent to the replica symmetric computation of the transition temperatures in disorder systems <cit.>. When r>r_c, there is only one solution to Eq.<ref> corresponding to the one with non-informative messages and hence with q=0. On the other hand, when r<r_c, the overlap is different from zero and, since we consider a positive seed node, we have q>0. Fig.<ref>a shows the phase diagram of the equilibrium overlap in the BP process in the [k,r] parameter space. It shows that numerical simulations are consistent with Eq.<ref>.
This result implies that, when N is sufficiently large, there exists a noise threshold above which an observer that lingers on the network for an infinite time is unable to extract any information about the sources' reliability. However, when r<r_c, the overlap limit is non-zero and the observer's opinions are not random. Note that, with the standard assumption that the BP algorithm determines the correct marginal probabilities <cit.>, the above result is true for any algorithm: when the noise is larger than r_c, no heuristic rules or algorithms can detect the real nature of the information sources.
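The self-consistency condition above is easy to iterate numerically. The sketch below (ours) makes the same simplifying assumptions as the text: all messages share a common value m^+, every node has exactly k neighbours, and the observed link signs are drawn independently with error probability r; it returns the equilibrium overlap q = 2m^+ - 1 and reproduces the transition at r_c.

    from math import comb

    def equilibrium_overlap(k, r, n_iter=2000, m0=0.99):
        """Fixed-point iteration for the common message value m^+ (gauge: all true signs +1, 0 < r < 1/2)."""
        m = m0
        for _ in range(n_iter):
            a = (1 - r) * m + r * (1 - m)    # per-edge '+' evidence when the observed link is +1
            b = r * m + (1 - r) * (1 - m)    # per-edge '+' evidence when the observed link is -1
            new = 0.0
            for j in range(k):               # j = number of correctly observed links among the k-1 used
                w = comb(k - 1, j) * (1 - r) ** j * r ** (k - 1 - j)
                plus, minus = a ** j * b ** (k - 1 - j), b ** j * a ** (k - 1 - j)
                new += w * plus / (plus + minus)
            m = new
        return 2 * m - 1

    for r in (0.05, 0.15, 0.25, 0.35):       # r_c for k=6 is 1/2 - 1/(2*sqrt(5)) ~ 0.276
        print(f"k=6, r={r:.2f}: q_eq ~ {equilibrium_overlap(6, r):.3f}")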
§.§ MR and RN processes
As with the BP process, because of the strong non-linearity of the MR process, we treat analytically only its equilibrium state. With a stability analysis analogous to that performed for the BP heuristic, we again find that, for N large enough, there is a noise threshold above which the individual's opinions for τ→∞ are random (not correlated with the ground truth). This critical noise is given by (detailed calculations are shown in Appendix A):
r_c = 1/2 - 2^(k-2) / ∑_i=⌊k/2⌋+1^k C(k,i) (2i-k), where C(k,i) denotes the binomial coefficient.
However, this result is valid only in the large k limit. Whether there is a real phase transition for small average degrees is an open question. Nevertheless, even for small k, Eq.<ref> gives us a tipping point beyond which the majority rule is no longer efficient in extracting information from the source network, as shown in Fig.<ref>b.
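The two thresholds can be compared directly (a quick check, ours):

    from math import comb, sqrt

    def r_c_bp(k):
        return 0.5 - 1 / (2 * sqrt(k - 1))

    def r_c_mr(k):
        denom = sum(comb(k, i) * (2 * i - k) for i in range(k // 2 + 1, k + 1))
        return 0.5 - 2 ** (k - 2) / denom

    for k in (4, 6, 10, 20):
        print(f"k={k:2d}: r_c(MR) = {r_c_mr(k):.3f}, r_c(BP) = {r_c_bp(k):.3f}")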
Given the simplicity of the opinion formation rule of the RN process, we can solve it analytically even for finite τ and N. Let us first show the results for τ=1, i.e., t=N-1, already known in the literature. Through a master equation approach, it is possible to show that, given a network size N and a noise r, the expected value of q is given by <cit.>
⟨ q(N,r)⟩^RN_τ=1 = Γ(N+1-2r) / (N Γ(N) Γ(2-2r)),
which, as expected, simplifies to 1 for r=0, and to 0 for r=1/2. Moreover, it decreases rapidly with r, as widely discussed in <cit.>. By expanding Eq.<ref> for large N, we have
⟨ q(N,r)⟩^RN_τ=1∼ N^-2r,
showing that, for r>0 and τ=1, the overlap goes to 0 in the N→∞ limit. Hence, an observer who navigates the source network for a short time (τ=1) using the RN rule does not get an encouraging result: his/her opinions would tend to be totally random for large network sizes, regardless of how small the noise is (excluding the r=0 case).
With a similar approach, we can also study the case τ>1, showing that the observer's performance does not improve. In particular, we can write the expected value of q for τ>1 as (the detailed calculations are in appendix B):
⟨ q(N,r)⟩^RN_τ>1=q_1(1-2r/N)^(τ-1)(N-1),
where q_1 := ⟨ q(N,r)⟩^RN_τ=1. Fig.<ref> shows that Eq.<ref> is consistent with numerical simulations. As for the τ=1 case, the overlap for τ>1 decreases rapidly with noise. Moreover, it is straightforward to show that, when fixing τ and r, the behavior of Eq.<ref> with N is the same as that of Eq.<ref>. At the same time, when N and r are fixed, Eq.<ref> decreases exponentially with the thinking time, asymptotically reaching 0 for τ→∞. Notice also that all the above results on the RN process do not depend on the average degree k (provided we neglect any finite-size effect). This was expected since, with the RN heuristic, the observer uses information from only one neighbor of the target node, regardless of its degree.
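Both closed-form expressions are cheap to evaluate; working with the log-Gamma function avoids overflow for large N (sketch, ours).

    import numpy as np
    from scipy.special import gammaln

    def q_rn_tau1(N, r):
        """<q>_{tau=1} = Gamma(N+1-2r) / (N Gamma(N) Gamma(2-2r))."""
        return np.exp(gammaln(N + 1 - 2 * r) - gammaln(N) - gammaln(2 - 2 * r)) / N

    def q_rn(N, r, tau):
        """<q>_{tau>1} = q_1 * (1 - 2r/N)^((tau-1)(N-1))."""
        return q_rn_tau1(N, r) * (1 - 2 * r / N) ** ((tau - 1) * (N - 1))

    for N in (100, 1000, 10000):
        print(N, round(q_rn_tau1(N, 0.1), 4), round(q_rn(N, 0.1, tau=5), 4))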
These findings show that the performance of the RN process not only decreases with the number of information sources but, unlike the BP and MR heuristics, also decreases with the time it spends for the observer to form opinions, regardless of how small the noise is. In short, the RN process, although it requires the least cognitive effort, is not a good strategy for forming opinions about a world of interconnected sources of information. Moreover, if the observer spends more time trying to infer the correct opinions with the RN rule, the performance is even worse.
§.§ Performance comparison
From the findings above, it is evident that, at the equilibrium, the RN process is the least efficient. Indeed, for t→∞, the overlap is 1/2 regardless of the noise level and the source network size. In the equilibrium BP and MR processes, however, q can be larger than 1/2. In particular, in the large N limit, there is a critical noise value below which q>1/2. Comparing the critical noise of the BP and the MR process, Eqs.<ref> and <ref>, we observe that the former is always larger than the latter. It implies that when opinion formation with the MR strategy ceases to be informative, the BP strategy can still extract knowledge about the ground truth network. Said otherwise: the BP process is more tolerant to noise than the MR process.
Then, from numerical simulations, we find that the equilibrium overlap of the BP process is larger than that of the MR process even when r<r_c, for any k and N (see Fig.<ref>). These results are a consequence of the fact that, in the BP heuristic, the observer weights the information from the neighbors of each node and harnesses the noise awareness to his/her advantage, as already mentioned. From these considerations, it is natural that the performance of the BP process at equilibrium is better than that of the MR process.
Furthermore, while it is obvious that both processes at equilibrium improve their performance as k increases (as they can collect more information from the surroundings of each node), interestingly, the overlap for t →∞ of both processes (with k fixed) decreases with N, until their convergence values. When the number of information sources is larger, the evaluation errors have more space to propagate and amplify the difference between the observer's opinions and the ground truth. This effect is most pronounced around the critical noise, where correlations between opinions in finite systems are most relevant. In light of these findings, we conjecture that, in random sources networks, any local heuristic rule makes individuals' opinions fragile to the number of information sources they must deal with.
However, when opinions are the result of the aggregation of information from multiple sources, the decrease in performance slows down for larger N until it stops when the phase transition in r_c can actually be observed. In fact, note that for finite N the equilibrium overlap q for both the BP and MR cases is a continuous and differentiable function and that the calculations to obtain r_c were performed in the so-called thermodynamic limit where finite-size effects can be neglected. The fact that sharp phase transitions are a property of infinite systems is a general lesson from statistical physics <cit.>.
Let us now study the more realistic case where the observer spends a limited time navigating the network of information sources. As shown in section <ref>, the RN overlap is strongly sensitive to noise and decreases exponentially with N up to the limiting value of 1/2 for N→∞. Fig.<ref>a compares the results for different τ in the BP and MR cases, showing that their overlaps decrease less dramatically with noise. Note that, for small τ, the BP and MR processes have similar performances; instead, with large τ, the BP strategy is significantly better than the MR one. This means that the benefit of spending time trying to form the correct opinions is larger in the case of the BP process. This is even more evident in Fig.<ref>b which shows the overlap as a function of τ for the BP and MR processes.
Remember that, for the RN process, the performance decreases exponentially with time up to the limit value of 1/2 in its steady state, regardless of r. The reason for this decrease stems from the opinion formation rule that relies on a randomly chosen neighbor of the target node. Indeed, the noise in the sources network accumulates with the opinion updating, amplifying the difference between the observer's opinions and the ground truth.
Notably, this phenomenon is reversed for the BP and MR processes: as the thinking time increases, the overlap in both processes increases to its equilibrium value. This happens since both strategies aggregate information from all neighbors of a target node, with the effect of correcting errors instead of accumulating them. Again, from Fig.<ref>, we see that the BP performance improvement with the thinking time is much faster than that of the MR process. Indeed, in the BP process, the observer's opinions are probabilistic and allow uncertainties. As already mentioned, this results in a more careful aggregation of information which, as the thinking time increases, implies a higher capacity to identify reliable sources of information. In general, the more the opinion formation strategy is cognitively involved, the more it pays to think long and hard; conversely, lazier heuristic rules require more instinctive decisions. Finally, for the same reasons discussed in the previous section, the performance of both processes increases with k and decreases with N, independently of τ. Since we do not have an analytic solution away from the equilibrium, whether the BP and MR overlaps converge to a finite value when the network size becomes infinite remains an open question.
§ CONCLUSION
To summarize, we have reformulated the mechanism of the well-known belief propagation algorithm in the context of opinion formation. We proposed a probabilistic and local heuristic rule by which an individual extracts information from a system of interconnected sources to identify which among them are credible and which are not. Based on recent empirical evidence on the functioning of human reasoning, we claim that our model can realistically describe one of the mechanisms underlying the individuals' spontaneous opinion formation process. Thus, the next step would be to gauge it with real data and experiments. In addition, we have shown that our approach performs better than existing local heuristic rules in the literature: It is less sensitive to both structural noise and the size of the network of information sources; it also benefits more from the thinking time that individuals spend forming their opinions. However, we have shown that the efficiency of any local heuristic decreases as the number of information sources increases and that a phase transition is observed at equilibrium: even with our belief propagation approach, above a critical level of noise the individual's opinions are random.
These findings, although less pessimistic than the conclusions in <cit.>, still hint that the reliability of the opinions of individuals with limited computation capabilities can be easily compromised, paving the way for the spread of misinformation. However, we provide a realistic framework that shows what are the conditions under which this is most likely. For this reason, we believe that our model is a useful tool for studying strategies to help people become properly informed.
At the same time, our work is strongly related and contributes to the literature on human learning with bounded rationality (see, for example, <cit.>).
Our model can be extended in many directions.
First, note that we only addressed the case of a random source network. On the one hand, this has many advantages since, in random networks, it can be proven that the equilibrium belief propagation algorithm is a good approximation of the optimal Bayesian computation <cit.>. However, in the real world we observe different types of networks, and therefore, in future research, it may be insightful to study how the performance of our belief propagation process varies in more complex network topologies.
Second, for the sake of simplicity, we assumed that the noise, and thus the compatibility matrices, are the same for every link in the network. However, in the real world, different sources have different relationships, and thus the structural noise is also not the same. Hence, future models could investigate how different noise distributions among links in the source network affect the outcomes of opinion formation heuristic rules. Moreover, a limitation of our model with respect to existing ones is that we must rely on the somewhat strong assumption that the observer knows the true magnitude of the noise. However, the Bayesian estimates performed by the human brain are imperfect, and this could be encoded in our model by defining another noise that affects the observer's estimate of the original noise. In this way, the compatibility matrices would be stochastic quantities. Another possibility to relax the assumption of perfect noise knowledge would be to design models by which the observer can “learn” and infer the noise in the network before the opinion formation process begins.
Finally, note that this paper investigates only one aspect of opinion formation, i.e., solitary opinion formation without social interaction. In contrast, previous literature has mostly focused on the social aspect of opinion propagation. Thus, the natural next step is to combine our heuristic rule with social influence.
§.§.§ Acknowledgements
We thank M. Medo and Y.C. Zhang who helped to improve this paper by their comments.
§.§.§ Author contributions
E.M.F. and A.L.C. contributed equally to this work.
§.§.§ Competing Interests statement
The authors declare no competing interests.
§ DERIVATION OF THE MR TIPPING NOISE
Here, we derive Eq.<ref>. Let us denote by P_i(t) the probability that, with the MR heuristic, the observer has the correct opinion on node i at time t, i.e., o_i(t)=(1,0) (under the gauge transformation). When the source network size N, the average degree k, and time t are large enough, we can neglect correlations and assume P_i(t)=P_j(t)≡ P(t) for each i and j. In this way, we can write
P(t+1) = P(t) (N-1)/N + (1/N) ∑_i=⌊k/2⌋+1^k C(k,i) W_t^i (1-W_t)^(k-i).
The first term on the right-hand side is the probability that the observer, at time t, has the correct opinion on the target node and that the latter is not chosen by the observer at time t+1 (recall that, for t>N-1, the observer chooses, at each time step, a random node). The second term is the probability that the target node is chosen at time t+1 and that the observer forms the correct opinion. Here, W_t is the probability that the vote v of a node toward the considered node is positive (at time t), i.e., W_t ≡ (1-2r)P(t)+r. Note that in Eq.<ref> we have implicitly made two further approximations: first, we have again assumed that the number of neighbors of each node exactly matches the average degree k; second, we have neglected the probability that v=0 (possible only when k is even), i.e., we did not include in the sum the term (1/2) C(k,k/2) [W(1-W)]^(k/2) (however, in the critical noise calculation shown below, this term would not affect the result).
Now, at equilibrium we have P(t+1)=P(t)≡ P and W_t≡ W. Thus, Eq.<ref> becomes
P = ∑_i=⌊k/2⌋+1^k C(k,i) W^i (1-W)^(k-i).
We observe that when r=1/2 the only possible solution to this equation is P=1/2; on the other hand, when r=0, there are three possible solutions, i.e., P=0, P=1, and P=1/2. Therefore, we expect that there exists a critical noise r=r_c such that, for r<r_c and a positive seed node, P>1/2 and thus q>0 for τ→∞. Indeed, expanding Eq.<ref> to first order around P=1/2, and solving the equation for r, we retrieve Eq.<ref>.
§ DERIVATION OF THE RN OVERLAP
Here, we derive Eq.<ref>. Let us define the probability P(x,t) that, at time t, the number of correct opinions with the RN heuristic (and thus corresponding to (1,0) due to the gauge transformation) is x. At each time step, x can increase or decrease by one, or remain the same. Thus, the master equation of the RN process has the form
P(x,t+1)=P(x-1,t)W(x-1→ x)
+P(x+1,t)W(x+1→ x)+P(x,t)W(x→ x),
where W(x→ y) is the transition probability that the number of correct opinions becomes y from x. To increase x by one, the observer must pass over a node with an incorrect opinion and reevaluate it correctly. The probability that this occurs corresponds to W(x-1 → x), which, for a given r, is given by
W(x-1 → x) = ((N-x+1)/N) [ ((x-1)/(N-1)) (1-r) + ((N-x)/(N-1)) r ].
Here, the first factor is the probability of choosing a node with an incorrect opinion. The second factor is the probability that the opinion on that node is reevaluated correctly. Similarly, we can write
W(x+1 → x) = ((x+1)/N) [ (x/(N-1)) r + ((N-1-x)/(N-1)) (1-r) ].
The transition probability W(x→ x), instead, is given by the probability that the observer does not change his/her past opinion on a node. This is
W(x → x) = (x/N) [ ((x-1)/(N-1)) (1-r) + ((N-x)/(N-1)) r ] + ((N-x)/N) [ (x/(N-1)) r + ((N-1-x)/(N-1)) (1-r) ].
Now, by multiplying both sides of Eq.<ref> by x, summing over all possible x, and considering sufficiently large N, we can rearrange Eq.<ref> to obtain
⟨ x(t+1)⟩=⟨ x(t)⟩(1-2r/N)+r.
Since the relation between x and q is Nq = 2x - N, Eq.<ref> is a recursive equation that must be solved with the initial condition
⟨ x(N-1)⟩ = N(q_1+1)/2,
where q_1 := ⟨ q(N,r)⟩^RN_τ=1.
Finally, from the solution of Eq.<ref>, we can write the expected value of q for τ>1 as in Eq.<ref>.
Notice that this result better reproduces the numerical simulations when the information source network is not sparse (i.e. when k is sufficiently large) so that the proportion of correct opinions in the transition probabilities of Eq.<ref>, <ref>, and <ref> is better approximated.
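As a consistency check (ours), the recursion can be iterated directly from the initial condition and compared with the closed form quoted in the main text.

    def q_rn_recursion(N, r, tau, q1):
        """Iterate <x(t+1)> = <x(t)>(1 - 2r/N) + r from <x(N-1)> = N(q1+1)/2."""
        x = N * (q1 + 1) / 2
        for _ in range((tau - 1) * (N - 1)):         # steps from t = N-1 up to t = tau (N-1)
            x = x * (1 - 2 * r / N) + r
        return (2 * x - N) / N                       # back to the overlap via q = (2x - N)/N

    # agrees with q1 * (1 - 2r/N)**((tau - 1) * (N - 1)) up to floating-point error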
Slow-fast systems with an equilibrium near the folded slow manifold
Natalia G. Gelfreikh, Alexey V. Ivanov
math.DS (MSC: 37C55, 37D25, 37B55, 37C60) | http://arxiv.org/abs/2307.00953v1 | 20230703115223
===================================================================
We study a slow-fast system with two slow and one fast variables.
We assume that the slow manifold of the system possesses a fold and there is an equilibrium of the system
in a small neighbourhood of the fold. We derive a normal form for the system
in a neighbourhood of the pair "equilibrium-fold"
and study the dynamics of the normal form. In particular, as the ratio of two time scales tends to zero we obtain an asymptotic formula for the Poincaré map
and calculate the parameter values for the first period-doubling bifurcation. The theory is applied to a generalization
of the FitzHugh-Nagumo system.
Keywords: slow-fast systems, period-doubling bifurcation
MSC 2010: 37C55, 37D25, 37B55, 37C60
§ INTRODUCTION
Since the fundamental works of A. N. Tikhonov, L. S. Pontryagin and N. Fenichel <cit.>, <cit.>, <cit.>, singularly perturbed dynamical systems have been the subject of much research and are intensively studied nowadays. These systems have many applications in various areas of physics: mechanics, hydrodynamics, plasma physics, neurobiology and others. They aim to describe systems in which different processes run at different speeds. Mathematically, such systems can often be written in the form of the so-called slow-fast system
εẋ =F(x,y) ,
ẏ = G(x,y),
where x∈^n is a fast variable and y∈^m is a slow one. The parameter ε describes the ratio of two time scales and is assumed to be small.
In the limit ε = 0 we obtain the slow system
0 =F(x,y) ,
ẏ = G(x,y),
which describes the motion in a vicinity of the slow manifold
defined by the equality F(x, y) = 0. On the other hand, using the fast time s = t/ε
we can rewrite the system (<ref>) as
x' = F(x,y) ,
y' = ε G(x,y),
where the prime stands for the derivative with respect to s.
Setting ε = 0 in (<ref>), we obtain the fast system
x' =F(x,y),
y' = 0.
The fast system approximates the original system on any finite interval with respect to s
due to the smooth dependence of solutions on the vector field. It should be noted that
for ε≠ 0 the systems (<ref>)
and (<ref>) are equivalent,
however, the limit systems (<ref>) and (<ref>) are different.
The standard tool for studying slow-fast systems
is based on the geometric singular perturbation theory (GSPT) <cit.>.
Using the notion of normal hyperbolicity,
GSPT predicts that a trajectory attracted by a stable branch of the slow manifold follows closely a trajectory of (<ref>)
till this trajectory hits a singularity of the slow manifold.
Singularities of slow manifolds cause a variety of phenomena
including delay of stability loss and canard explosion
(see e.g. <cit.>, <cit.>, <cit.>, <cit.>).
The present paper was inspired by <cit.>, where the author studied a FitzHugh-Nagumo-like system originating from the mathematical theory of neural cells. This system consists of three ODEs with one fast variable corresponding to the membrane potential and two slow gating variables:
εẋ = x-x^3/3-y-z ,
ẏ = a+x ,
ż = a+x-z,
where ε is a small parameter and a is a real parameter.
The slow manifold of the system is described by the equation x-x^3/3-y-z = 0 and possesses folds at x = ± 1, y+z = ± 2/3.
The system has a unique equilibrium which is
close to the fold if a is close to one.
The equilibrium is stable for larger values of a
and undergoes a supercritical Andronov-Hopf bifurcation at a_H = 1-1/4ε + O(ε^2) (see <cit.> for details) .
In <cit.> M. Zaks found numerically that the initial periodic orbit may lose stability via a sequence of period-doubling bifurcations. Studying numerically the period-doubling cascades for small but fixed values of the parameter ε, M. Zaks observed that the cascade follows the Feigenbaum law with the Feigenbaum constant 4.67… typical for dissipative systems. On the other hand
for smaller values of ε the process switches to the Feigenbaum constant of a conservative map as, in the limit ε→ 0,
the two-dimensional Poincaré map nearly preserves the area. The reason for this phenomenon was assumed to be the closeness of the equilibrium to a fold of the slow manifold.
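This behaviour is straightforward to explore numerically. The sketch below (ours; the values of ε and a are only illustrative and would have to be scanned to locate the actual bifurcations) integrates the system with a stiff solver and collects the local maxima of x on the attractor, whose number of distinct values reveals the period of the orbit.

    import numpy as np
    from scipy.integrate import solve_ivp

    def fhn(t, u, eps, a):
        x, y, z = u
        return [(x - x**3 / 3 - y - z) / eps,        # fast membrane variable
                a + x,                                # first slow gating variable
                a + x - z]                            # second slow gating variable

    eps, a = 0.02, 0.99                               # illustrative values near a_H = 1 - eps/4
    sol = solve_ivp(fhn, (0, 400), [0.5, 0.0, 0.0], args=(eps, a),
                    method="Radau", rtol=1e-9, atol=1e-9, dense_output=True)

    t = np.linspace(200, 400, 200000)                 # discard the transient, sample the attractor
    x = sol.sol(t)[0]
    maxima = x[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
    print(np.unique(np.round(maxima, 3)))             # 1, 2, 4, ... distinct maxima as a is varied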
In the present paper we consider a family of slow-fast systems having one fast and two slow variables and depending on a real parameter δ:
εẋ = F(x,y,z;δ ) ,
ẏ = G_1(x,y,z;δ ) ,
ż = G_2(x,y,z;δ ) ,
where (x,y,z)∈ℝ^3, F, G_1 and G_2 are smooth functions, and ε and δ are small parameters.
We suppose that the system (<ref>) possesses an equilibrium (x,y,z) = (x_0(δ ), y_0(δ ), z_0(δ ) )
and if δ =0 the equilibrium lies on the fold of the slow manifold.
Shifting the origin into the equilibrium, we have the following conditions:
F(0,0,0;δ )=0, G_1(0,0,0;δ )=0, G_2(0,0,0;δ )=0.
The slow manifold is defined by the equality
F(x,y,z;δ ) = 0.
We assume that for δ =0 the slow manifold possesses a non-degenerate fold which is tangent to a fast fibre.
More precisely,
we impose the following conditions (see e.g. <cit.>):
F'_x(0,0,0;0)=0, ∇ F(0,0,0;0)≠ 0, F”_x^2(0,0,0;0) ≠ 0.
Finally, we impose a condition
F'_y (0,0,0; 0) G'_1x(0,0,0; 0) +F'_z (0,0,0; 0) G'_2x (0,0,0; 0) < 0,
which ensures the linear stability of the equilibrium in the following sence. Consider the equations (<ref>) linearized at the point (0, 0, 0, 0). Then its characteristic equation reads
-λ^3+(G'_1y+G'_2z)λ^2 + (ε^-1(F'_yG'_1x + F'_zG'_2x) - (G'_1yG'_2z-G'_1zG'_2y))λ - ε^-1(F'_y(G'_1xG'_2z-G'_1zG'_2x) - F'_z(G'_1xG'_2y-G'_1yG'_2x)) = 0,
where all the derivatives are evaluated at the point (0,0,0,0). Substituting Λ = √(ε)λ, we obtain
-Λ(Λ^2 - (F'_yG'_1x + F'_zG'_2x)) = O(√(ε)).
By setting ε=0 in (<ref>) one arrives at the limit equation
-Λ(Λ^2 - (F'_yG'_1x + F'_zG'_2x)) = 0,
which has three roots Λ_0 = 0, Λ_± = ±√(F'_yG'_1x + F'_zG'_2x). Then the condition (<ref>) guarantees that all these roots have non-positive (in fact, zero) real parts.
Under these assumptions we derive a normal form
for the system
(<ref>)
in a neighbourhood of the pair "fold-equilibrium":
ξ' = ξ^2-η +
μ (γ_0 ξ +γξ^3) +
μ^2 g_1(ξ, η, ζ) ,
η' = 2ξ + μ (α_1 η + α_2 ζ )
+ μ^2 g_2(ξ, η, ζ) ,
ζ' = μ ( β_1 η + β_2 ζ )
+ μ^2 g_3(ξ, η, ζ),
where μ = √(ε) is a new small parameter, γ_0, γ, α_1,2, β_1,2 are constants and the functions g_k, k=1,2,3, are polynomials in (ξ, η, ζ), which will be specified later. The parameter γ_0 has a special role: it describes the closeness of the equilibrium to the fold. Using the Poincaré map technique we study the dynamics of the normal form. In particular, we show that in a μ ln μ-neighborhood of the equilibrium the normal form system has a periodic trajectory. Varying the distance between the equilibrium and the fold, one may observe how this trajectory undergoes a period-doubling bifurcation. We obtain conditions on the parameters of the normal form which correspond to this scenario.
The paper is organized in the following way.
In the second section we derive the normal form.
The asymptotics for the Poincaré map
associated with the normal form are obtained in Section 3.
Section 4 is devoted to a construction of asymptotic
conditions for existence of a periodic orbit and its period-doubling bifurcations. The main result of the paper is the asymptotic formula (<ref>). Finally, in Section 5 we apply the obtained results to the FitzHugh-Nagumo system and
compare them with numerical data.
§ NORMAL FORM
In this section we derive the formal normal form for the system (<ref>) when the right-hand side satisfies the assumptions (<ref>), (<ref>) and (<ref>).
We construct a sequence of changes of space variables and time and apply them to (<ref>).
First, we introduce a new small parameter μ:
ε =μ^2
and make a rescaling of the space variables, time and the second parameter δ:
x=μ X_1, y= μ^2 Y_1,
z = μ^2 Z_1, t = μ s, δ = μ^2σ .
Then
εẋ=μ^2 X'_1 ,ẏ=μ Y'_1 ,ż = μ Z'_1 ,
where the prime stands for the derivative with respect to the "semi-fast" time s.
We substitute (<ref>) into (<ref>) and use the Taylor formula for the right hand side of (<ref>) in a neighborhood of the point (x,y,z;δ ) = (0,0,0;0). Then, taking into account (<ref>) and (<ref>), one obtains
X'_1 = F'_y Y_1 +F'_z Z_1 + 1/2 F”_x^2 X_1^2 +
+μ[ σ F”_xδ X_1 +
F”_xy X_1Y_1 + F”_xz X_1Z_1 +1/6 F”'_x^3 X_1^3
] +
+ μ^2 [σ F”_yδY_1 + σ F”_zδ Z_1 + 1/2 F”_y^2Y_1^2 +1/2 F”_z^2 Z_1^2 +F”_yz Y_1Z_1 +
X_1^2 ( 1/2σ F”'_x^2δ + 1/2 F”'_x^2yY_1 + 1/2 F”'_x^2zZ_1) + 1/24 F^(4)_x^4 X_1^4
] +O(μ^3),
Y'_1 = G'_1x X_1 + μ[ G'_1yY_1 + G'_1zZ_1 + 1/2 G”_1x^2 X_1^2 ] +
+μ^2
[ X_1( σ G”_1xδ +
G”_1xyY_1 + G”_1xz Z_1) + 1/6 G”'_1x^3X_1^3 ] +O(μ^3) ,
Z'_1 = G'_2x X_1 + μ[ G'_2yY_1 + G'_2zZ_1 + 1/2 G”_2x^2 X_1^2 ] +
+μ^2
[ X_1( σ G”_2xδ +
G”_2xyY_1 + G”_2xz Z_1) + 1/6 G”'_2x^3X_1^3 ] +O(μ^3) .
Here and below in this section all derivatives of the functions F, G_1 and G_2 are evaluated at the point (x,y,z; δ )=(0,0,0;0).
For μ=0 the system takes the form
X'_1 = F'_y Y_1 +F'_z Z_1 + 1/2 F”_x^2 X_1^2 ,
Y'_1 = G'_1x X_1 ,
Z'_1 = G'_2x X_1 .
and the multipliers of the corresponding linearized system satisfy an equation
- λ^3 + λ (F'_y G_1x' + F'_z G_2x') =0.
As it was mentioned above, in this paper we consider the case of stable equilibrium. Then, denoting by
D=F'_y G_1x' + F'_z G_2x',
the condition (<ref>) takes the form
D<0.
One may note that for μ=0 the system (<ref>) has an obvious integral:
- G_2x' Y_1 + G_1x' Z_1.
Taking this into account, we introduce new variables (X_2, Y_2, Z_2) by
X_2 = X_1, Y_2 = F'_y Y_1+F'_z Z_1,
Z_2 = - G_2x' Y_1 + G_1x' Z_1.
Due to (<ref>) the inverse change of variables is well-defined:
X_1 = X_2, Y_1 = 1/D ( G_1x' Y_2 - F'_z Z_2 ), Z_1 = 1/D (G_2x' Y_2 + F'_y Z_2)
and the system (<ref>) can be rewritten as
X'_2 = Y_2 + 1/2 F”_x^2 X_2^2 +μ{ X_2[ γ_0^(2) + γ_1^(2) Y_2
+ γ_2^(2) Z_2] + γ_3^(2) X_2^3 }+
μ^2 {γ_4^(2) Y_2
+ γ_5^(2) Z_2 +
γ_6^(2) Y_2^2
+ γ_7^(2) Z_2^2
+ γ_8^(2) Y_2 Z_2 +
X_2^2 [ γ_9^(2) + γ_10^(2) Y_2
+ γ_11^(2) Z_2] + γ_12^(2) X_2^4 }
+O(μ^3),
Y'_2 = D X_2 +
μ( α_1^(2) Y_2
+ α_2^(2) Z_2
+ α_3^(2) X_2^2 ) +
μ^2 {
X_2[ α_4^(2) + α_5^(2) Y_2 + α_6^(2) Z_2]
+ α_7^(2) X_2^3 }
+O(μ^3) ,
Z'_2 = μ[ β_1^(2) Y_2 +
β_2^(2)
Z_2 + β_3^(2) X_2^2 ] + μ^2 { X_2[ β_4^(2) + β_5^(2) Y_2 +
β_6^(2) Z_2] + β_7^(2) X_2^3 } +O(μ^3) ,
where
γ_0^(2) = σ F”_xδ , γ_1^(2) = 1/D( F”_xy G'_1x
+ F”_xz G'_2x) , γ_2^(2) =
1/D(F”_xz F'_y - F”_xy F'_z ), γ_3^(2) = 1/6 F”'_x^3 ,
α_1^(2) = 1/D (F'_yG'_1xG'_1y + F'_y G'_1z G'_2x + F'_zG'_1x G'_2y + F'_z G'_2x G'_2z ),
α_2^(2) = 1/D (F'^2_y G'_1z - F'^2_z G'_2y
+F'_yF'_z G'_2z - F'_yF'_z G'_1y ),
α_3^(2) = 1/2 (F'_y G”_1x^2 + F'_z G”_2x^2),
β_1^(2) = 1/D (G'^2_1x G'_2y
- G'_1x G'_2x G'_1y + G'_1x G'_2x G'_2z
- G'^2_2x G'_1z),
β_2^(2) = 1/D ( - F'_zG'_1xG'_2y
+ F'_zG'_2xG'_1y + F'_y G'_1xG'_2z - F'_y G'_2x G'_1z),
β_3^(2) = 1/2 (G'_1xG”_2x^2 - G'_2x G”_1x^2).
We do not write formulae for the coefficients of terms of the order μ^2 (i.e. γ^(2)_i, α^(2)_i, β^(2)_i with i ≥ 4) since they do not enter the main result of the paper.
In the system (<ref>) the variable Z_2 is slow and the leading term of the right-hand side in the first equation does not contain Z_2.
The next change of variables is aimed at simplification of the leading term of the right hand side in (<ref>). Using scaling
X_2=k X_3, Y_2=m Y_3, Z_2=Z_3, s =n τ
we represent (<ref>) as
X'_3 = nm/kY_3+1/2 F”_ x^2 kn X_3^2+O(μ),
Y'_3 = D kn/m X_3+O(μ) ,
Z'_3 = O(μ) .
We fix the factors m, k, n in the following way
m = D/F”_x^2, k = √(-2D)/F”_x^2 , n = √(2/-D) .
Then the leading term of (<ref>) is simplified to
X'_3 = X_3^2-Y_3 ,
Y'_3 = 2X_3 ,
Z'_3 = 0 .
Here the prime stands for the derivative with respect to τ.
And the whole system (<ref>) takes the form
X'_3 = X_3^2-Y_3
+μ( X_3(γ_0^(3) + γ_1^(3)Y_3+γ_2^(3)Z_3)+γ_3^(3)X_3^3) +
+ μ^2 {γ_4^(3) Y_3
+ γ_5^(3) Z_3 +
γ_6^(3) Y_3^2
+ γ_7^(3) Z_3^2
+ γ_8^(3) Y_3 Z_3 +
+ X_3^2 [ γ_9^(3) + γ_10^(3) Y_3
+ γ_11^(3) Z_3] + γ_12^(3)X_3^4 }+
O(μ^3) ,
Y'_3 = 2X_3+μ (α_1^(3)Y_3+ α_2^(3) Z_3+ α_3^(3) X_3^2 ) +
+ μ^2 ( X_3( α_4^(3) + α_5^(3) Y_3+ α_6^(3)Z_3) + α_7^(3)X_3^3 )+O(μ^3) ,
Z'_3 = μ( β_1^(3) Y_3+ β_2^(3) Z_3 + β_3^(3) X_3^2 )
+
+ μ^2 (X_3( β_4^(3) + β_5^(3) Y_3+ β_6^(3) Z_3) + β_7^(3) X_3^3 )+O(μ^3) ,
where
γ_0^(3) = √(2/-D) γ_0^(2) , γ_1^(3)= -√(-2D)/F”_x^2 γ_1^(2), γ_2^(3)= √(2/-D) γ_2^(2), γ_3^(3)= 2√(-2D)/F”^2_x^2 γ_3^(2) ,
α_1^(3) = √(2/-D) α_1^(2) , α_2^(3) = √(2/-D) F”_x^2/D α_2^(2) , α_3^(3) = -2√(2)/F”_x^2√(-D) α_3^(2) ,
β_1^(3) = -√(-2D)/F”_x^2 β_1^(2), β_2^(3) = √(2/-D) β_2^(2), β_3^(3) = 2√(-2D)/F”^2_x^2 β_3^(2).
The final change of variables is aimed at excluding as many coefficients α_i, β_i, γ_i as possible. One may remark that equations (<ref>) (and in fact already (<ref>) due to scaling (<ref>)) possess a symmetry. Namely, they are invariant with respect to the following transformation
(X_3,τ,μ) ↦ (-X_3, -τ, -μ)
This symmetry will be used to simplify the study of the Poincaré map. Thus, our purpose is not only to exclude a number of coefficients and keep the leading term, but also to preserve this symmetry.
For this reason we consider the following close-to-identity change of variables
(
[ X_3; Y_3; Z_3 ])=(I+μ A+μ^2 B)(
[ ξ; η; ζ ]) ,
where
A=([ 0 B_1 C_1; A_2 0 0; A_3 0 0 ])
, B=([ a_1 0 0; 0 b_2 c_2; 0 b_3 c_3 ])
.
Inverting (<ref>) yields
([ ξ; η; ζ ])=(I-μ A+μ^2 (A^2- B))([ X_3; Y_3; Z_3 ])+O(μ^3) .
In order to simplify the terms of the first order, we choose
B_1 = -1/2γ_1^(3), C_1 = - 1/2γ_2^(3), A_2 = α_3^(3) , A_3 = β_3^(3) .
Then by appropriate choice of a_1, b_2, b_3 and c_2 we remove four coefficients of the order μ^2 and
obtain the following system:
ξ' = ξ^2-η +
μ f_1+
μ^2 g_1 +O( μ^3) ,
η' = 2ξ + μ f_2
+ μ^2 g_2 +O( μ^3) ,
ζ' = μ f_3
+ μ^2 g_3 +O( μ^3) ,
where
f_1 = γ_0 ξ +γξ^3 ,
g_1 = γ_1 η + γ_2 η^2+ γ_3 ζ^2
+ γ_4ηζ + ξ^2 ( γ_5η+ γ_6ζ ) + γ_7ξ^4,
f_2 = α_1 η + α_2 ζ ,
g_2 = ξ ( α_3η + α_4ζ ) + α_5 ξ^3 ,
f_3 = β_1 η + β_2 ζ ,
g_3 = ξ ( β_3 η + β_4 ζ ) +β_5 ξ^3
and
γ_0 = γ_0^(3) +γ_1^(3)-α_3^(3) , γ = γ_3^(3),
α_1 = α_1^(3) + α_3^(3) - γ_1^(3), α_2 = α_2^(3) -γ_2^(3), β_1 = β_1^(3) + β_3^(3), β_2 = β_2^(3).
We omit terms of the order O(μ^3) and obtain the following system:
ξ' = ξ^2-η +
μ f_1+
μ^2 g_1 ,
η' = 2ξ + μ f_2
+ μ^2 g_2 ,
ζ' = μ f_3
+ μ^2 g_3
with f_i and g_i defined by (<ref>).
The system (<ref>) will be called the normal form in a neighborhood of a pair "equilibrium-fold". One should emphasize that equations (<ref>) are invariant under the following transformation
(ξ, τ, μ) ↦ (-ξ, -τ, -μ).
§ DYNAMICS OF THE NORMAL FORM
§.§ Poincaré map
Setting μ =0 in (<ref>) (that corresponds to ε =0 for the original system), we obtain the system
ξ' = ξ^2-η,
η' = 2ξ ,
ζ' = 0 .
In addition to the obvious integral of motion ζ, the unperturbed system has the second integral
J=(η + 1 - ξ^2)e^-(η+1).
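A direct check confirms that J is indeed a first integral: along trajectories of the unperturbed system, using ξ' = ξ^2-η and η' = 2ξ,
dJ/dτ = [ η' - 2ξξ' - (η + 1 - ξ^2)η' ] e^-(η+1) = 2ξ[ (1 + η - ξ^2) - (η + 1 - ξ^2) ] e^-(η+1) = 0.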
The orbits of the system (<ref>) belong to
intersections of the integrals' level sets (see fig.<ref>)
ζ = ζ_0, (η + 1 - ξ^2)e^-(η+1) = J_0.
Note that all fixed points of (<ref>) belong to
the line (0, 0, ζ). Moreover, the unperturbed system possesses a separatrix.
Namely, the parabola η = ξ^2-1 (that corresponds to J_0=0) separates the plane (ξ,η) into two parts:
above the parabola (J_0>0) all orbits of (<ref>) are closed and below it (J_0<0) all orbits are not closed.
To study the dynamics of (<ref>) we construct the Poincaré map for this system. The normal form system can be considered as a small perturbation of (<ref>).
We add to the system (<ref>) an additional equation in order to describe evolution of the variable J. Using (<ref>) and (<ref>), we obtain the following equation on J:
J' = μ f_4 e^-(η+1) + μ^2 g_4 e^-(η+1),
where
f_4 = (ξ^2-η)f_2 - 2ξ f_1,
g_4 = (ξ^2-η)g_2 - 2ξ g_1.
Then we introduce the Poincaré section S_-:
S_- = {( ξ, η, ζ) : ξ =0, -1<η<0 , ζ∈ℝ }.
We use the variables (ζ, J) as coordinates on this section and denote the first return map by
P(μ): S_-→ S_-.
Since all trajectories of the unperturbed system are closed for J>0,
the unperturbed Poincaré map P(0) coincides with the identity map.
It is natural to expect that after a perturbation a trajectory starting at S_- will hit this section again near the initial point.
Additionally one may expect that eigenvalues of the tangent map TP(μ)
should be close to one. However, this conjecture may become false if
the trajectory is located near the separatrix where the first return time grows substantially.
To describe this effect we will concentrate our attention on the region
near the separatrix where J≪ 1.
Assuming J_0 = O(μ ), we consider a closed orbit of the unperturbed system (<ref>), defined by ζ = ζ_0, (η + 1 - ξ^2)e^-(η+1)=J_0, and denote the orbit by 𝒪(ζ_0, J_0). We also introduce a
μlnμ-neighborhood of the orbit, U_μ(𝒪(ζ_0, J_0)). Finally, we denote by 𝒪_μ(ζ_0, J_0) the orbit of the system (<ref>), which contains the point ℳ^(0) for which
ξ =0, ζ =ζ_0, J= J_0 .
We suppose that 𝒪_μ(ζ_0, J_0) ⊂ U_μ(𝒪(ζ_0, J_0)).
One may note that a trajectory corresponding to the unperturbed orbit 𝒪(ζ_0, J_0) possesses different behaviour in different regions.
It starts at the point ℳ^(0)
and moves initially near the separatrix J=0 till
it comes close to a turning point
ξ_+=(k^-1-1)^1/2, η_+=k^-1-1, ζ_+=ζ_0,
where
k = 1/lnJ_0^-1,
k = O ( 1/lnμ^-1) .
Then the trajectory "detaches" from the separatix,
turns in the ξ-direction and then "flies" across the region between two branches of the separatrix.
At the top of the orbit
ξ_t =0, η_t = k^-1 +ln k^-1 -1 +o(1), ζ_t = ζ_0.
Then the trajectory approaches
the second turning point which is located symmetrically at ξ_-=-(k^-1-1)^1/2, η_-=k^-1-1, ζ_-=ζ_0, where the direction of motion with respect to
ξ is changed again and finally the trajectory follows
the separatrix until an intersection with S_-.
According to this description we highlight in the neighborhood U_μ(𝒪(ζ_0, J_0)) the following overlapping domains:
𝒟_1 = {(ξ, η, ζ)∈ U_μ(𝒪(ζ_0, J_0)): |ξ|≪ξ_+, η<η_+},
𝒟_2^± = {(ξ, η, ζ)∈ U_μ(𝒪(ζ_0, J_0)): |ξ - ξ_±|≪ 1, |η - η_±|≪ k^-1},
𝒟_3 = {(ξ, η, ζ)∈ U_μ(𝒪(ζ_0, J_0)): |ξ|≪ξ_+, η>η_+}.
Taking into account the symmetry of the system, we define an auxiliary Poincaré section
S_+= {(ξ, η, ζ)∈ U_μ(𝒪(ζ_0, J_0)): ξ=0, η > η_+ , ζ∈ℝ}
and introduce an auxiliary Poincaré map
F(μ) : S_-→ S_+.
We begin by considering the system (<ref>), (<ref>) in the domain 𝒟_1 with initial conditions:
ξ (0) = 0, ζ (0) = ζ_0, J(0)= J_0, η (0) = η_0: ( η_0 +1 ) e^-( η_0 +1 ) = J_0
and we find some point ℳ^(1) at the orbit
𝒪_μ(ζ_0, J_0) such that it belongs to
𝒟_1∩𝒟_2^+. Then we consider the system in 𝒟_2^+ with the corresponding initial conditions and find a point ℳ^(2) at 𝒪_μ(ζ_0, J_0)∩𝒟_2^+∩𝒟_3. Finally we consider the system in 𝒟_3 and find the point ℳ^(3) that belongs to 𝒪_μ(ζ_0, J_0) ∩ S_+. In this way we get F(μ ).
We also note that due to invariance of the system (<ref>) with respect to change (<ref>) the map F(-μ) corresponds to the Poincaré map between sections
S_- and S_+, but backward in time (see fig.<ref>).
Therefore the first-return map P(μ): S_-→ S_- can be represented as a composition
P(μ) = F^-1(-μ)∘ F(μ ).
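Before turning to the asymptotic analysis, we note that the map P(μ) is easy to explore numerically. The sketch below is an illustration added here, not part of the analysis: it integrates the normal form truncated at first order in μ (the μ^2 terms are omitted), with arbitrarily chosen values of the coefficients γ_0, γ, α_1,2, β_1,2, and detects returns to S_- as crossings of ξ=0 with ξ increasing; at each return the coordinates (ζ, J) are recorded.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Normal form truncated at first order in mu; the mu^2 terms g_i are omitted.
# Coefficient values below are chosen arbitrarily, purely for illustration.
mu = 0.01
g0, gam, a1, a2, b1, b2 = 1.0, -1.0/3.0, -0.5, -0.5, -0.5, -0.5

def rhs(t, u):
    xi, eta, zeta = u
    f1 = g0*xi + gam*xi**3
    f2 = a1*eta + a2*zeta
    f3 = b1*eta + b2*zeta
    return [xi**2 - eta + mu*f1, 2.0*xi + mu*f2, mu*f3]

def J_of(xi, eta):
    # second integral of the unperturbed system
    return (eta + 1.0 - xi**2)*np.exp(-(eta + 1.0))

def eta_from_J(J0):
    # invert (eta+1)exp(-(eta+1)) = J0 on the section S_- (-1 < eta < 0)
    lo, hi = -1.0 + 1e-12, 0.0
    for _ in range(200):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if J_of(0.0, mid) < J0 else (lo, mid)
    return 0.5*(lo + hi)

def first_return(zeta0, J0):
    """One application of the first-return map P(mu): S_- -> S_-."""
    u0 = [0.0, eta_from_J(J0), zeta0]
    event = lambda t, u: u[0]   # the plane xi = 0
    event.direction = 1         # xi increasing through 0: the section S_-
    event.terminal = True
    # step off the section before switching on the terminal event
    pre = solve_ivp(rhs, (0.0, 1e-3), u0, rtol=1e-10, atol=1e-12)
    sol = solve_ivp(rhs, (1e-3, 200.0), pre.y[:, -1], events=event,
                    rtol=1e-10, atol=1e-12)
    xi, eta, zeta = sol.y_events[0][0]
    return zeta, J_of(xi, eta)

zeta, J = 0.5, 0.05
for n in range(5):
    zeta, J = first_return(zeta, J)
    print(f"return {n+1}: zeta = {zeta:.6f}, J = {J:.6f}")
```

Iterating this map for parameter values satisfying the conditions derived below is a convenient way to observe the fixed point and its period-doubling directly.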
§.§ Fixed points and the period-doubling bifurcation
Since all trajectories of the unperturbed system are closed for J>0 one may expect that after a small perturbation the image of a point (ζ_0, J_0)∈ S_- under the Poincaré map will be close to the initial point (ζ_0, J_0). The condition for a fixed point reads as
P(μ )(
[ ζ_0; J_0 ])
=
(
[ ζ_0; J_0 ])
or, taking into account (<ref>), it can be rewritten
F(μ)(
[ ζ_0; J_0 ])
=
F(-μ)(
[ ζ_0; J_0 ]).
Due to smooth dependence of (<ref>) on μ one may represent the map F(μ) in the form
F(μ) = id + μ F_1 + μ^2 F_2 +μ^3 F_3 + O(μ^4).
Then the symmetry of the normal form (<ref>) implies that
F_1, F_3: (
[ ζ_0; J_0 ])
↦(
[ 0; 0 ]).
The period-doubling bifurcation occurs when one of the eigenvalues of the tangent map D_(ζ_0, J_0)P at a fixed point passes through -1. Thus, the condition for the period-doubling bifurcation can be written in the form
det(D_(ζ_0, J_0)P(μ) + I) = 0
or equivalently due to (<ref>) as
det( D_(ζ_0, J_0)F(μ) +
D_(ζ_0, J_0)F(-μ) ) = 0.
Substituting (<ref>) into (<ref>), one gets
det( 2I + 2μ^2 D_(ζ_0, J_0) F_2 + O(μ^4)) = 0
or
1+μ^2 Tr D_(ζ_0, J_0) F_2 + O(μ^4) = 0.
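Indeed, for the 2×2 matrix M = D_(ζ_0, J_0) F_2 one has det( 2I + 2μ^2 M + O(μ^4) ) = 4( 1 + μ^2 Tr M + μ^4 det M ) + O(μ^4), so after division by 4 the condition takes the displayed form.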
The condition (<ref>) shows that the second-order approximation (<ref>) of the map F(μ ) is necessary to reveal the cascade of period-doubling bifurcations.
§.§ Domain 𝒟_1
In this subsection we consider the system (<ref>) in the domain 𝒟_1. As the orbit is close to the separatrix in this domain,
we introduce a new variable ϑ instead of η:
η =ϑ +ξ^2-1.
Substituting (<ref>) into (<ref>) and (<ref>), one obtains
the system for (ξ,ϑ,ζ ,J):
ξ' = 1-ϑ
+μ f_1 +
μ^2 g_1 ,
ϑ' = 2ξϑ +μ ( f_2 -2ξ f_1 )
+ μ^2 (g_2 -2ξ g_1) ,
ζ' = μ f_3
+ μ^2 g_3 ,
J' = μ f_4 e^-ϑ-ξ^2
+ μ^2 g_4 e^-ϑ-ξ^2 ,
where all functions f_i, g_i are defined in (<ref>), (<ref>) and are evaluated at the point (ξ, ϑ + ξ^2 -1,
ζ).
Since ϑ≪ 1 in 𝒟_1, the derivative of ξ in (<ref>) is positive in this domain. Taking this into account, we choose ξ as a new independent variable and rewrite the system for the other variables
ϑ, ζ and J as functions of ξ:
dϑ/d ξ = 2ξϑ +μ ( f_2 -2ξ f_1 )
+ μ^2 (g_2 -2ξ g_1) /1-ϑ
+μ f_1 +
μ^2 g_1
,
dζ/d ξ = μ f_3
+ μ^2 g_3/1-ϑ
+μ f_1 +
μ^2 g_1 ,
dJ /d ξ = μ f_4
+ μ^2 g_4/1-ϑ
+μ f_1 +
μ^2 g_1 e^-ϑ -ξ^2 .
We will find a solution of the system (<ref>) as a perturbation of the orbit 𝒪(ζ_0, J_0)
ϑ (ξ ) = ϑ_0(ξ ) +μϑ_1 (ξ ) +μ^2ϑ_2 (ξ ) +O(μ^3),
ζ (ξ ) = ζ_0 +μζ_1(ξ ) +μ^2ζ_2 (ξ ) +O(μ^3),
J(ξ ) = J_0+μ J_1 (ξ ) +μ^2 J_2(ξ ) +O(μ^3)
and fix initial conditions for the unknown functions in the following way:
ϑ_i (0)=0, ζ_i(0)=0,
J_i(0)=0, i=1,2.
The function ϑ_0 (ξ ) corresponds to the unperturbed orbit and is given implicitly by the following equation
ϑ_0 e^-ϑ_0=J_0 e^ξ^2.
Hence
ϑ_0(ξ )=J_0 e^ξ^2( 1 +O ( J_0 e^ξ^2)
).
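We remark that the implicit equation for ϑ_0 is just the definition of J rewritten in the new variables: substituting η = ϑ_0 + ξ^2 - 1 into (η + 1 - ξ^2)e^-(η+1) = J_0 gives ϑ_0 e^-ϑ_0 e^-ξ^2 = J_0.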
Now we fix the point ℳ^(1) by setting the corresponding value of ξ =ξ^(1) as
ξ^(1) = k^-1/2(1 - 1/2kln k^-1) .
Then, taking into account (<ref>), one obtains that
(ξ^(1))^2 = k^-1-ln k^-1 +1/4 kln^2 k^-1,
e^(ξ^(1))^2 =k/J_0(1+O(kln^2 k^-1 )).
and the following estimates hold
ϑ_0 (ξ ) = J_0 e^ξ^2 +O(k^2), ϑ_0 (ξ ) = O(k).
Substituting (<ref>) into (<ref>) and expanding with respect to
μ, one obtains equations for the functions ϑ_i, ζ_i, J_i. Note that
derivatives dζ/ dξ, dJ/ dξ are of the order O(μ). This implies the function ϑ_2 does not appear in equations for ζ_2, J_2 and we need to construct the solution ϑ (ξ) only up to terms of the order O(μ).
The function ϑ_1 (ξ) satisfies an equation:
dϑ_1/d ξ = 2ξϑ_1/(1-ϑ_0)^2
+ ℛ_1^(ϑ )(ξ),
where
ℛ_1^(ϑ )(ξ) =
(1-ϑ_0)f_2 - 2ξ f_1/(1-ϑ_0)^2 ,
and the functions f_1, f_2 are evaluated at the point
(ξ, ϑ_0 +ξ^2 -1, ζ_0).
Taking into account (<ref>), the solution of the equation on ϑ_1(ξ ) has the form
ϑ_1 (ξ ) = e^∫_0^ξ2s ds/(1-ϑ_0(s))^2∫_0^ξ e^-∫_0^s 2p dp/(1-ϑ_0(p))^2·ℛ_1^(ϑ )(s)ds .
From (<ref>) we get
∫_0^s 2p dp/(1-ϑ_0(p))^2
= ∫_0^s 2p( 1 +O(ϑ_0(p))) dp
= s^2 + O(k).
Then, using (<ref>) and (<ref>), one obtains
ϑ_1 (ξ ) = e^ξ^2∫_0^ξ e^-s^2[ -2γ s^4 +(α_1 -2γ_0 )s^2 - α_1+ α_2 ζ_0
] ds
·( 1+O(k) ).
Note that
Φ (ξ) = ∫_0^ξ e^-s^2 ds= √(π)/2 +O (
e^-ξ^2/ξ) , ξ→ +∞ ,
∫_0^ξ s^2 e^-s^2 ds= -1/2ξ e^-ξ^2 +1/2Φ (ξ) ,
∫_0^ξ s^4 e^-s^2 ds= -1/2ξ^3 e^-ξ^2
-3/4ξ e^-ξ^2 +3/4Φ (ξ) .
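The second and third formulae follow from the first one by integration by parts:
∫_0^ξ s^m e^-s^2 ds = -1/2ξ^m-1 e^-ξ^2 + m-1/2∫_0^ξ s^m-2 e^-s^2 ds , m=2,4.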
We introduce
A(ζ_0) = √(π)/2( - 3/2γ + α_2ζ_0 -1/2α_1 - γ_0 ) .
Thus, for ϑ_1 (ξ ) one has
ϑ_1 (ξ ) =[ e^ξ^22/√(π) A(ζ_0) Φ (ξ)
+Q_3^(ϑ )(ξ) ] (1+O(k)),
where Q_3^(ϑ ) (ξ ) is a polynomial of the third order in ξ.
Using (<ref>), we may calculate ϑ_1 (ξ) at the point
ℳ^(1)
ϑ_1^(1)= ϑ_1 (ξ^(1)) = A(ζ_0)
k/J_0(1+O(k ln^2 k^-1)) +O(k^-3/2) .
From (<ref>) one gets that the i-th order approximation (ζ_i, J_i) satisfies a system of equations (i=1,2):
dζ_i/d ξ = ℛ_i^(ζ)(ξ),
d J_i/d ξ = ℛ_i^(J)(ξ).
Here for i=1
ℛ_1^(ζ)(ξ) = f_3/1-ϑ_0,
ℛ_1^(J)(ξ) = f_4/1-ϑ_0 e^-ϑ_0-ξ^2
and for i=2
ℛ_2^(ζ)(ξ) = f'_3,ηϑ_1 + f'_3,ζζ_1 + g_3/1-ϑ_0 +
f_3(ϑ_1 - f_1)/(1-ϑ_0)^2,
ℛ_2^(J)(ξ) = [
f'_4,ηϑ_1 + f'_4,ζζ_1 + g_4/1-ϑ_0 +
f_4(ϑ_0ϑ_1 - f_1)/(1-ϑ_0)^2 ]
e^-ϑ_0-ξ^2
and the functions f_i, g_i, i=1,…, 4 are evaluated at the point
(ξ, ϑ_0 +ξ^2 -1, ζ_0).
Taking into account (<ref>), the solution of the system (<ref>) has the form
ζ_i(ξ) = ∫_0^ξℛ_i^(ζ)(s) ds,
J_i(ξ) = ∫_0^ξℛ_i^(J)(s) ds.
Substituting (<ref>) into (<ref>) and then into (<ref>) and taking into account (<ref>), we get
ζ_1(ξ) = ∫_0^ξ[β_1(s^2-1) + β_2 ζ_0 ]
ds ·( 1+O(k) ),
J_1(ξ) = ∫_0^ξ e^-s^2[ - 2 γ s^4 +(α_1 -2γ_0 )s^2 -
α_1 + α_2 ζ_0 ] ds
·( 1+O(k) ).
Consequently,
ζ_1(ξ) = [ 1/3β_1 ξ^3- β_1ξ
+ β_2 ζ_0ξ] (1 +O(k)) ,
J_1 (ξ) = [ 2/√(π)
A(ζ_0) Φ (ξ) + e^-ξ^2 Q_3^(J )(ξ) ] (1+O(k )).
where Q_3^(J ) (ξ ) is a polynomial of the third order in ξ.
Then, using (<ref>), we have at the point ℳ^(1)
ζ_1^(1)=ζ_1(ξ^(1)) = k^-3/2[ 1/3β_1 + β_2 kζ_0 - 1/2(β_1+β_2kζ_0)k ln k^-1 - β_1k + O(k^2ln^2 k^-1)],
J_1^(1)=J_1 (ξ^(1)) = A(ζ_0)(1+O(k )) +O ( J_0/k^5/2).
For i=2 we use (<ref>) and
take into account (<ref>), (<ref>) to obtain
ℛ_2^(ζ)(ξ) =[ (β_1 ξ^2 + β_2 ζ_0 ) e^ξ^22/√(π) A(ζ_0) Φ (ξ) + Q_5^(ζ ) (ξ )
] (1+O(k)),
where Q_5^(ζ ) (ξ ) is a polynomial of the fifth order in ξ.
Note that due to (<ref>)
∫_0^ξ^(1) e^s^2Φ (s) ds = √(π)/4k^3/2/J_0 (1+O(k ln^2 k^-1)) ,
∫_0^ξ^(1) s^2 e^s^2Φ (s) ds = √(π)/4k^1/2/J_0 (1+O(k ln^2 k^-1)) .
Then for i=2 we get from (<ref>)
ζ_2^(1) = ζ_2(ξ^(1)) = A(ζ_0)
k^1/2/2J_0[ β_1 + β_2 kζ_0]
(1+O(k ln^2 k^-1)) + O(k^-3) .
Application of formulae (<ref>), (<ref>) and (<ref>) yields
ℛ_2^(J)(ξ) =[
e^-ξ^2 Q_7^(J) (ξ )
- ( α_1 ξ^2 + α_2 ζ_0 - 2α_1
)
2/√(π) A(ζ_0) Φ (ξ )
] (1+O(k)),
where Q_7^(J ) (ξ ) is a polynomial of the seventh order in ξ.
Note that due to (<ref>)
∫_0^ξ^(1)Φ (s) ds = √(π)/2 k^-1/2 (1+O(kln k^-1)) - 1/2 ,
∫_0^ξ^(1) s^2 Φ (s) ds = k^-3/2/3√(π)/2 (1+O(kln k^-1)) .
Then we get
J_2^(1)= J_2 (ξ^(1)) =
- k^-3/2 A(ζ_0)
( α_1/3 +α_2 k ζ_0 ) (1+O(kln k^-1 )) +O(k^-1).
Thus, the coordinates (ζ^(1), J^(1)) of the point ℳ^(1) which belongs to 𝒪_μ(ζ_0, J_0) ∩𝒟_1∩𝒟_2^+ are described by
ζ^(1) = ζ_0 + μζ_1^(1) + μ^2ζ_2^(1)+O(μ^3),
J^(1) = J_0 + μ J_1^(1) + μ^2 J_2^(1)+O(μ^3),
where ζ_i^(1) and J_i^(1) are defined by
(<ref>), (<ref>) and (<ref>).
§.§ Domain 𝒟_2^+
The aim of this subsection is to obtain the second order approximation for a point ℳ^(2)=(ζ^(2), J^(2)) ∈𝒪_μ(ζ_0, J_0) ∩𝒟_2^+∩𝒟_3.
We
fix the point ℳ^(2) by setting a value of the variable η corresponding to this point as
η^(2) = k^-1 + 1/2ln k^-1 - 1 .
To consider the system in the domain 𝒟_2^+ it is convenient to introduce new variables z and v in the following way:
ξ = k^-1/2(1-kz), η = k^-1+ v -1 .
Then the coordinate v corresponding to ℳ^(2)=(ζ^(2), J^(2)) is
v^(2)=1/2ln k^-1 .
On the other hand, the coordinates (z,v ) which correspond to the point ℳ^(1)=(ζ^(1), J^(1)) are
z^(1)=1/2ln k^-1 ,
v^(1) = η^(1)-k^-1+1=ϑ^(1)+(ξ^(1))^2-k^-1 .
Therefore the coordinate v^(1) depends on μ:
v^(1) = v_0^(1) + μ v_1^(1) + O(μ^2),
where, due to relations (<ref>), (<ref>), (<ref>),
v_0^(1) = -ln k^-1 + O(kln^2k^-1),
v_1^(1) = ϑ_1^(1) =
A(ζ_0) k/J_0·(1 + O(kln^2 k^-1))
+O(k^-3/2).
In terms of new variables the equations of motion (<ref>) and (<ref>) can be rewritten as
z' = 2k^-1/2z-k^1/2z^2+k^-1/2(v-1)-μ k^-1/2f_1 - μ^2 k^-1/2 g_1 ,
v' = 2k^-1/2 (1-kz)+μ f_2+μ^2g_2 ,
ζ ' = μ f_3
+ μ^2 g_3 ,
J ' = J_0[μ f_4
+ μ^2 g_4] e^-v ,
where the functions f_i, g_i, i=1,…, 4 are evaluated at the point
(k^-1/2(1-kz), k^-1-1+v, ζ).
Note that in the domain 𝒟_2^+ one has kz< 1 and the derivative v' does not vanish. Thus, one can take v as a new independent variable and obtain equations for (z, ζ, J) as functions of v:
dz/dv = 2z+v-1-kz^2-μ f_1-μ^2 g_1/2(1-kz)+μ k^1/2f_2+μ^2 k^1/2 g_2 ,
dζ/dv = k^1/2(μ f_3+ μ^2 g_3)/2(1-kz)+μ k^1/2f_2+μ^2 k^1/2 g_2 ,
d J/dv = J_0 k^1/2(μ f_4+ μ^2 g_4)/2(1-kz)+μ k^1/2f_2+μ^2 k^1/2 g_2] e^-v .
We supply (<ref>) by initial conditions corresponding to the point ℳ^(1):
z(v^(1)) = z^(1), ζ(v^(1)) = ζ^(1),
J(v^(1)) = J^(1)
and find the solution satisfying (<ref>) and (<ref>) in a form
z(v) = z_0(v) +μ z_1 (v) +O(μ^2),
ζ (v) = ζ_0 +μζ_1(v) +μ^2ζ_2 (v) +O(μ^3),
J(v) = J_0+μ J_1 (v) +μ^2 J_2(v) +O(μ^3) ,
where z_0(v) corresponds to the unperturbed orbit 𝒪(ζ_0, J_0).
The initial conditions (<ref>) are set at the point v^(1) which depends on μ.
Using the Taylor formula with respect to μ, one may reformulate these conditions at the point v_0^(1) as follows:
z_0(v_0^(1)) = z_0^(1) , z_1(v_0^(1)) = - d z_0/d v(v_0^(1))· v_1^(1),
ζ_1(v_0^(1)) = ζ_1^(1), ζ_2(v_0^(1)) = ζ_2^(1) - d ζ_1/d v(v_0^(1))· v_1^(1),
J_1(v_0^(1)) = J_1^(1), J_2(v_0^(1)) = J_2^(1) - d J_1/d v(v_0^(1))· v_1^(1) .
Substituting (<ref>) into (<ref>) and collecting the terms of the same order of μ, one gets equations for the components z_i, ζ_i, J_i.
We solve these equations with the initial conditions (<ref>) to obtain
ζ_i(v) and J_i(v). Finally, substituting v=v^(2), one gets the point ℳ^(2) =(ζ^(2), J^(2)).
Thus, our task is to derive asymptotic formulae for ζ_1,2 (v^(2)) and J_1,2(v^(2)). We begin with auxiliary asymptotics for z_0(v) and z_1(v).
Note that z_0(v) corresponds to the unperturbed orbit 𝒪(ζ_0, J_0). Hence, due to (<ref>), it is a solution of the following equation
z_0 - 1/2 k z_0^2 = 1/2( e^v - v).
Since k ( e^v - v)≪ 1 in the domain 𝒟_2^+, the function z_0(v) admits the following asymptotics
z_0 (v) = 1/2( e^v - v )
(
1 + O( k( e^v - v ) ) ).
It is not difficult to verify that in 𝒟_2^+
k z_0(v) =O(k^1/2).
From (<ref>) one may deduce that equation for z_1 as a function of v can be written as
d z_1/d v = (1 + kdz_0/dv/1-kz_0) z_1
+ ℛ_1^(z)(v),
where
ℛ_1^(z)(v)
= - f_1/2(1-k z_0(v)) -
k^1/2(2z_0(v) + v- 1 -kz_0^2) f_2/4(1-k z_0(v))^2
= O( k^-3/2 ).
Then
z_1 (v) = e^∫_v_0^(1)^v [1 + kdz_0/ds/1-kz_0]ds( z_1( v_0^(1))
+ ∫_v_0^(1)^v
e^-∫_v_0^(1)^s [1 + kdz_0/dp/1-kz_0]dp·ℛ_1^(z)(s)ds ) .
Due to (<ref>) and (<ref>) we have
∫_v_0^(1)^v [1 + kdz_0/ds/1-kz_0]ds = v+ ln k^-1 +O(k^1/2).
Then, using (<ref>), (<ref>) and (<ref>), one gets
z_1(v)= e^v/2J_0A(ζ_0)
(1+O(k^1/2)).
Finally, (<ref>) yields
z_1(v^(2)) = k^-1/2/2J_0 A(ζ_0)
(1+O(k^1/2)).
We substitute (<ref>) into (<ref>) and obtain equations for ζ_i and J_i (i=1,2) in the following form
dζ_i/d v = ℛ_i^(ζ)(v),
d J_i/d v = ℛ_i^(J)(v),
where for i=1:
ℛ_1^(ζ)(v) =
k^1/2f_3/2(1-k z_0(v))
,
ℛ_1^(J)(v) =
J_0k^1/2f_4/2(1-k z_0(v)) e^-v ,
and for i=2:
ℛ_2^(ζ)(v) =
k^1/2[
- k^1/2 f'_3,ξ· z_1(v) + f'_3,ζ·ζ_1(v) +
g_3/2(1-k z_0(v)) +
f_3· (2 k z_1(v) - k^1/2 f_2)/4(1-k z_0(v))^2],
ℛ_2^(J)(v) =
J_0 k^1/2[
- k^1/2 f'_4,ξ· z_1(v) + f'_4,ζ·ζ_1(v) +
g_4/2(1-k z_0(v)) +
f_4(2 k z_1(v) - k^1/2f_2)/4(1-k z_0(v))^2]
e^-v.
Here the functions f_i, g_i are evaluated at the point (ξ, η, ζ) = (k^-1/2(1-k z_0(v)), k^-1 + v - 1, ζ_0).
The solutions of (<ref>) are
ζ_i(v) = ζ_i( v_0^(1)) +∫_v_0^(1)^vℛ_i^(ζ)(s) ds,
J_i(v) = J_i( v_0^(1))+∫_v_0^(1)^vℛ_i^(J)(s) ds.
Due to (<ref>), (<ref>) and the estimate (<ref>) one may deduce from (<ref>) that the functions ℛ_1^(ζ)(v) and ℛ_1^(J)(v) can be written as
ℛ_1^(ζ)(v)
= 1/2 k^-1/2 (β_1 + β_2 k ζ_0) + O(k^1/2ln k^-1) ,
ℛ_1^(J)(v)
= -γ J_0 k^-3/2 e^-v (1+O(k^1/2))
.
Substituting v=v^(2) = 1/2ln k^-1 into (<ref>)
and taking into account the formulae
(<ref>), (<ref>), (<ref>), one obtains
ζ_1^(2)= ζ_1(v^(2)) = k^-3/2[ 1/3β_1 + β_2 kζ_0 + 1/4(β_1+β_2kζ_0)k ln k^-1 - β_1 k +
O(k^3/2) ]
,
J_1^(2)=J_1(v^(2)) = A(ζ_0)(1+O(k ))
+ O(J_0 k^-5/2 ).
The formula (<ref>) for i=2 together with (<ref>) leads to
ζ_2(v^(2)) = ζ_2^(1) - d ζ_1/d v(v_0^(1))· v_1^(1) +∫_v_0^(1)^v^(2)ℛ_2^(ζ)(s) ds,
J_2(v^(2)) = J_2^(1) - d J_1/d v(v_0^(1))· v_1^(1) +∫_v_0^(1)^v^(2)ℛ_2^(J)(s) ds.
Taking into account (<ref>), (<ref>) and (<ref>),
one gets
- d ζ_1/d v(v_0^(1))· v_1^(1) =
- A(ζ_0)k^1/2/2J_0(β_1 +β_2kζ_0)(1+O(k^1/2)) ,
- d J_1/d v(v_0^(1))· v_1^(1) = γ k^-3/2 A(ζ_0) (1+O(k^1/2)) .
Then (<ref>), (<ref>), (<ref>)
yield
ℛ_2^(ζ)(v)
= k^1/2/4 J_0 A(ζ_0)
(β_1 +β_2 kζ_0) e^v (1+O(k^1/2))
,
ℛ_2^(J)(v)
= O( k^-1/2 ) .
Hence
∫_v_0^(1)^v^(2)ℛ_2^(ζ)(s) ds
= 1/4 J_0 A(ζ_0)
(β_1 +β_2 kζ_0) (1+O(k^1/2))
,
∫_v_0^(1)^v^(2)ℛ_2^(J)(s) ds
= O( k^-1/2ln k^-1 ) .
Substituting these formulae together with (<ref>), (<ref>) into (<ref>), we get
ζ_2^(2) =ζ_2(v^(2)) = A(ζ_0)1/4J_0[ β_1 + β_2 kζ_0]
(1+O(k^1/2))
,
J_2^(2) =J_2 (v^(2)) = -A(ζ_0)k^-3/2( α_1/3 +α_2 k ζ_0 - γ)(1+O(k^1/2)).
Consequently, the point ℳ^(2) is characterized by
ζ^(2)=ζ_0 +μζ_1^(2)
+μ^2 ζ_2^(2) +O(μ^3),
J^(2)=J_0 +μ J_1^(2)
+μ^2 J_2^(2) +O(μ^3),
where ζ_i^(2) and J_i^(2) are defined by (<ref>) for i=1 and (<ref>) for i=2.
§.§ Domain 𝒟_3
In this subsection we derive an asymptotic for the Poincaré map F(μ). In the domain 𝒟_3 it is convenient to introduce variables
y = k^1/2ξ, v = η - k^-1 + 1.
In terms of the variables (y, v, ζ) the section S_+ can be rewritten as
S_+ = {(y, v, ζ)∈ U_μ(𝒪(ζ_0, J_0)): y=0, v > 0}
and the equations of motion (<ref>) and (<ref>) take the form
y' = k^-1/2 (y^2 -1 - k(v-1)) +μ k^1/2f_1 +μ^2 k^1/2 g_1 ,
v' = 2k^-1/2y +μ f_2 +μ^2 g_2 ,
ζ ' = μ f_3
+ μ^2 g_3 ,
J ' = J_0( μ f_4
+ μ^2 g_4 ) e^-v ,
where the functions f_i, g_i are evaluated at the point
(ξ, η, ζ) = (k^-1/2y, v + k^-1 - 1, ζ).
One may note that in the domain 𝒟_3 the derivative y' does not vanish. Hence, we may set y as a new independent variable and consider (v,ζ ,J) as functions of y. Then evolution of (v,ζ ,J) is described by the following equations
d v/d y = 2y +μ k^1/2f_2 +μ^2 k^1/2 g_2/(y^2 -1 - k(v-1)) +μ k f_1 +μ^2 k g_1 ,
d ζ/d y = k^1/2( μ f_3 + μ^2 g_3 ) /(y^2 -1 - k(v-1)) +μ k f_1 +μ^2 k g_1 ,
d J/d y = J_0 k^1/2( μ f_4 + μ^2 g_4 )/(y^2 -1 - k(v-1)) +μ k f_1 +μ^2 k g_1 e^-v.
We supply these equations by initial conditions corresponding to the point
ℳ^(2):
v(y^(2)) = v^(2), ζ(y^(2)) = ζ^(2), J(y^(2)) = J^(2),
where ζ^(2), J^(2) and v^(2) are defined by (<ref>) and (<ref>).
One represents the solution satisfying (<ref>) and (<ref>) in the form
v(y) = v_0(y) +μ v_1 (y) +O(μ^2),
ζ (y) = ζ_0 +μζ_1(y) +μ^2ζ_2 (y) +O(μ^3),
J(y) = J_0+μ J_1 (y) +μ^2 J_2(y) +O(μ^3),
where v_0(y) corresponds to the unperturbed orbit 𝒪(ζ_0, J_0).
Taking into account relations y=1-kz and v^(2)=1/2ln k^-1 and expanding y^(2) as
y^(2) = y_0^(2) + μ y_1^(2) + O(μ^2),
one obtains from (<ref>)
y_0^(2)= 1-kz_0(v^(2)) = 1 - 1/2k^1/2 + 1/4kln k^-1 + O(k)
and due to (<ref>)
y_1^(2)= - kz_1(v^(2)) = -A(ζ_0) k^1/2/2J_0·(1 + O(k^1/2)) + O(k^-5/2).
Then, using the Taylor formula with respect to μ, we shift initial conditions (<ref>) to the point y=y_0^(2) as follows:
v_0(y_0^(2)) = v_0^(2) , v_1(y_0^(2)) =
- d v_0/d y(y_0^(2))· y_1^(2),
ζ_1(y_0^(2)) = ζ_1^(2), ζ_2(y_0^(2)) = ζ_2^(2) - d ζ_1/d y(y_0^(2))· y_1^(2),
J_1(y_0^(2)) = J_1^(2), J_2(y_0^(2)) =
J_2^(2) - d J_1/d y(y_0^(2))· y_1^(2) .
Substituting (<ref>) into (<ref>) and expanding with respect to μ, one obtains equations for the i-th approximations ζ_i and J_i:
dζ_i/d y = ℛ_i^(ζ)(y),
d J_i/d y = ℛ_i^(J)(y).
The solution of this system is
ζ_i(y) = ζ_i(y_0^(2))+∫_y_0^(2)^yℛ_i^(ζ)(s) ds,
J_i(v) = J_i(y_0^(2))+∫_y_0^(2)^yℛ_i^(J)(s) ds.
For i=1 one has
ℛ_1^(ζ)(y) =
k^1/2f_3/y^2-1-k (v_0(y)-1),
ℛ_1^(J)(y) =
J_0k^1/2f_4/y^2-1-k (v_0(y)-1) e^-v_0(y),
and for i=2
ℛ_2^(ζ)(y) =
k^1/2f'_3,η· v_1(y) + f'_3,ζ·ζ_1(y)+g_3/y^2-1-k (v_0(y)-1) +
k^3/2f_3· (v_1(y) - f_1)/(y^2-1-k (v_0(y)-1))^2,
ℛ_2^(J)(y) =
J_0k^1/2[
(f'_4,η - f_4) · v_1(y) + f'_4,ζ·ζ_1(y)+g_4/y^2-1-k (v_0(y)-1) +
kf_4· (v_1(y) - f_1)/(y^2-1-k (v_0(y)-1))^2] e^-v_0(y),
where the functions f_i, g_i are evaluated at the point
(ξ, η, ζ) = (k^-1/2y, k^-1 + v_0(y) -1, ζ_0).
Our task is to derive asymptotic formulae for ζ_1,2 (0) and J_1,2(0). We begin with auxiliary asymptotics for v_0(y) and v_1(y).
The function v_0(y) satisfies the following equation
d v_0/d y = 2y/y^2 - 1 - k(v_0-1).
As v_0(y) corresponds to the unperturbed orbit 𝒪(ζ_0, J_0) then due to (<ref>) it is described implicitly by an equality
e^v_0=k^-1(1-y^2+kv_0).
Hence, in the domain 𝒟_3 it admits the estimate
v_0 (y) = ln k^-1 +ln (1-y^2) + O(k^1/2ln k^-1).
Applying (<ref>), we obtain the following equation for v_1(y):
d v_1/d y = k2 y/(y^2 - 1 - k(v_0(y)-1))^2 v_1
+ ℛ_1^(v)(y),
where
ℛ_1^(v)(y) =
k^1/2 f_2/y^2-1-k (v_0(y)-1) -
k2 y f_1/(y^2-1-k (v_0(y)-1))^2
and the functions f_i are evaluated at the point
(ξ, η, ζ) = (k^-1/2y, k^-1 + v_0(y) -1, ζ_0).
The solution of this equation is
v_1 (y) = e^k∫_y_0^(2)^y2 s/(s^2 - 1 - k(v_0(s)-1))^2ds( v_1(y_0^(2)) + ∫_y_0^(2)^y
e^-k∫_y_0^(2)^s2 p/(p^2 - 1 - k(v_0(p)-1))^2dp·ℛ_1^(v)(s) ds ).
Note that in the domain 𝒟_3 one has
1/y^2-1 = O(k^-1/2), v_0(y)=O(ln k^-1),
y^2-1-k(v_0-1)=(y^2-1)(1+O(k^1/2ln k^-1)),
e^± k∫_y_0^(2)^y2 s/(s^2 - 1 - k(v_0(s)-1))^2ds = 1+O(k^1/2) ,
ℛ_1^(v)(y)=O(k^-3/2) ,
∫_y_0^(2)^y ℛ_1^(v)(s) ds=O(k^-3/2) .
Taking into account (<ref>) and using (<ref>), (<ref>), (<ref>), (<ref>),
one concludes from (<ref>) that
v_1(y) = - A(ζ_0)/J_0(1+ O
(k^1/2ln k^-1)
)+ O(k^-3/2).
Then (<ref>) with (<ref>), (<ref>) yield
ℛ_1^(ζ)(y) = - k^-1/2 (β_1 +β_2 k ζ_0)/1-y^2
(1+O(k^1/2ln k^-1)) ,
ℛ_1^(J)(y) = O ( J_0k^-1/2/(1-y^2)^2) .
Consequently,
∫_y_0^(2)^0 ℛ_1^(ζ)(s) ds =
k^-1/2 (β_1 + β_2 k ζ_0)
( 1/4ln k^-1 + 1/2ln 4 )
+O( ln k^-1),
∫_y_0^(2)^0 ℛ_1^(J)(s) ds = O( k^-1 J_0 ) .
Taking into account (<ref>) and (<ref>) together with (<ref>), one gets
ζ_1^(3) = k^-3/2[ 1/3β_1 + β_2 kζ_0 + 1/2(β_1+β_2kζ_0)k (ln k^-1 + ln 4) - β_1k +
O(k^3/2ln k^-1)],
J_1^(3) = A(ζ_0)
(1+O(k)) + O(J_0 k^-1) .
For the second order terms, due to (<ref>) and (<ref>), we have:
ζ_2(0) = ζ_2^(2) - d ζ_1/d y(y_0^(2))· y_1^(2)
+∫_y_0^(2)^0ℛ_2^(ζ)(s) ds,
J_2(0) = J_2^(2) - d J_1/d y(y_0^(2))· y_1^(2)
+∫_y_0^(2)^0ℛ_2^(J)(s) ds,
where ζ_2^(2) and J_2^(2) are defined by (<ref>),
- d ζ_1/d y(y_0^(2))· y_1^(2)
= - ℛ_1^(ζ)(y _0^(2))· y_1^(2),
- d J_1/d y(y_0^(2))· y_1^(2)
= - ℛ_1^(J)(y_0^(2))· y_1^(2) .
Formulae (<ref>), (<ref>), (<ref>) imply
- d ζ_1/d y(y_0^(2))· y_1^(2) =
- A(ζ_0) k^-1/2/2J_0
(β_1 +β_2 kζ_0)
( 1+ O (k^1/2ln k^-1) ) ,
- d J_1/d y(y_0^(2))· y_1^(2) = O(k^-1) .
Application of (<ref>), (<ref>), (<ref>) leads to the following estimates
ℛ_2^(ζ)(y) =
O( k^1/2/J_0·1/(1-y^2)^2) ⇒∫_y_0^(2)^0 ℛ_2^(ζ)(s) ds = O( 1 /J_0) ,
ℛ_2^(J)(y) = O( k^1/2/(1-y^2)^3) ⇒∫_y_0^(2)^0 ℛ_2^(J)(s) ds = O(k^-1/2).
Therefore, we have the following asymptotics for the second order approximations
ζ_2^(3) = -A(ζ_0)k^-1/2/2J_0[ β_1 + β_2 kζ_0 ]
(1+O(k^1/2ln k^-1)) + O ( 1/J_0),
J_2^(3) = -A(ζ_0)k^-3/2( α_1/3 +α_2 k ζ_0 - γ)(1+O(k^1/2)) + O(k^-1).
Thus, the Poincaré map F(μ) can be represented as
F(μ ) : (
[ ζ_0; J_0 ])
↦(
[ ζ^(3); J^(3) ]) =
(
[ ζ_0 + μζ_1^(3)+μ^2 ζ_2^(3) +O(μ^3); J_0 +μ J_1^(3) +μ^2 J_2^(3) +O(μ^3) ])
,
where ζ_i^(3) and J_i^(3) are defined by (<ref>) for i=1 and (<ref>) for i=2.
§ FIXED POINTS AND PERIOD-DOUBLING BIFURCATIONS
In this section we derive conditions on the parameters of the normal form which lead to existence of a fixed point of the Poincaré map and its period-doubling bifurcation.
If (ζ_0,J_0)∈ S_- is a fixed point then according to (<ref>) the following conditions should be satisfied:
ζ_1^(3)=0, J_1^(3)=0.
Taking into account (<ref>), (<ref>) and definition of the parameter k, one may rewrite these conditions as
β_2ζ_0/ln J_0^-1[1 + lnln J_0^-1+ln 4/2ln J_0^-1] +
β_1[1/3 + lnln J_0^-1 + ln 4 -2/2ln J_0^-1] =
O(lnln J_0^-1/ln^3/2 J_0^-1),
- 3/2γ + α_2ζ_0 -1/2α_1 - γ_0
= O( J_0ln^2 J_0^-1 ).
Note that the coefficient γ_0 in (<ref>) is the only one which depends on the ratio δ/ε of the parameters of the initial problem (<ref>), namely:
γ_0 = ϰσ + ν,
ϰ = √(2/-D)F”_xδ,
ν = √(2/-D)D'_x/F”_x^2,
σ=δ/ε,
where the functions F”_xδ, F”_x^2, D, D'_x are evaluated at
(x,y,z,δ)=(0,0,0,0).
We solve the first equation with respect to ζ_0 and substitute the solution into the second one. Then the second condition (<ref>) can be considered as an equation defining J_0 in terms of the parameter σ:
ζ_0 = -β_1/3β_2ln J_0^-1(1 +
lnln J_0^-1 + ln 4 - 3/ln J_0^-1) +
O(lnln J_0^-1/ln^1/2 J_0^-1),
σ = - α_2β_1/3β_2ϰln J_0^-1(1 + lnln J_0^-1 + ln 4 - 3/ln J_0^-1) -
3γ+α_1+2ν/2ϰ +
O(lnln J_0^-1/ln^1/2 J_0^-1).
We emphasize here that due to J_0≪ 1 the ratio σ satisfies σ≫ 1.
One may also note that, due to the smooth dependence of solutions on initial conditions, the asymptotics (<ref>) and (<ref>) remain valid in U_μ(𝒪(ζ_0, J_0)) after differentiation with respect to (ζ_0, J_0). This leads to
Tr D_(ζ_0, J_0) F_2
= - √(π) k^-1/2/4 J_0α_2 (β_1 + β_2 k ζ_0)
( 1 +O(k^1/2ln k^-1) ) +O(k^-5/2).
Substituting this into (<ref>) and taking into account (<ref>), one gets a condition for the period-doubling bifurcation of the periodic trajectory corresponding to initial point (ζ_0, J_0):
1-μ^2 √(π)α_2β_1ln^1/2J_0^-1/6 J_0·(1 + O(lnln J_0^-1/ln^1/2J_0^-1) )
+ O(μ^2ln^-5/2J_0^-1)=0,
where J_0 satisfies (<ref>).
We solve this equation with respect to J_0 and substitute the solution into (<ref>). Then, taking into account the relation ε=μ^2, one obtains that the first period-doubling bifurcation occurs at
ζ_0^* = -β_1/3β_2[lnε^-1 +
1/2lnlnε^-1 -
ln(√(π)α_2β_1 e^3/24)] +
O(lnlnε^-1/ln^1/2ε^-1),
J_0^* = √(π)α_2β_1/6εln^1/2ε^-1(1 + O(lnlnε^-1/lnε^-1)),
δ^* =
- α_2β_1/3β_2ϰε[lnε^-1 +
1/2lnlnε^-1 -
ln(√(π)α_2β_1 e^3/24)]
- 3γ+α_1+2ν/2ϰε +
O(εlnlnε^-1/ln^1/2ε^-1).
It is to be noted that (<ref>), (<ref>) hold true provided ϰ≠ 0, α_2≠ 0, β_1,2≠ 0. We apply (<ref>), (<ref>), (<ref>) to obtain:
β_1 = √(2)/F”_x^2√(-D)( G'_1x| [ F”_x^2 G”_1x^2 G”_2x^2; 0 G'_1x G'_2x; F'_y G'_1y G'_2y ]| + G'_2x| [ F”_x^2 G”_1x^2 G”_2x^2; 0 G'_1x G'_2x; F'_z G'_1z G'_2z ]|
),
α_2 = √(2)/(-D)^5/2( F'_y| [ F”_x^2 F”_xy F”_xz; 0 F_y F_z; G'_1x G'_1y G'_1z ]| + F'_z| [ F”_x^2 F”_xy F”_xz; 0 F'_y F'_z; G'_2x G'_2y G'_2z ]|
),
β_2 = √(2)/(-D)^3/2| [ 0 F'_y F'_z; G'_1x G'_1y G'_1z; G'_2x G'_2y G'_2z ]| .
If all of the coefficients ϰ, α_2, β_1,2 vanish, then a fixed point does not exist for small values of J_0. In other cases one needs to perform a further asymptotic analysis to obtain conditions for the existence of a periodic orbit and its period-doubling bifurcation.
One may also remark that the distance between the fixed point and the fold is
ρ = δ F”_xδ√(F'^2_y + F'^2_z/(F'_yF”_xz)^2 + F”^2_x^2(F'^2_y + F'^2_z)) + O(δ^2).
Thus, the cascade of period-doubling bifurcations occurs when the equilibrium is not very close to the fold, but situated at a distance of the order
O(εlnε^-1).
§ EXAMPLE: THE FITZHUGH-NAGUMO SYSTEM
We apply our results to the FitzHugh-Nagumo system (<ref>).
Let
δ = 1-a, a+x=:x, y+a-a^3/3 =:y.
Then the system takes the form:
εẋ = -1/3 x^3 + x^2 (1-δ ) +x(2δ -δ^2)-y-z ,
ẏ = x ,
ż = x-z
and the functions F, G_1 and G_2 are
F(x,y,z,δ) =-1/3 x^3 + x^2 (1-δ ) +x(2δ -δ^2)-y-z , G_1 (x,y,z,δ) =x
, G_2 (x,y,z,δ) = x-z.
Note that the conditions (<ref>), (<ref>) and (<ref>) are satisfied.
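For the reader's convenience we record the relevant derivatives at the point (x,y,z,δ)=(0,0,0,0):
F'_x = 0, F'_y = F'_z = -1, F”_x^2 = 2, F”_xδ = 2, F”'_x^3 = -2, G'_1x = G'_2x = 1, G'_1y = G'_1z = G'_2y = 0, G'_2z = -1,
so that D = F'_y G'_1x + F'_z G'_2x = -2 < 0 and, by the formulae for ϰ and ν above, ϰ = √(2/-D) F”_xδ = 2 and ν = 0 (since D'_x = 0).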
We make a rescaling of the parameters
ε = μ^2 , δ = μ^2 σ,
and introduce new variables (ξ, η, ζ) by
x = μξ ,
y = μ^2/2 (η-ζ) ,
z = μ^2/2 (η +ζ) .
Then the inverse change gives
μ= √(ε), σ= δε^-1 and
ξ = 1/μ x , η = 1/μ^2( y+z) , ζ = 1/μ^2( - y+z ) .
In terms of these variables the FitzHugh-Nagumo system takes the form
ξ' = ξ^2-η
+ μ( 2σξ - 1/3ξ^3 ) - μ^2 σξ^2 -μ^3σ^2ξ ,
η' = 2ξ + μ( -1/2η - 1/2ζ) ,
ζ' = μ( -1/2η - 1/2ζ) .
Then
γ_0=2σ, γ=-1/3 , α_1=α_2=β_1=β_2=-1/2, ϰ = 2, ν = 0
and condition (<ref>) reads:
δ =
1/12ε[lnε^-1 +
1/2lnlnε^-1 -
ln(√(π) e^3/96)] +
3/8ε +
O(εlnlnε^-1/ln^1/2ε^-1).
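A minimal numerical evaluation of this asymptotics (added here as an illustration; all higher-order corrections are dropped) shows how small the bifurcation value of δ is for moderately small ε. Recall that δ = 1-a, so the corresponding value of the original parameter is a = 1-δ.

```python
import numpy as np

def delta_star(eps):
    """Leading-order asymptotics of the first period-doubling value of delta
    for the FitzHugh-Nagumo system; higher-order corrections are dropped."""
    L = np.log(1.0/eps)
    return (eps/12.0)*(L + 0.5*np.log(L) - np.log(np.sqrt(np.pi)*np.exp(3.0)/96.0)) \
        + 3.0*eps/8.0

for eps in (1e-2, 1e-3, 1e-4):
    d = delta_star(eps)
    print(f"eps = {eps:.0e}: delta* ~ {d:.6f}, a* = 1 - delta* ~ {1.0 - d:.6f}")
```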
We have compared our results with the numerical data obtained by M. Zaks and found good agreement.
[Fen] N. Fenichel, Geometric singular perturbation theory for ordinary differential equations, J. Diff. Eqns 31 (1979), pp. 53–98.
[Nei87] A. I. Neishtadt, Persistence of stability loss for dynamical bifurcations. I, Differential Equations Translations 23 (1987), pp. 1385–1391.
[Nei88] A. I. Neishtadt, Persistence of stability loss for dynamical bifurcations. II, Differential Equations Translations 24 (1988), pp. 171–176.
[Szm] P. Szmolyan, A singular perturbation analysis of the transient semiconductor-device equations, SIAM J. Appl. Math. 49 (1989), no. 4, pp. 1122–1135.
[KruSzm] M. Krupa, P. Szmolyan, Extending geometric singular perturbation theory to nonhyperbolic points - fold and canard points in two dimensions, SIAM J. Math. Anal. 33 (2001), no. 2, pp. 286–314.
[Kuehn] C. Kuehn, Multiple Time Scale Dynamics, Applied Mathematical Sciences, Vol. 191, Springer, 2015.
[Pon] L. S. Pontryagin, Asymptotic behavior of solutions of systems of differential equations with a small parameter in the derivatives of highest order, Izv. Akad. Nauk SSSR Ser. Mat. 21 (1957), pp. 605–626.
[Tikh] A. N. Tikhonov, Systems of differential equations containing a small parameter multiplying the derivative (in Russian), Mat. Sb. 31(73) (1952), pp. 575–586.
[Zaks] M. Zaks, On chaotic subthreshold oscillations in a simple neuronal model, Math. Model. Nat. Phenom. 6 (2011), no. 1, pp. 149–162.
|
http://arxiv.org/abs/2307.02526v1
|
20230705180000
|
Holographic models of non-Fermi liquid metals revisited: an effective field theory approach
|
[
"Dominic V. Else"
] |
cond-mat.str-el
|
[
"cond-mat.str-el",
"hep-th"
] |
Holographic models of non-Fermi liquid metals revisited:
an effective field theory approach
Dominic V. Else
Perimeter Institute for Theoretical Physics
August 1, 2023
==========================================================================================================
Accessing the physics of strongly coupled metals in a controlled way is a challenging problem in theoretical condensed matter physics. In this paper, we revisit the possibility of understanding strongly coupled metals through a holographic duality with a weakly coupled gravitational theory in one higher dimension (i.e. a suitable generalization of the “AdS/CFT duality”). Previous attempts at devising holographic models of strongly coupled metals have suffered from severe drawbacks; for example, they do not even seem to be able to describe a Fermi surface that satisfies Luttinger's theorem, which ought to be a core requirement in any physically reasonable model of a metal. Here, we propose a radically different approach to constructing holographic models of strongly coupled metals. The idea is that for applications, it should be sufficient to construct a holographic dual of the effective field theory that controls the infra-red physics of the metal. We invoke recent work that has identified a precise criterion for such an effective field theory to be “emergeable” from a continuum ultra-violet (UV) theory at nonzero charge density (or its equivalent in lattice models, namely an incommensurate charge filling). We show that imposing this criterion leads to a holographic model of a strongly coupled metal with physically reasonable properties, including a Fermi surface satisfying Luttinger's theorem. We discuss a possible physical interpretation of our results.
§ INTRODUCTION
Understanding strongly coupled quantum many-body phases of matter is a crucial problem in condensed matter physics. One important class of such phases of matter are the so-called “non-Fermi liquids”, which are metals that are not described by the conventional weakly-coupled Fermi liquid theory. Non-Fermi liquid physics is believed, for example, to be behind the exotic “strange-metal” regime seen in high-T_c cuprates <cit.> as well as other classes of materials <cit.>.
The strongly-coupled nature of non-Fermi liquids has made it challenging to find models in which any physics can be obtained in a controlled way.
One seemingly appealing strategy would be to invoke the idea of holography, or “AdS/CFT” <cit.>, in which certain strongly coupled quantum field theories (QFTs) are held to be dual to a weakly coupled quantum gravity theory in one higher space-time dimension. In an appropriate limit of the QFT, the dual gravitational theory can be treated classically, and the physics of the strongly coupled QFT can be extracted simply by solving the classical equations of motion in the dual theory.
Highly non-trivial quantum many-body effects in the QFT, such as thermalization and dissipation, can be “geometrized”, originating in the dual theory from the presence of a black hole.
Although such an approach has led to powerful insights in other areas <cit.>, the situation for non-Fermi liquid metals is not very satisfactory, despite a plethora of studies. The usual approach is that one starts from some gravitational theory that is supposed to be dual to a strongly coupled conformal field theory (CFT) (or a more general strongly coupled gapless theory without Lorentz or conformal invariance), and then imagines perturbing the field theory by switching on a nonzero charge density, leading to an RG flow to some new infra-red (IR) fixed-point which presumably describes some kind of strongly coupled metal. In the dual theory this corresponds to introducing an electric field, which backreacts on the metric, inducing a new geometry.
Unfortunately such models seem to inevitably have various pathologies. In the simplest model <cit.>, the bulk gravitational geometry is the so-called AdS-Reissner-Nordström metric, and the IR regime contains a charged black hole. The problem is that the Bekenstein-Hawking entropy of the black hole implies that the dual QFT has a nonzero entropy density even at zero temperature. Although this is of course similar to what happens in the Sachdev-Ye-Kitaev (SYK) model <cit.>, it seemingly contradicts the Third Law of Thermodynamics and seems very unlikely to occur in a realistic system without fine-tuning. By considering variants of this model with different values of the dynamical critical exponent and hyperscaling violation exponent <cit.> it is possible to eliminate the zero temperature entropy density, but this often comes at the expense of introducing other pathologies such as naked singularities in the gravitational theory (although these singularities may be considered acceptable <cit.> in the sense that they could be resolvable in a quantum gravity theory).
From our point of view, however, the most serious issue with these models is that they do not seem to capture the Fermi surface. In Fermi liquid theory, the “Fermi surface” – the codimension-1 surface in momentum space where the low-energy quasiparticles live – is crucial to the physics. Although non-Fermi liquids generally do not have quasiparticles, to the extent that we understand non-Fermi liquid physics in non-holographic models (for example, the “Hertz-Millis” type theories of quantum critical points <cit.>), a generalized notion of Fermi surface still appears to be key to the physics. Another important aspect of the Fermi surface is Luttinger's theorem <cit.>, which relates the volume enclosed by the Fermi surface to the microscopic charge density; in a rough sense, one should think of the portion of momentum space enclosed by the Fermi surface, known as the “Fermi sea”, as being “where the UV charge goes in the IR”. Although originally described for Fermi liquid theory, Luttinger's theorem is now understood to be much more general <cit.>.
Although in some cases one finds Fermi surfaces in holographic models of non-Fermi liquids <cit.>, they are generally “small” Fermi surfaces that do not satisfy Luttinger's theorem on their own, raising the question of what happened to the remainder of the UV charge (this can be traced back to the fact that most of the charge in the gravitational theory is hidden inside the black hole <cit.>). Attempts have been made <cit.> to draw analogies with phenomena known in condensed matter, such as (a) “fractionalized” phases such as the so-called fractionalized Fermi liquid (“FL*”) <cit.> in which a portion of the charge density is attributed to the topological degrees of freedom and does not contribute to the Fermi surface volume; or (b) systems such as the “composite Fermi liquid” <cit.> in which the Fermi surface relates to particles which are charged under an emergent deconfined gauge field, and hence is “hidden”, i.e. not easily detectable by gauge-invariant operators.
However, we feel that such analogies are highly misleading. In both cases (a) and (b), the system has only a microscopic lattice translation symmetry[In some cases, such as the composite Fermi liquid, the symmetry can be extended to a continuous translation symmetry, but one which is “magnetic”, i.e. the symmetry group is a non-trivial central extension of ℝ^d by U(1), as occurs in a uniform magnetic field. We do not want to consider such magnetic translation symmetries. The lattice translation symmetry we are referring would correspond to a commuting subgroup of the full non-commutative magnetic translation symmetry.]. One can show using the general methods of Refs. <cit.> that these mechanisms for a violation of Luttinger's theorem cannot be extended to a system with microscopic continuous translation symmetry. Therefore, in systems with microscopic translation symmetry, that are not superconducting, Luttinger's theorem should be considered a non-negotiable requirement, in contradiction to what one seems to find in the holographic models.
There are some suggestions that one does recover a Fermi surface satisfying Luttinger's theorem if one considers quantum gravity corrections in the gravitational theory <cit.>. On the other hand, since the Fermi surface is presumably central to the low-energy physics, this eliminates much of the original appeal of the holographic approach, namely that one can understand the physics of a strongly coupled system solely by solving classical equations of motion.
§.§ A new approach: holographic effective field theory
In this work, we wish to advocate an alternative approach to developing holographic models of non-Fermi liquids. In condensed matter physics, one normally does not try to exactly solve a microscopic lattice model at all scales. Instead, one invokes the concept of emergence – the IR physics, i.e. the physics at sufficiently long wavelengths, low frequency, and low temperature should be captured by an effective field theory, and one seeks to understand the nature of this effective field theory and not to worry about how exactly it emerges from the microscopic model. In the language of RG, the microscopic lattice model can be viewed as a UV theory which flows to a stable fixed-point in the IR, and one seeks to understand this IR fixed-point, not the details of how exactly the RG flow runs starting from the UV. indeed, Fermi liquid theory itself is best viewed from this perspective <cit.>.
In holography, the additional spatial coordinate in the higher-dimensional space-time can be interpreted with respect to the dual QFT as an “RG parameter”. Thus, in the holographic models discussed previously, what one is effectively attempting to do is to take a UV theory (e.g. some strongly coupled CFT), perturb it in some way (by switching on a nonzero charge density) and then study the entire RG flow from the UV theory (corresponding to near-boundary region of the bulk space-time) to the IR fixed point (corresponding to the region of the bulk space-time far away from the boundary). This is much more ambitious than what one typically attempts to do in condensed matter physics. Moreover, the relevance to condensed matter physics is in any case limited, since in condensed matter the UV theory will always be some lattice model, not a continuum field theory.
Therefore, what we advocate in this paper is to give up on this goal, and instead come up with a holographic formulation of a plausible IR effective field theory of a metal. This raises the obvious question, however, of what criteria we should use to judge a potential IR theory. Ultimately, of course, one must judge it by comparisons to experiment. However, in the meantime a useful criterion is the one which has been dubbed “emergeability” <cit.>: given a lattice model with certain properties (e.g. symmetries such as charge conservation and lattice translation symmetry), under which circumstances is it theoretically possible for a given effective field theory to arise as the low-energy description of the lattice model? Specifically, there are certain matching conditions between the UV and IR that must be satisfied.
An important example of such matching conditions are the so-called “filling constraints” <cit.>. If we have a lattice system in d spatial dimensions with U(1) charge conservation symmetry and ℤ^d lattice translation symmetry, then one can define a real number ν, called the filling which describes the average charge per unit cell in the ground state. In general there is a matching condition between the fractional part of ν and properties of the low-energy theory. An example of such a constraint is Luttinger's theorem, which we already mentioned above; in the case of lattice translation symmetry, the precise statement is that in a spinless Fermi liquid,
𝒱_F V_unit/(2π)^d = ν mod 1,
where 𝒱_F is the volume in momentum space enclosed by the Fermi surface, and V_unit is the volume of a translation unit cell.
A particularly interesting case is when the filling ν can be tuned to be an irrational number; we call such systems “compressible”. Compressibility implies very strong constraints on the low-energy physics <cit.>. Specifically, it was argued in Refs. <cit.> that the only way for the IR theory to be compatible with compressibility in spatial dimension d > 1 is that either there must be an emergent higher-form symmetry, or there must be an infinite-dimensional emergent symmetry group. The former possibility is realized in superfluids where the charge U(1) is spontaneously broken and there is an emergent (d-1)-form symmetry. The latter possibility is realized in Fermi liquid theory, where in the IR theory the charge at every point on the Fermi surface is separately conserved, corresponding to an infinite-dimensional symmetry group.
An empirical observation that one can make is that all metals, including non-Fermi liquids seem to be compressible. Therefore, in seeking to identify a plausible IR theory for a metal, it is reasonable to demand that it should be compatible with compressibility. In particular, we can consider systems in which the compressibility is activated in the same way as in Fermi liquid theory, through an infinite-dimensional symmetry group (which for simplicity, we will assume takes the same form as in Fermi liquid theory). Such IR theories were referred to in Ref. <cit.> as “ersatz Fermi liquids”. Thus, we arrive at the main goal of this paper: to formulate a holographic model of an ersatz Fermi liquid.
What we will see is that such an approach indeed allows us to obtain a holographic model that seems to have physically reasonable properties for a metal, more so than previous holographic models. Moreover, a key advantage of our model is that unlike previous holographic models, it explicitly builds in a Fermi surface (that satisfies Luttinger's theorem).
§ REVIEW: ERSATZ FERMI LIQUIDS
§.§ Emergent symmetry, conservation laws, and 't Hooft anomaly
Fermi liquids, and hence, by definition, ersatz Fermi liquids, have an infinite-dimensional emergent symmetry group, which, in d=2 spatial dimensions, we call LU(1) <cit.>. It is an example of what mathematicians call a “loop group”. Specifically, LU(1) is the group comprising all smooth functions from the circle S^1 into U(1). [The group law applies pointwise, i.e. if f,g ∈LU(1) are functions from S^1 into U(1), then (f · g)(s) = f(s) g(s), where the right-hand side refers to multiplication in U(1)]. Roughly, the fact that LU(1) is an emergent symmetry reflects the fact that the charge at every point on the Fermi surface is individually conserved – in Fermi liquid theory this is attributed to the absence of quasiparticle scattering (that is, the interactions that would lead to such scattering are irrelevant in the RG sense). The circle S^1 represents the Fermi surface. In this paper we will parameterize the circle, and hence the Fermi surface, by a coordinate θ (all of the statements we make will hold for an arbitrary parameterization). Notice that LU(1) contains a U(1) subgroup comprising the constant functions; we can identify this with the microscopic U(1) charge conservation symmetry.
The charges of LU(1) correspond to irreducible representations, which, since the group is Abelian, are 1-dimensional. Such irreps can be labelled by real-valued distributions[That is to say, real-valued functions, except that we also allow proper distributions such as delta functions.] N(θ), such that an element f ∈LU(1) acts as a phase factor
exp( i ∫ f(θ) N(θ) dθ),
where here we view the U(1) target of elements of LU(1) as ℝ/(2πℤ). The fact that f(θ) has a mod 2π ambiguity requires us to impose the condition that ∫ N(θ) dθ is an integer to ensure that the phase factor eq:irrep is well-defined. Physically, N(θ) can be interpreted as the charge distribution on the Fermi surface, such that ∫ N(θ) dθ is the total U(1) charge. Going beyond the 1-dimensional irreps, we can define an operator-valued distribution N̂(θ) such that an element f ∈LU(1) acts on the whole Hilbert space as
exp( i ∫ f(θ) N̂(θ) dθ).
We can (roughly) think of N̂(θ) as the generators of the action of LU(1) on the Hilbert space, and viewed as observables they measure the (conserved) charge distribution on the Fermi surface. In the rest of the paper we will drop the hats on N̂(θ).
In Fermi liquid theory the emergent LU(1) symmetry has a so-called 't Hooft anomaly, meaning that there is an obstruction to gauging the symmetry. This is reflected in the fact that when a background gauge field of the LU(1) symmetry is applied, the LU(1) charge can become non-conserved. In order to explain this,
let us first define what we mean by an LU(1) gauge field. In general, a gauge field for a continuous group on a space-time M is a covariant vector field on M valued in the algebra of infinitesimal transformations of the group. Concretely, given the definition of LU(1), this suggests that an LU(1) gauge field on M is a family A_μ(θ) of covariant vector fields on M that smoothly depends on the parameter θ∈ S^1, with the gauge transformation
A_μ(θ) → A_μ(θ) + ∂_μλ(θ),
In fact, however, as pointed out in Ref. <cit.>, this is not the entire story – there is an additional wrinkle in the definition of gauge field that applies only to infinite-dimensional groups such as LU(1). One actually needs to include an additional component A_θ that transforms under gauge transformations as A_θ→ A_θ + ∂_θλ. In Fermi liquid theory, where one can talk about quasiparticles that are localized both in space and in momentum space, the spatial components of A describe the quantum phase accumulated as the quasiparticle is moved in space, while A_θ describes the quantum phase accumulated as the quasiparticle is moved along the Fermi surface in momentum space. [One can argue that A_θ is still a necessary ingredient for an LU(1) gauge field even beyond Fermi liquid theory.]
We can now make the observation that an LU(1) gauge field on M looks formally equivalent to a U(1) gauge field on a higher-dimensional space M × S^1 (one should be careful, however, about taking this analogy too far, as we will see later).
We can now state the nature of a 't Hooft anomaly of the LU(1) symmetry <cit.>. We can introduce the LU(1) current j^μ, which is a contravariant vector field on M × S^1. For example, one could define j^μ = δ S/δ A_μ, where S is the action of the system coupled to the LU(1) gauge field [in particular, in principle j includes a component j^θ; we discuss this further below.] The time component j^t can be viewed as the spatial density of the N(θ) defined above.
Then the anomaly equation takes the form
∂_μ j^μ = m/8π^2ϵ^μνλσ [∂_μ A_ν] [∂_λ A_σ].
Note that in these equations, we allow the greek-letter indices to vary not just over the directions of space-time, but also over the θ coordinate (hence how we are able to use the 4-dimensional Levi-Civita symbol ϵ, even though we began with a 3-dimensional space-time). The anomaly coefficient m is quantized to be an integer through general arguments; in single-component Fermi liquid theory it takes the values ± 1 depending on an (arbitrary) choice of orientation of the Fermi surface. Observe that this anomaly equation has the same structure as for a U(1) gauge field in a 4-dimensional space-time; for a LU(1) gauge field in 3-dimensional space-time, the Fermi surface plays the role of an “extra dimension”.
Finally, let us return to the issue of the j^θ component of the current. In order to really be able to say that the system has an LU(1) symmetry, j^θ must obey some strong restrictions. If j^θ is nonzero it implies a flow of charge along the Fermi surface. In general this will imply that the total charge N(θ) at each point on the Fermi surface is no longer conserved individually. Therefore, a system with LU(1) symmetry must obey the property that j^θ is identically zero. An exception to this could occur in the presence of a magnetic field; for example it is well known that in Fermi liquid theory, a magnetic field induces a precession of quasiparticles along the Fermi surface, which would correspond to j^θ≠ 0. This allows for the N(θ) to become non-conserved in the presence of a magnetic field. This may not be too shocking given the 't Hooft anomaly, but we note that in this case the non-conservation actually arises from the ∂_θ j^θ term in eq:conservation_eqn, not the right-hand side of eq:conservation_eqn as one might have expected.
§.§ Fermi surface and phase space magnetic field
We can write the anomaly equation eq:conservation_eqn as
∂_μ j^μ = m/(2π)^2 [ B F_θ t + ϵ^ij E_i F_θ j],
where we defined the field strength tensor F_μν = ∂_μ A_ν - ∂_ν A_μ; t denotes the time direction; i and j range over the two spatial directions; and we have defined the magnetic field B = 1/2ϵ^ij F_ij and electric field E_i = F_ti.
If we set E_i and B to be independent of θ, this will correspond to applying a background gauge field for the U(1) subgroup of LU(1). Suppose in particular that we just consider an electric field and set B=0. In that case, it is known that in Fermi liquid theory (in which we can set the anomaly coefficient m=1), the non-conservation of LU(1) charge takes the form
∂_μ j^μ = 1/(2π)^2ϵ^ij E_i ∂_θ k_j(θ),
where the vector 𝐤(θ) denotes the (vector) Fermi momentum as a function of position on the Fermi surface.
In order for eq:expanded_anomaly_eqn and eq:fermi_liquid_anomaly to agree, it appears that we must identify
F_θ j = ∂_θ k_j(θ).
The necessity of this identification was previously pointed out in Ref. <cit.> (and was somewhat implicit in Ref. <cit.>). A nice interpretation was suggested in Ref. <cit.>: since moving in θ space amounts to moving along the Fermi surface, and the Fermi surface lives in momentum space, eq:phase_space_magnetic_field reflects the non-commutativity between position and momentum coordinates, which can be encoded by a “magnetic field” in phase space.
We will take it for granted that the identification eq:phase_space_magnetic_field will continue to hold even beyond Fermi liquid theory, in any ersatz Fermi liquid.
Indeed, in a general ersatz Fermi liquid we can simply define Fermi surface in such a way that eq:phase_space_magnetic_field is identically satisfied. More precisely, suppose we consider a translationally invariant configuration of the system; in that case, we should be able to choose a gauge such that ∂_i A_θ = 0. Then eq:phase_space_magnetic_field tells us that ∂_θ [A_i(θ) - k_i(θ)]= 0, so we can define the Fermi surface momentum (up to an overall additive constant) according to k_i(θ) = A_i(θ).
§.§ Luttinger's theorem and compressibility
Suppose that our ersatz Fermi liquid, with emergent LU(1) symmetry, describes the emergent IR physics of a microscopic system that has a global U(1) symmetry, as well as either a lattice or continuous translation symmetry. Then it turns out that there is a “UV-IR” matching condition that one can derive between the properties of the IR theory and the microscopic density of the charge of the global U(1) symmetry <cit.>. In the context of Fermi liquid theory, this is known as Luttinger's theorem. In the case of continuous translation symmetry, the relation takes the form
ρ = m 𝒱_F/(2π)^2,
where ρ is the microscopic charge density, and 𝒱_F is the volume enclosed by the Fermi surface [recall from the previous subsection that in a general ersatz Fermi liquid, the Fermi surface can be defined in terms of the background LU(1) gauge field.] For a system with lattice translation symmetry, the statement instead takes the form
ν = m 𝒱_F V_unit/(2π)^2 [mod 1],
where V_unit is the volume of a translation unit cell, and the dimensionless number ν, known as the “filling”, is the average charge per unit cell.
From the above relations, we see that in the case of continuous microscopic translation symmetry, an ersatz Fermi liquid is compatible with a nonzero microscopic charge density; while in the case of discrete microscopic translation symmetry, an ersatz Fermi liquid is “compressible”, in the sense that the microscopic filling ν can be continuously tuned simply by varying the Fermi surface volume.
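As a simple illustration (ours, not part of the original argument), the Fermi-surface volume entering these relations can be computed directly from the parameterized curve 𝐤(θ) by the shoelace (Green's theorem) formula; the sketch below does this for an elliptical Fermi surface and evaluates ρ from the continuous-translation form of Luttinger's theorem.

```python
import numpy as np

# Illustrative Fermi surface: an ellipse k(theta) = (a cos(theta), b sin(theta)).
a, b, m = 1.2, 0.7, 1
theta = np.linspace(0.0, 2.0*np.pi, 20000, endpoint=False)
kx, ky = a*np.cos(theta), b*np.sin(theta)

# Area enclosed by the closed curve k(theta) (shoelace / Green's theorem):
V_F = 0.5*abs(np.sum(kx*np.roll(ky, -1) - ky*np.roll(kx, -1)))

# Luttinger relation for continuous translations: rho = m * V_F / (2 pi)^2
rho = m*V_F/(2.0*np.pi)**2
print(f"V_F = {V_F:.6f}   (exact ellipse area pi*a*b = {np.pi*a*b:.6f})")
print(f"charge density rho = {rho:.6f}")
```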
§.§ Hydrodynamics
The infinitely many conservation laws of an ersatz Fermi liquid have very strong consequences for the dynamics. In particular, Ref. <cit.> studied the dynamics in the “hydrodynamic” regime. In this regime one assumes that the system is locally in thermal equilibrium at each point in space and time. Here the concept of “thermal equilibrium” needs to take into account all the conserved quantities. Thus, the local equilibrium state will depend on the local densities of the conserved quantities N(θ), which might vary as a function of space and time. Hydrodynamics gives an equation of motion for how these densities evolve in time.
Ref. <cit.> showed that, at zero-th order in a gradient expansion, and working to linear order in the perturbation from the global equilibrium state, one obtains, in a general ersatz Fermi liquid, an equation of motion that depends only on certain thermodynamic susceptibilities ξ(θ,θ') of the conserved charges N(θ). Let us focus on the case where these susceptibilities contain only a contact term, i.e. ξ(θ,θ') = v_F(θ) δ(θ - θ'). This certainly need not be true in general (and is not even true in Fermi liquid theory when the Landau interactions are nonzero), but we will see later that it actually is what happens in our particular holographic model. In this case, the equations of motion of Ref. <cit.> reduce to
∂ n(θ)/∂ t + 𝐯_F(θ) ·∇ n(θ) = m/(2π)^2𝐄·𝐰(θ),
where 𝐄 is an applied background electric field, and we defined the vectors 𝐰(θ) and 𝐯_F(θ) according to w^i(θ) = ϵ^ij∂_θ k_j(θ) [recall that 𝐤(θ) is the Fermi momentum vector], and 𝐯_F(θ) = v_F(θ) 𝐰(θ) / |𝐰(θ)|. This happens to be (if we set m=1) the same equations of motion that one would get in a Fermi liquid with the Landau interactions set to zero.
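To make the content of eq:hydrodynamic_eqn concrete, note that for E = 0 each patch θ obeys a pure advection equation, so a disturbance drifts rigidly with velocity 𝐯_F(θ) normal to the Fermi surface. The following minimal Python sketch (ours, with arbitrary parameter values) verifies this chiral, ballistic behaviour for a single patch:

import numpy as np

# With E = 0: dn/dt + v_F dn/dx = 0, so n(x, t) = n(x - v_F t, 0).
# Spectral check for one patch drifting along x with v_F = 1.
Nx, Lbox, vF, t = 512, 40.0, 1.0, 7.3
x = np.linspace(0.0, Lbox, Nx, endpoint=False)
n0 = np.exp(-(x - 10.0)**2)                                  # initial bump
k = 2*np.pi*np.fft.fftfreq(Nx, d=Lbox/Nx)
n_t = np.fft.ifft(np.fft.fft(n0)*np.exp(-1j*k*vF*t)).real    # exact evolution
print(np.allclose(n_t, np.exp(-(x - 10.0 - vF*t)**2), atol=1e-6))   # True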
The fact that the derivation of Ref. <cit.> was based on hydrodynamics, which in turn is based on the assumption of local thermal equilibrium, suggests that there could in principle be some limitations to the validity of eq:hydrodynamic_eqn. In particular, if we consider dynamics at frequency ω, hydrodynamics does not necessarily apply when ω is larger than the inverse local thermalization time, for which ∼ T is a good guess at low temperatures in a strongly coupled system. Thus, in principle we should only expect eq:hydrodynamic_eqn to hold when ω≪ T. However, in Fermi liquid theory eq:hydrodynamic_eqn actually holds without any such restriction; we will see that this also ends up being the case in our holographic model.
§ A HOLOGRAPHIC MODEL OF AN ERSATZ FERMI LIQUID
§.§ The bulk action
We refer the reader to Ref. <cit.> for an accessible introduction to the basic framework of holographic models.
In this paper, we wish to find a bulk gravitational theory that is holographically dual to a boundary QFT that has a global LU(1) symmetry. According to the standard holographic dictionary, the way to achieve this is clear: we need the bulk theory to have a dynamical LU(1) gauge field.
As mentioned in Section <ref>, an LU(1) gauge field on a 4-dimensional space-time M can in a certain sense be thought of as a vector field A on the 5-dimensional space M × S^1.
We emphasize, however, that the S^1 should not be thought of as an additional, compactified dimension of space-time, such that, for example, the metric in the gravitational theory obeys the Einstein equations for the five-dimensional space-time. For one thing, if we formulated the holographic model in this way, it would imply that the boundary theory lives on the 4-dimensional space-time ∂ M × S^1. In particular, it would be possible to define a local energy density for the boundary theory on ∂ M × S^1. By contrast, in a metal with a Fermi surface, in general the Hamiltonian will couple different points on the Fermi surface without any regard to locality, so there is no such notion of a local energy density on ∂ M × S^1 – only a local charge density.
These considerations suggest that we should instead identify the 4-dimensional manifold M as the space-time manifold, and require that the metric in the bulk gravitational theory obeys the Einstein equations on M.
A related subtlety is that we need to make sure that the gauge field in the bulk really can be interpreted as an LU(1) gauge field on M, rather than a U(1) gauge field on M × S^1. According to the discussion at the end of Section <ref>, this means that the action must have the property that j^θ = δ S/δ A_θ is identically zero (at least in the absence of a magnetic field). If we just wrote down the Maxwell action for a U(1) gauge field on M × S^1, it would not satisfy this property.
Instead, we will employ a Maxwell action of the form
S_Maxwell = -1/4∫_M × S^1√(-g)√(g_θθ) f_μν f^μν d^4 x dθ,
f_μν = ∂_μ a_ν - ∂_ν a_μ,
where here the Greek letters range over the 4 dimensions of the space-time manifold M, but not over the θ direction (we will follow this index convention throughout the rest of the paper).
Here g with no subscripts refers to the metric on the 4-dimensional space-time, which obeys the Einstein equations (and √(-g) is the square-root of its determinant). We have also found it convenient to introduce g_θθ, which is the component of the metric along the θ direction. This does not obey the Einstein equations; instead we will just assume that g_θθ is some function of θ (independent of the space-time coordinate), which forms part of the parameters of the model. We have also absorbed an overall coupling constant that could appear in front of eq:maxwell into g_θθ.
As a side note, let us remark that there are some intriguing suggestions <cit.> that for a system with a global U(1) symmetry on a space-time ∂ M × S^1, with a 't Hooft anomaly described by eq:conservation_eqn with m≠ 0, the condition j^θ = 0 may in fact be enforced automatically once one applies the “phase-space magnetic field” eq:phase_space_magnetic_field, so that the global symmetry gets upgraded to LU(1) automatically. Ref. <cit.> only considered systems of non-interacting fermions, but if the result does hold more generally, it would suggest that in a holographic model we could just take the gauge field in the bulk theory to be a U(1) gauge field on M × S^1, which would mean we could use the usual Maxwell action for a U(1) gauge field rather than eq:maxwell. We leave exploration of this possibility for future work.
Next, it is also necessary to take into account the 't Hooft anomaly of LU(1). The standard way to implement a 't Hooft anomaly in the dual boundary theory is to include a Chern-Simons term for the bulk dynamical gauge field <cit.>. In particular, the anomaly equation eq:conservation_eqn is obtained at the boundary of the 5D Chern-Simons term
S_CS = m/24π^2∫_M × S^1 a ∧ da ∧ da.
Note that, strictly speaking, the Chern-Simons term is not well-defined on a manifold with boundary, unless one imposes specific boundary conditions. However, the difference in the action between two gauge-field configurations that have the same values on the boundary ∂ M × S^1 is well-defined, since this is equivalent to evaluating the Chern-Simons term on a closed manifold. For our purposes this will mostly be sufficient, but it will cause some difficulties in defining the relation between the bulk fields and the currents in the dual boundary theory, since according to the holographic dictionary, these are defined through variations of the bulk partition function with respect to the boundary values of the gauge field. We return to these issues in Section <ref>.
In summary, the dynamical gauge fields in the bulk are the metric g and the LU(1) gauge field a, and the total action is given by
S[g,a] = S_CS + S_Maxwell + S_EH,
where the Chern-Simons action S_CS and the Maxwell action were defined above, and S_EH is the usual Einstein-Hilbert action for the metric:
S_EH = 1/(2κ^2)∫_M√(-g)(R + 6/L^2) d^4 x,
where R is the Ricci scalar computed from the metric and the 6/L^2 term encodes the (negative) cosmological constant.
Note that since the Maxwell action does not depend on a_θ, it is not possible to treat a_θ as a dynamical field in the bulk. Instead, we will just treat it as a fixed background.
§.§ Boundary conditions and identification of the currents in the dual QFT
To properly define the holographic correspondence, one needs to carefully consider the boundary conditions. Let us first observe that the classical equations of motion for the metric admit a solution which is asymptotically AdS_4 near the boundary. We will adopt a coordinate system in which the asymptotic metric can be expressed as
ds^2 = L^2/r^2 (-dt^2 + dx^2 + dy^2 + dr^2),
where the boundary is located at r=0.
Next we need to consider the asymptotic solutions for the LU(1) gauge field a near r=0. Here our task is complicated by the presence of the Chern-Simons term in the action. For example, in the case of a U(1) gauge field in AdS_3 with a Chern-Simons term ∼∫ a ∧ da, understanding the boundary conditions for holography becomes a somewhat involved topic,
see Ref. <cit.>. Fortunately, our task here is easier because in our case (unlike in the case of Maxwell-Chern-Simons in AdS_3), one finds that the solutions have the same asymptotic scaling as r → 0 with or without the Chern-Simons term, namely
a_μ = a^(0)_μ + a^(1)_μ r + ⋯,
although the constraints on the coefficients a^(0) and a^(1) from the equations of motion may differ depending on the presence of the Chern-Simons term.
[To see that the solutions always have the asymptotic form eq:a_asymptotic, just observe that with the metric eq:ads4, the equations of motion do not have any singularity at r=0, hence the solutions must be analytic functions of r at r=0.] This suggests that the holographic dictionary for a bulk Maxwell theory without a Chern-Simons term should simply carry over; that is, we should identify a_μ^(0) as the background gauge field applied in the dual boundary theory, while a_μ^(1) is the expectation value of the current operator in the boundary theory.
To make this argument more precise, first observe that in defining the action of the bulk theory, the asymptotic form eq:a_asymptotic ensures that it will not be necessary to introduce any counterterms on the boundary to cancel divergent contributions at r=0, as is sometimes necessary in defining holographic duality.
However, another difficulty arises from the fact that to properly define the action, we need to define the Chern-Simons term in the presence of boundary, which has a certain ambiguity.
In this paper, we will seek to sidestep the issue in the following way. Suppose we consider two copies of our system, with opposite sign of the anomaly coefficient m, and we impose that the background gauge field A felt by the two copies should be the same. Then the combined system is dual to two copies of the gravitational theory, with opposite signs of the Chern-Simons level m in eq:CS, but with identical boundary values of the bulk gauge field a. Due to the different value of m, the bulk fields will evolve differently in the two copies. But the sum of the contributions to the action from the Chern-Simons terms of the two copies will not suffer from the ambiguity of a single copy, since evaluating this term is equivalent to evaluating the Chern-Simons action on a closed manifold obtained by gluing the two space-time manifolds together at their boundary. The doubled theory is only sensitive to responses of the original theory that are even under changing the sign of the anomaly coefficient m. Observe that in a microscopic lattice model of a metal, acting with a unitary particle-hole (i.e. “charge conjugation”) operator on the microscopic Hamiltonian will lead to an opposite value of m in the low-energy emergent theory without affecting the location of the Fermi surface. Therefore, we expect that any response that is even under such a particle-hole transformation, such as the linear electrical conductivity, will indeed be even under changing the sign of m. Responses that are odd under a particle-hole transformation, and hence under a change of sign of m, cannot be captured by the doubled theory, and would likely require more careful attention to the boundary conditions for the Chern-Simons term.
In any case, let us consider how to identify the currents of the dual boundary theory in the doubled system. First we observe that if we introduce the variation δ a of the gauge field, then by integrating by parts we see that the variation of the Maxwell term eq:maxwell (in one of the copies) takes the form
∫_∂ M d^3 x ∫ dθ √(-g)√(g_θθ) δ a_μ f^r μ + ∫_M d^4 x ∫ dθ δ a_ν ∂_μ[√(-g)√(g_θθ) f^μν].
If we impose the classical equations of motion, then by definition the second term in eq:maxwell_variation has to cancel the variation of the Chern-Simons term (one can verify that there is no boundary contribution coming from the Chern-Simons term in the doubled theory). Therefore, by taking the functional derivative with respect to A_μ, the current in the doubled theory is just given by the sum of the contributions from the first term of eq:maxwell_variation in the two copies, which gives:
j^μ = -δ S/δ A_μ = -√(-g)√(g_θθ) (f_(1)^r μ + f_(2)^r μ) |_r=0
where the subscripts (1) and (2) refer to the fields in the two copies. This suggests that one should identify the current in the undoubled theory (modulo the caveats discussed above) as
j^μ = -√(-g)√(g_θθ) f^r μ|_r=0
(which is the same as it would be in the absence of the Chern-Simons term).
Observe that the classical equations of motion in the bulk, eq:classical_eqs imply that this current obeys the anomalous conservation equation eq:conservation_eqn, with j^θ = 0.
§.§ The equilibrium solution in the bulk
To describe the equilibrium properties of the system, we want to consider the dual QFT with global LU(1) symmetry at zero charge density (recall that if our theory represents the IR effective theory for some UV theory at nonzero charge density, this nonzero charge density is reflected in the emergent symmetry and anomaly of the IR theory, not its charge density).
Moreover, we will switch off all components of the background LU(1) gauge field, except that we still need to set A_i(θ) = k_i(θ), where the spatial vector 𝐤(θ) represents the Fermi surface momentum. In the gravitational theory this translates into the boundary condition for the bulk gauge field a. Recall that the necessity of including this “phase space magnetic field” was discussed in Section <ref>.
In this case, the solution of the classical equations of motion in the bulk is as follows. Firstly, the AdS_4 metric eq:ads4 holds in the entire space-time, i.e. for all r ≥ 0. Secondly, in the coordinate system in which the metric takes the form eq:ads4, the LU(1) gauge field has components a_x(θ) = k_x(θ), a_y(θ) = k_y(θ) (independently of x, y, r, and t), and the other components are zero. [Note that, while this gauge field has non-trivial gauge curvature F_θ i, from the Maxwell action eq:maxwell one sees that this component of the gauge curvature does not actually contribute to the stress tensor, hence the AdS_4 metric remains a solution to Einstein's equations.]
The remainder of this paper will be devoted to computing responses of the dual QFT by considering perturbations to the equilibrium solutions. In order to make progress, we will only consider linear responses; this will allow us to linearize the equations of motion about the equilibrium solution.
§ RESULTS
§.§ A preliminary remark: the UV cutoff scale
In this section we will present the results of solving the linearized classical equations of motion in the bulk. There is one point that needs to be kept in mind when interpreting these results, as follows. With respect to a physical lattice model of a metal, the model of an ersatz Fermi liquid that we have constructed is only supposed to be the effective IR theory. This places limitations on the regime in which the results we obtain will be meaningful. Specifically, we should focus on the response at frequency ω, wavevectors 𝐪, and temperature T, such that |ω|,|𝐪|,T are much smaller than some cut-off scale.
As we will see, the solutions that we obtain appear to have a characteristic scale u, where
u ∼|m| |∂_θ𝐤(θ)|/√(g_θθ).
For example, if we assume an isotropic Fermi surface such that 𝐤(θ) = k_F (cosθ, sinθ) and α := g_θθ^-1/2 is independent of θ, then we have
u ∼ |m| α k_F.
Thus, in this paper we will focus on the results in the regime |ω|,|𝐪|,T ≪ u. In other words, our goal will be to characterize the effective field theory that emerges in the deep IR at scales below u.
One could ask whether the results obtained in the holographic model are still meaningful for scales above u. We expect that the Fermi wavevector k_F will place an upper bound on the scales for which the holographic model can be a useful description of the original microscopic lattice model. However,
a condition for the electrodynamics of the bulk theory to be weakly coupled (say in the isotropic case, so that eq:isotropic_u holds) is that α≪ 1. Therefore, if m ∼ 1 then u ≪ k_F. The holographic model could thus conceivably describe meaningful physics on scales greater than u. However, we will not focus on this regime in the current work.
§.§ Charge responses at zero temperature
The linearized equations of motion for the LU(1) gauge field a obtained from the action eq:total_action do not contain any derivatives with respect to θ. Therefore, they can be solved independently at each θ. Moreover, as the linearized equations of motion for the case of an AdS_4 metric turn out to be a system of ODEs with constant coefficients, they can be solved analytically in a straightforward way. However, as the form of the solution ends up being somewhat complicated in the general case, we will focus on the behavior for |ω|, |𝐪| ≪ u as previously discussed in Section <ref>. In that case, we show in Appendix <ref> that one finds for the currents in the boundary theory in response to an applied background gauge field[We have chosen the branch of the square root such that for real ω and 𝐪, we have √(-ω^2 + q_⊥^2) = -i sgn(ω) √(ω^2 - q_⊥^2) when |ω| > |q_⊥|, while we just take the positive square root for |ω| < |q_⊥|.]:
⟨ j^t ⟩ = ⟨ j^⊥⟩ = m|∂_θ𝐤(θ)|/(2π)^2 [i/(ω - q_⊥)] E_⊥ - i √(g_θθ) (ω + q_⊥) q_∥/[(ω - q_⊥) √( -ω^2 + q_⊥^2 )] (E_∥ + B) + ⋯,
⟨ j^∥⟩ = -i √(g_θθ) (ω+q_⊥)/√(-ω^2+q_⊥^2) (E_∥ + B) + ⋯,
where we defined the electric field E_i = -i(q_i A_t + ω A_i); and the magnetic field B = i(q_x A_y - q_y A_x). Here we have written the spatial components of the vectors in terms of the components perpendicular to (⊥) and parallel to (∥) the Fermi surface: that is,
q_⊥ = q_i w^i(θ)/√(w^j(θ) w_j(θ)), and similarly for the other vectors and components,
where Roman letter indices such as i take values in the two spatial dimensions, and we have defined w^i(θ) = ϵ^ij∂_θ k_j(θ) as before. To raise and lower spatial indices, we use the unit metric in the coordinate system (x,y) in which the bulk metric takes the form eq:ads4, i.e. the metric ds^2 = dx^2 + dy^2 (which is not the same as the bulk metric evaluated at the boundary, whose components diverge). This is also the metric that we use to evaluate |∂_θ𝐤(θ)| = √(w^i(θ) w_i(θ)) in eq:jtjperp.
In writing Eqs. (<ref>) and (<ref>) we have assumed that the Chern-Simons level m is positive; there are similar equations for m < 0 but with different signs.
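For readers who wish to evaluate Eqs. (<ref>) and (<ref>) numerically, the following short Python transcription may be useful (a sketch of ours, not part of the original derivation; the function names are arbitrary, and |∂_θ𝐤|, √(g_θθ) and m enter simply as numbers):

import numpy as np

def branch_sqrt(omega, q_perp):
    # Branch convention quoted in the footnote above.
    arg = -omega**2 + q_perp**2
    return np.sqrt(arg) if arg >= 0 else -1j*np.sign(omega)*np.sqrt(-arg)

def currents(omega, q_perp, q_par, E_perp, E_par, B, dk, sqrt_g, m=1.0):
    # p = -1 and p = 0 terms of <j^t> = <j^perp> and of <j^par> at one theta.
    s = branch_sqrt(omega, q_perp)
    j_t = (m*dk/(2*np.pi)**2)*1j/(omega - q_perp)*E_perp \
          - 1j*sqrt_g*(omega + q_perp)*q_par/((omega - q_perp)*s)*(E_par + B)
    j_par = -1j*sqrt_g*(omega + q_perp)/s*(E_par + B)
    return j_t, j_t, j_par             # <j^t> = <j^perp> at this order

# The response grows as omega approaches q_perp: the chiral pole at omega = q_perp.
for omega in (0.30, 0.45, 0.49):
    print(abs(currents(omega, 0.5, 0.1, 1.0, 0.0, 0.0, dk=1.0, sqrt_g=0.1)[0]))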
Let us be more precise about what we mean by the “⋯” in Eqs. (<ref>) and (<ref>). One can argue from the general structure of the equations of motion (see Appendix <ref>) that the currents in linear response can be written as
⟨ j^μ⟩ = √(g_θθ)𝒥^μν (ω,𝐪,u) A_ν,
where the function 𝒥 depends only on its explicit parameters ω, 𝐪 and u, and we have defined
u = m |∂_θ𝐤(θ)|/[(2π)^2 √(g_θθ)].
Then we can expand 𝒥^μν in a power series in 1/u:
𝒥^μν(ω,𝐪,u) = ∑_p=-1^∞𝒥^μν_(p)(ω,𝐪) u^-p.
Dropping the “⋯” terms in Eqs. (<ref>) and (<ref>) corresponds to keeping only the p=0 and p=-1 terms in this expansion.
The first term in eq:jpar and the second term in eq:jtjperp correspond to p=0 in eq:u_expansion, while the first term in eq:jtjperp corresponds to p=-1.
In the language of the renormalization group, u serves as a UV cutoff for an effective field theory, and the p>0 terms will describe the effect of irrelevant operators, corresponding to the fact that they go to zero as ω/u, 𝐪/u go to zero. The fact that there is a p=-1 term as well as a p=0 term is likely analogous to the following statement in Fermi liquid theory: when one defines the appropriate RG scaling, the effective theory contains a parameter k_F / Λ, where Λ is the momentum cutoff scale, which flows to infinity under the RG flow. In this sense, Fermi liquid theory is not, strictly speaking, a fixed-point under RG [in which case one would have expected only the p=0 term to be present in the expansion eq:u_expansion] but rather a one-parameter trajectory. As this behavior is tied to the fact that the low-energy excitations live on the Fermi surface rather than at zero momentum, one should expect a similar property to be true in our holographic model as well.
If we keep only the leading-order term, i.e. the p=-1 term in eq:u_expansion, in which case only the first term in eq:jtjperp remains, then this exactly agrees with the result that would obtain from the hydrodynamic equation of motion eq:hydrodynamic_eqn, with the Fermi velocity v_F equal to the speed of light c in the bulk theory (set to 1 in our units). In particular, the pole at ω = q_⊥ indicates a gapless propagating mode with velocity v_F = 1, but one which is chiral and directional since it can only move in one direction, perpendicular to the Fermi surface.
In particular, as we noted in Section <ref>, this is the same result that would obtain in Fermi liquid theory, with the Landau interactions set to zero. [However, one should not view this result as suggesting that our theory is somehow “weakly coupled” like Fermi liquid theory, because as described in Section <ref>, the equation of motion eq:hydrodynamic_eqn can be viewed as a general consequence of hydrodynamics, taking into account the conserved quantities associated with the LU(1) symmetry.]
Meanwhile, the p=0 terms in the expansion have no analog in Fermi liquid theory and reflect non-Fermi liquid behavior.
Let us consider some particular limits of the general expressions Eqs. (<ref>) and (<ref>). First of all, we compute the static susceptibility χ(θ,θ') for the N(θ) charges, which is defined by
χ(θ,θ') := lim_𝐪→ 0lim_ω→ 0δ⟨ j^t(θ') ⟩/δ A_t(θ) (ω, 𝐪).
From eq:jtjperp we find
χ(θ,θ') = m |∂_θ𝐤(θ)|/(2π)^2 δ(θ - θ').
In particular, we find that the total charge compressibility (i.e. the susceptibility of the total U(1) charge) is given by
χ = ∬χ(θ,θ') dθ dθ' = m/(2π)^2ℓ_F > 0,
where ℓ_F = ∫ |∂_θ𝐤(θ)| dθ is the total length of the Fermi surface. The condition χ > 0 is often used as a definition of “compressibility”. In general this need not be equivalent to the definition of compressibility we gave in Section <ref> and in the introduction, but in this model we find that the system is compressible in both senses.
Another interesting case to look at is the regime of optical conductivity, where we set B=0 and then take the limit of 𝐪→ 0 at fixed ω. Then Eqs. (<ref>) and (<ref>) (upon dropping the “⋯”) become
⟨ j^t ⟩ = ⟨ j^⊥⟩ = m|∂_θ𝐤(θ)|/(2π)^2i/ω E_⊥,
⟨ j^∥⟩ = √(g_θθ) E_∥.
Recall that these are the contributions to the currents from a particular point on the Fermi surface. To get the total current, we have to integrate over the whole Fermi surface; we assume that the electric and magnetic fields 𝐄 and B are background gauge fields of the U(1) symmetry, which is to say that they are independent of θ. One finds that the total charge density is zero, while the total current density is given by
⟨ j^i ⟩ = σ^ij(ω) E_j,
with the conductivity tensor σ(ω) of the form
σ(ω) = 𝒟i/ω + σ_inc,
with the “Drude weight”
𝒟^ij = m/(2π)^2∫w^i(θ) w^j(θ)/|𝐰(θ)| dθ,
and
the frequency-independent “incoherent conductivity”
σ_inc^ij = ∫v^i(θ) v^j(θ)/|𝐰(θ)|^2√(g_θθ) dθ,
where we defined w^i(θ) = ϵ^ij∂_θ k_j(θ) and v_i(θ) = ∂_θ k_i(θ), and we use the unit metric to raise and lower spatial indices, as described above.
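As a simple cross-check of these expressions (a numerical sketch of ours, with arbitrary values of k_F, √(g_θθ) and m), one can evaluate both integrals by quadrature for an isotropic Fermi surface with constant g_θθ, where they reduce to 𝒟^ij = m k_F/(4π) δ^ij and σ_inc^ij = π√(g_θθ) δ^ij:

import numpy as np

# Circular Fermi surface k(theta) = kF (cos th, sin th), constant g_thth.
m, kF, sqrt_g = 1.0, 1.2, 0.1
theta = np.linspace(0.0, 2*np.pi, 20001)
v = kF*np.array([-np.sin(theta), np.cos(theta)])   # v_i = d k_i / d theta
w = np.array([v[1], -v[0]])                        # w^i = eps^{ij} v_j
absw = np.linalg.norm(w, axis=0)

D = m/(2*np.pi)**2*np.trapz(w[:, None, :]*w[None, :, :]/absw, theta, axis=-1)
sigma_inc = sqrt_g*np.trapz(v[:, None, :]*v[None, :, :]/absw**2, theta, axis=-1)

print(np.allclose(D, m*kF/(4*np.pi)*np.eye(2)))          # expect True
print(np.allclose(sigma_inc, np.pi*sqrt_g*np.eye(2)))    # expect True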
Note that, as can be seen from Eqs. (<ref>) and (<ref>), the two terms appearing in the conductivity eq:sigma_omega have physically different origins – the first term comes from the current that, at each point of the Fermi surface, flows perpendicular to the Fermi surface; while the second term comes from the current that flows parallel to the Fermi surface. In a Fermi liquid the current only ever flows perpendicular to the Fermi surface, so this is another reflection of non-Fermi liquid behavior. Also, it is apparent from the solutions described in Appendix <ref> that the first term, which is non-dissipative, arises from a bulk mode which decays exponentially with r away from the boundary, while the second term, which is dissipative, arises from a mode which does not decay exponentially with r. This makes sense because in the bulk theory one can think of the energy lost in a dissipative process as falling into a black hole located (in the limit of zero temperature) at r=∞, so a mode that decays exponentially with r never reaches r=∞ and hence will always be non-dissipative.
§.§ Nonzero temperature
To describe the model at nonzero temperature, we just need to replace the time direction of space-time by a compact Euclidean direction <cit.>. As in Section <ref>, one finds that in the equilibrium state, the LU(1) gauge field does not enter into the equations of motion for the metric. As a result, the AdS_4 metric eq:ads4 will simply be replaced by a thermal metric that has the same form as for a theory dual to a strongly-coupled (2+1)-D CFT, namely
ds^2 = L^2/r^2[ f(r) dτ^2 + 1/f(r) dr^2 + dx^2 + dy^2 ],
with
f(r) = 1 - ( r/r_+)^3,
and r_+ determined in terms of the temperature T by
r_+ = 3/4π1/T.
This reduces to asymptotic (Euclidean) AdS_4 near the boundary, r → 0, but the space-time ends at r=r_+, corresponding to a Euclidean version of a black hole event horizon.
The equations of motion for the LU(1) gauge field with the metric eq:ads_thermal are no longer analytically solvable. However, we expect that the p=-1 term in the expansion eq:u_expansion [that is, the Fermi-liquid-like term in eq:jtjperp] will remain roughly unchanged for T ≪ u. The reason is that this term arises from a mode that exponentially decays in the bulk for r ≳ u. Meanwhile, the thermal metric eq:ads_thermal only differs appreciably from the Euclidean version of the zero-temperature metric when r ≳ T^-1. Therefore, if T ≪ u the mode should be unaffected by the nonzero temperature.
By contrast, the subleading contributions will likely be affected by nonzero temperature. Let us focus specifically on the optical conductivity. The “Drude” part of the optical conductivity, i.e. the first term in eq:sigma_omega, should be unaffected for T ≪ u for the reasons described above. Meanwhile,
one can check that if one sets 𝐪=0, then the a_∥ component of the gauge field decouples from a_⊥ and a_t, and obeys the same equation of motion as a U(1) gauge field with a Maxwell action. Since it is the a_∥ component that is responsible for giving rise to the σ_incoherent term in eq:sigma_omega, this σ_incoherent will have the same dependence on ω and T as in a holographic model of a (2+1)-D CFT at zero charge density, in which the bulk theory just has a U(1) gauge field with the Maxwell action and the metric eq:ads_thermal. One can show <cit.> that in fact, this always has the form
σ_incoherent(ω,T) = σ_0,
i.e. a constant independent of ω and T. This, however, is due to the special property of the self-duality of the Maxwell action and in general will not be the case if one introduces additional terms in the bulk action <cit.>. But more generally, the conductivity will obey the scale-invariance property of a quantum critical point in two spatial dimensions, i.e.
σ_incoherent(ω,T) = f(ω/T),
for some scaling function f.
§ INTERPRETATION: WHAT IS THE GRAVITATIONAL THEORY DUAL TO?
A difficulty with holographic models is that if, as we are doing here, one simply postulates an action for a bulk gravitational theory, it may be rather obscure what is the nature of the dual QFT. Nevertheless, in this instance we feel we are able to make a fairly good guess. The key observation is that, as we noted in Section <ref>, the metric takes the AdS_4 form eq:ads4 throughout the entire bulk space-time, not just asymptotically near the boundary. This is the same form that one would expect for a quantum field theory that is dual to a strongly coupled (2+1)-D CFT (at zero charge density) in some large-N limit. However, the charge response that we found in Section <ref> does not take the form that one would expect in such a CFT. On the other hand, if we compute, for example, the entropy density as a function of temperature, then the entropy density will be dominated by the gravitational contribution coming from the black hole in the metric eq:ads_thermal, and therefore will have the same scaling with temperature as in such a CFT.
This motivates us to make the following proposal for the dual QFT, in the case where we set the Chern-Simons level m (and hence, the anomaly coefficient of the dual QFT) equal to one: it corresponds to the IR effective theory resulting from coupling a spinless single-component Fermi liquid to a large-N strongly coupled CFT. The fluctuations of the CFT will destroy the quasiparticles of the Fermi liquid, leading to a non-Fermi liquid, while still (one presumes) preserving the global LU(1) symmetry, at least in an emergent sense. Meanwhile, since m ∼ 1 but N ≫ 1, the Fermi liquid does not have enough degrees of freedom to significantly backreact on the CFT, corresponding to the statement in the dual theory that the bulk metric in equilibrium is unaffected by the LU(1) gauge field. Furthermore, the entropy density of the CFT will scale with some power of N, and therefore in the large-N limit will dominate over any contribution from the Fermi surface.
The picture described above is also very reminiscent of the “semi-holographic” picture <cit.> that was developed in the context of some previous holographic models. In these models there is a small Fermi surface that does not satisfy Luttinger's theorem on its own. It was argued that the physics can be understood in terms of a Fermi liquid with the small Fermi surface coupled to a strongly coupled sector that contains most of the charge. By contrast, in the picture described above, the Fermi surface does contain all of the charge and the strongly coupled sector is at zero charge density.
One can compare this picture with other routes to obtaining non-Fermi liquids. For example, in Hertz-Millis type theories <cit.>, one couples a Fermi liquid to a free boson rather than a strongly coupled CFT; all the strong-coupling physics in such theories comes from the boson-fermion interactions.
Finally, let us also remark on the distinction with the SYK-inspired “large-N random-flavor” models described in Ref. <cit.>, which are large-N deformations of Hertz-Millis models. These seem to be natural candidates to have a holographic dual; for example, it has been argued that these models exhibit maximal quantum chaos in the large-N limit <cit.>. However, the holographic model described in this paper cannot be dual to these theories. For one thing, in the random-flavor models one sends the number of fermion species [and hence, the anomaly coefficient m for the LU(1) symmetry] to infinity. Meanwhile, in our holographic model we are free to just set m=1. Moreover, in the random-flavor models, in general the LU(1) charges will always have diverging susceptibilities in certain channels <cit.>, while in our holographic model the susceptibility remains finite, see eq:chi_theta. Finally, we note that in these models one does not expect to have any current flowing in the direction parallel to the Fermi surface in the fixed-point theory <cit.>, in contrast to what we found in Section <ref>.
§ ENTANGLEMENT ENTROPY AND CHARGE FLUCTUATIONS
A famous property of Fermi liquid theory <cit.> in d spatial dimensions is that the entanglement entropy in the ground state in a spatial region M scales like ∼ L^d-1log L, where L is a characteristic length scale of M; thus, the usual area law for entanglement entropy is violated logarithmically. One might ask whether our holographic model obeys the same property.
In holography, it is believed <cit.> that if the gravitational theory is sufficiently weakly coupled, such that one can ignore quantum fluctuations of the area, the entanglement entropy of the dual QFT in a spatial region M is given by
S(M) = 2π/κ^2 A(𝒳) + S_ent(𝒳),
where κ is the gravitational constant appearing in the Einstein-Hilbert action eq:einstein_hilbert; 𝒳 is a codimension 1 surface in an equal-time slice of the bulk space-time, such that the boundary of 𝒳 coincides with the boundary of M; A(𝒳) is the area of 𝒳 computed according to the metric of the bulk gravitational theory; and S_ent(𝒳) is the entanglement entropy of the bulk quantum fields in the region delimited by the surface 𝒳.
One is supposed to choose the extremal surface, i.e. the surface which minimizes the right-hand side.
In order for the bulk theory to be weakly coupled, one is supposed to send κ→ 0. Therefore, in this limit, the first term of eq:entropy_formula will dominate and one recovers the so-called “Ryu-Takayanagi” formula <cit.>. In this limit, the entanglement entropy is solely determined by the minimal area surfaces in the gravitational theory. Since in our model, with d=2, the metric takes the same form eq:ads4 as in theories dual to a (2+1)-D CFT, it follows that the contribution to the entanglement entropy coming from the first term of eq:entropy_formula will obey the area law, S(M) ∼ L = L^d-1.
However, it is still possible, and indeed we believe very likely, that there will be a ∼ L^d-1log L contribution to the entanglement entropy coming from the second term in eq:entropy_formula and in particular from the entanglement of the bulk LU(1) gauge field. [Note that this would imply that the κ→ 0 and L →∞ limits do not commute for the entanglement entropy.] This is consistent with the picture of Section <ref>, in which one indeed expects the fermion contribution to the entanglement entropy to be subleading in 1/N compared to the contribution from the strongly coupled QFT. We will not attempt to compute this contribution to the entanglement entropy in the current work. Instead, we will consider a related quantity, namely the charge fluctuations.
Let Q_M be the operator that measures the total U(1) charge in the region M. Then we can consider the variance (Δ Q_M)^2 := ⟨ Q_M^2 ⟩ - ⟨ Q_M ⟩^2. In Fermi liquid theory, it turns out <cit.> that (Δ Q_M)^2 ∼ L^d-1log L. This result tells us something about the correlations between M and its complement, because at zero temperature the fluctuation of the total charge of the ground state is zero, so (Δ Q_M)^2 > 0 shows that the region M and its complement must be correlated. Indeed, the fact that the charge fluctuations have the same scaling as the entanglement entropy suggests that the correlations between M and its complement, which the entanglement entropy measures, are dominated by the charge fluctuations. Heuristically, one can view the fact that charge fluctuations grow faster than area law as related to the fact that (clean) Fermi liquids have zero DC resistivity in the limit of zero temperature, so it is very easy for the charge to “slosh around”, as opposed to being bound locally in place as it would be in an insulator.
To compute the charge fluctuations in our holographic model, we can use the fluctuation-dissipation theorem to express the connected correlator ⟨ n(𝐪) n(-𝐪) ⟩_c [or more generally, the θ-resolved correlator ⟨ n(𝐪,θ) n(-𝐪,θ') ⟩_c] in terms of the retarded Green's function G^R_n(𝐪,θ) n(-𝐪,θ') (ω), which can be derived from the results in Section <ref>. In the spirit of the renormalization group, the leading contribution to the equal-time correlator as 𝐪→ 0 [and hence, the leading contribution to (Δ Q_M)^2 as L →∞] should come from the most relevant operator. Therefore, we will keep only the p=-1 term in the expansion eq:u_expansion. Observe that this term has exactly the same form as one would find in a non-interacting Fermi gas. Therefore, one expects to get the same result for (Δ Q_M)^2 as in a non-interacting Fermi gas.
In a non-interacting Fermi gas, it has been shown that the coefficient of L^d-1log L can be obtained exactly and has an elegant geometric expression <cit.>. Suppose that M is obtained by scaling a region Γ⊆ℝ^d by a factor of L. Then one finds that
(Δ Q_M)^2 = λ_Γ L^d-1log L + o(L^d-1log L),
with[Ref. <cit.> has an additional factor of log 2 in this formula, but this is presumably an error; it does not appear in subsequent papers on the topic <cit.>.]
λ_Γ = m/(2π)^d+1∫_∂Γ dA_x ∫_ℱ dA_k |𝐧_x ·𝐧_k|.
where m is the multiplicity of the Fermi surface (i.e. the number of bands which have a Fermi surface at the same location), ∫_∂Γ dA_x and ∫_ℱ dA_k denote surface integrals, ℱ is the Fermi surface in momentum space, and 𝐧_x and 𝐧_k are the local unit normal vectors to the respective surfaces. We show in Appendix <ref> that Eqs. (<ref>) and (<ref>) are indeed precisely what we get from the retarded Green's function computed in Section <ref>, keeping only the p=-1 term in the expansion eq:u_expansion.
In non-interacting Fermi gases, there are stronger results one can show regarding charge fluctuations. In particular <cit.>, all the higher cumulants of Q_M fail to pick up any ∼ L^d-1log L contribution and hence are suppressed relative to the variance (Δ Q_M)^2 as L →∞. In other words, the charge fluctuations obey an approximately Gaussian distribution as L →∞.
It would be interesting to verify whether or not this holds in our holographic model. This would require computing nonlinear responses.
§ A VARIANT MODEL
In this section we will briefly consider a variant of our original model that seems to have some very interesting properties. We will leave further investigation for future work. The idea is that we take the Fermi surface metric g_θθ that appears in the Maxwell Lagrangian to be determined dynamically by the LU(1) gauge field according to
g_θθ = 1/K^2 f_θμ f_θ^μ,
where f_θμ = ∂_θ a_μ - ∂_μ a_θ, μ is summed over the space-time indices, and K is some constant with dimensions of inverse length (in order for the bulk electrodynamics to be weakly coupled, we should have K ≪ k_F). Observe that we do not have very much freedom in choosing eq:generalized_gthth; for example, if we raised the right-hand side to some power other than 1, it would no longer satisfy the property that the Maxwell action is invariant under reparameterizations of θ, as it ought to be since θ is an arbitrary parameterization of the Fermi surface.
In the linearized theory, one can simply substitute the equilibrium value of the gauge field, as described in Section <ref>, into eq:generalized_gthth. However, g_θθ will still have a non-trivial dependence on the coordinate r, since one has to use the bulk space-time metric to raise the μ index in eq:generalized_gthth.
Let us consider the implications of imposing eq:generalized_gthth.
For generality, we can also consider a general spatial dimension d for the boundary theory, in which case the Fermi surface should be taken to be some (d-1)-dimensional manifold ℱ.
Thus, in our bulk Lagrangian we include a (2d+1)-dimensional Chern-Simons term
m ∫_ℱ× M A (dA)^d
(up to some normalization factor). We can also generalize eq:generalized_gthth to higher dimensions according to
g_ab = 1/K^2 f_a μ f_b^μ,
where μ is again summed over space-time indices, and a and b are indices of a coordinate chart for ℱ. Then we can generalize the Maxwell action eq:maxwell by replacing g_θθ with the square-root of the determinant of the Fermi surface metric eq:more_generalized_gthth.
We again take the equilibrium value of the gauge field to be given by a_i(θ) = k_i(θ), where θ is now a point on ℱ.
We choose a metric of the form
ds^2 = L^2( -𝒞/r^2z dt^2 + 1/r^2(dx_1^2 + ⋯ + dx_d^2 + dr^2) ).
Here for generality we have introduced a dynamical critical exponent z, which reflects the possibility for space and time to scale differently. eq:zmetric does not obey the vacuum Einstein equation for z≠ 1, so obtaining such a metric requires one to introduce additional fields in the bulk as usual <cit.>.
The linearized equations of motion in this version of the model are more intractable to solve analytically than they were in Section <ref>. Therefore, we will focus only on some special cases. In particular, we will set 𝐪=0. Then the equations of motion separate into two decoupled systems: one for the components of the gauge field parallel to the Fermi surface, and one for a_t and the component perpendicular to the Fermi surface. With the metric eq:zmetric, the equations of motion for a_⊥ and a_t take the same form as they would for a U(1) gauge field in 3 space-time dimensions with the metric
ds^2 = L^2 ( -𝒞/r^2z dt^2 + 1/r^2(dx^2 + dr^2) ).
regardless of the spatial dimension d we started with, where the action for the U(1) gauge field in 3 space-time dimensions contains a Maxwell term and a Chern-Simons term.
In particular, the Chern-Simons term modifies the asymptotic scaling as r → 0 <cit.>. Therefore, we cannot assume, as we did for the previous version of the model, that we can simply adopt the same boundary conditions and holographic dictionary as in a pure Maxwell theory. Instead one will need to carefully study the boundary conditions in order to set up the holographic duality; we leave this for future work.
Meanwhile, the equations of motion for the components of a parallel to the Fermi surface take the same form as they would for a U(1) gauge field in 3 space-time dimensions with the metric eq:3metric, with a Maxwell term in the action but no Chern-Simons term. Thus, we conclude that the optical conductivity will have the same form as for a zero-density strongly coupled system in one spatial dimension with dynamical critical exponent z. Therefore, the conductivity as a function of frequency and temperature will have a contribution that takes the form[Technically the exact correspondence with the one-dimensional system only holds at zero temperature; nevertheless one can still argue that the scaling eq:modified_scaling will hold.]
σ(ω,T) = T^-1/z f(ω/T),
where f is a scaling function. In particular, the DC part of eq:modified_scaling will scale as ∼ T^-1/z. We can compare this with the ∼ T^(d-2)/z scaling expected in a zero-density strongly coupled system in d spatial dimensions. If we parameterize the conductivity scaling as ∼ T^(d-2-θ)/z, where θ is a hyperscaling violation exponent, then what we see is that θ = d-1, the dimension of the Fermi surface. Indeed, it has previously been suggested <cit.> that in systems with a Fermi surface, the hyperscaling violation exponent should be equal to the dimension of the Fermi surface. However, in holographic theories studied previously, although one can introduce a hyperscaling violation exponent, it can take arbitrary values and it has not been clear what would enforce that θ = d-1. By contrast, in the model we are considering it emerges very naturally simply by imposing eq:more_generalized_gthth.
This intriguing property suggests that further study of the variant model that we are discussing in this section may prove highly fruitful.
§ OUTLOOK
We do not want to claim that the particular model that we have studied here will itself explain everything about non-Fermi liquids. Nevertheless, it seems a much more viable starting point for studying non-Fermi liquids than previous holographic models, since it explicitly builds in the basic property of a Fermi surface satisfying Luttinger's theorem. An interesting future direction will be to consider adding perturbations to the strongly coupled quantum field theory that explicitly break the LU(1) symmetry, in order to model umklapp or disorder scattering; such perturbations have natural correspondences on the gravitational side through the holographic dictionary. One could also try to find perturbations that lead to an instability to a superconductor, or to another kind of ordered phase such as Ising-nematic.
One can also hope to use the model as a testing ground for hypothesized general statements about compressible metals; for example, according to the claims of Ref. <cit.>, if we explicitly break LU(1) but retain a ℤ^2 ×U(1) subgroup corresponding to lattice translation symmetry and charge conservation, then the system should flow under RG to one in which the LU(1) symmetry is restored in an emergent sense, since compressible systems with lattice translation symmetry are supposed to have an infinite-dimensional emergent symmetry group. This should be a testable statement in our model.
Finally, the approach of designing holographic IR effective theories based on emergeability conditions or by targeting particular emergent symmetries and anomalies may be useful in other contexts beyond non-Fermi liquid metals. For example, a superfluid can be characterized <cit.> by its emergent higher-form symmetry <cit.>, which has a mixed anomaly with the 0-form charge U(1). Thus, one could hope to find a holographic model of a strongly coupled superfluid by studying an appropriate dynamical gauge field in the bulk with a Chern-Simons term. This idea was previously proposed as a future direction in Ref. <cit.>.
§ ACKNOWLEDGMENTS
I thank T. Senthil, Zhengyan Darius Shi, Meng Cheng, Subir Sachdev, Blaise Goutéraux, and Eric Mefford for helpful discussions. I was partly supported by the EPiQS initiative of the Gordon and Betty Moore foundation, grant nos. GBMF8683 and GBMF8684. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities.
§ SOLVING THE LINEARIZED EQUATIONS OF MOTION
From the bulk action described in Section <ref>, we obtain the classical equations of motion for the gauge field in the bulk:
∂_μ [(√(-g)) f^μν] = √(g_θθ)m/(2π)^2ϵ^νλγσ (∂_θ a_λ) (∂_γ a_σ),
where the indices range over the dimensions of space-time, but not the θ direction.
For the AdS_4 metric eq:ads4, this conveniently reduces to the same equations of motion as in flat space (when expressed in terms of the covariant field-strength tensor f_μν) since the (√(-g)) factor in the left-hand side exactly cancels the components of the inverse metric that appear when we raise the indices of f_μν.
According to the discussion in Section <ref>, we introduce the equilibrium configuration of the gauge field,
a_i^(0) = k_i(θ),
and then linearize eq:classical_eqs in perturbations about this configuration. Furthermore, we take all fields to vary as ∼ e^{i(q_i x^i - ω t)} in the t,x,y directions, and we choose a gauge in which we set a_r = 0. We obtain four equations of motion corresponding to setting ν = x,y,t or r in eq:classical_eqs. The first three can be collectively expressed as
∂_r^2 𝒜 + ℳ∂_r 𝒜 + Γ𝒜 = 0,
where we defined
𝒜 = [ a_t; a_x; a_y ]
and
Γ = [ -(q_x^2 + q_y^2) -ω q_x -ω q_y; ω q_x ω^2 - q_y^2 q_x q_y; ω q_y q_x q_y ω^2 - q_x^2 ]
and
ℳ = [ 0 -u^x -u^y; -u^x 0 0; -u^y 0 0 ],
with
u^i = m g_θθ^-1/2/(2π)^2ϵ^ijd/dθ k_j(θ).
We will henceforth work in a coordinate system such that u_x = u > 0, u_y = 0.
The fourth equation of motion can be written as
i q_i F^r i - iω F^r t = -u^i e_i,
where e_i = -i ω a_i - i q_i a_t are the components of the electric field in the x and y directions. Given the identifications eq:current_identification, at r=0 this is precisely the statement of the anomalous conservation equation eq:conservation_eqn in the dual boundary theory. Observe that if we take the derivative of eq:the_fourth with respect to r, then it follows from the other three equations of motion. Therefore, the only effect of eq:the_fourth will be to fix a constant of integration. For the moment, therefore, we just consider the solutions of eq:Aeqn.
Since this is a system of ODEs with constant coefficients, we can seek solutions of the form 𝒜∝ e^λ r, which gives
(λ^2 + λℳ + Γ) 𝒜 = 0.
This has a non-trivial solution for 𝒜 when
(λ^2 𝕀 + λℳ + Γ) = 0.
Solving this equation gives a double root at λ = 0, and the other four solutions are
λ = σ_1 √(-ω^2 + q_x^2 + q_y^2 + u/2(u + σ_2 √(4 q_y^2 + u^2))),
where σ_1 and σ_2 can take the values ± 1. If the argument of the outer square root is positive, then the boundary conditions at r →∞ require us to discard the solutions corresponding to σ_1 = +1 in eq:lambda, since they blow up exponentially as r →∞, and retain only the exponentially decaying solutions corresponding to σ_1 =-1. If the argument of the outer square root is negative, then λ is pure imaginary and the solutions correspond to radiative modes in the gravitational bulk that can propagate out to r →∞. In that case, the appropriate boundary condition to impose, consistent with causality, is that we keep only the mode that is radiating outwards from r=0, where the external fields are applied, towards r = ∞. This amounts to imposing that sgn(Imλ) = sgn(ω).
To allow us to handle both cases at once, we will take the convention that when the argument of the square root is negative, we choose the branch such that √(U) = -i sgn(ω)√(-U). Then we can always take the root with σ_1 = -1.
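As an independent check of eq:lambda (a small numerical sketch of ours with arbitrary parameter values), one can linearize the quadratic eigenvalue problem (λ^2 𝕀 + λℳ + Γ)𝒜 = 0 into an ordinary 6×6 eigenvalue problem and compare with the roots quoted above:

import numpy as np

w, qx, qy, u = 0.3, 0.7, 0.5, 2.0        # arbitrary test values (w = omega)

Gamma = np.array([[-(qx**2 + qy**2), -w*qx, -w*qy],
                  [w*qx, w**2 - qy**2, qx*qy],
                  [w*qy, qx*qy, w**2 - qx**2]])
M = np.array([[0.0, -u, 0.0], [-u, 0.0, 0.0], [0.0, 0.0, 0.0]])

# Companion-matrix linearization acting on z = (A, lam A).
companion = np.block([[np.zeros((3, 3)), np.eye(3)], [-Gamma, -M]])
lam_num = np.sort_complex(np.linalg.eigvals(companion))

lam_sq = [-w**2 + qx**2 + qy**2 + 0.5*u*(u + s2*np.sqrt(4*qy**2 + u**2))
          for s2 in (+1, -1)]
lam_ana = np.sort_complex(np.array([0.0, 0.0] +
          [s1*np.lib.scimath.sqrt(x) for x in lam_sq for s1 in (+1, -1)]))
print(np.allclose(lam_num, lam_ana, atol=1e-6))     # expect True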
We remark that, since λ=0 is a double root, we have the solutions 𝒜 = 𝒜_0 and 𝒜 = 𝒜_0 r + 𝒜_1,
where 𝒜_0 and 𝒜_1 satisfy Γ𝒜_0 = 0 and ℳ𝒜_0 + Γ𝒜_1 = 0. One can show that
𝒜_0 = [ -ω; q_x; q_y ],
and
𝒜_1 = u/(ω^2 - q_x^2 - q_y^2)[ q_x; -ω; 0 ].
Therefore, what we have shown so far is that the general solution will take the form
𝒜 = c_0 𝒜_0 + c_1 (𝒜_0 r + 𝒜_1) + c_2 𝒜_2 e^λ_2 r + c_3 𝒜_3 e^λ_3 r,
for some integration constants c_0, c_1, c_2, c_3. Here, λ_2 and λ_3 correspond to eq:lambda upon setting σ_1 = -1 and σ_2 = +1 (for λ_2) or σ_2 = -1 (for λ_3), and 𝒜_2 and 𝒜_3 are the corresponding eigenvectors. Next we need to impose eq:the_fourth. Because, as already mentioned, the r derivative of eq:the_fourth follows from the other three equations of motion, imposing eq:the_fourth at one value of r will be enough to imply that it is satisfied at all values of r. By sending r →∞, we find that we must set c_1=0.
The eigenvectors 𝒜_2 and 𝒜_3 have a somewhat complicated form, making the general calculation rather burdensome. However, a general statement that one can make is that the equations of motion only depend on u and (ω,q_x,q_y). This justifies our statement that the result for the currents will be of the form eq:general_form [the factor of √(g_θθ) comes from the final identification of the currents in the boundary theory, eq:current_identification]. We ultimately relied on Mathematica to handle the tedious algebra, perform the expansion in 1/u described in Section <ref>, and finally obtain the result given in Section <ref> for the p=-1 and p=0 terms of the expansion eq:u_expansion [The Mathematica notebook file used for the computations is attached to the arXiv submission as an “ancillary file”.]. Here, however, in order to facilitate physical interpretation, we describe a simplified version of the calculation that can reproduce the leading-order terms in the result, i.e. the p=-1 term in eq:u_expansion.
As u →∞, to leading order eq:lambda becomes
λ = ± u
and
λ = ±√(-ω^2 + q_x^2).
One can show that to leading order, the corresponding eigenvectors take the form
[ 1; ± 1; 0 ]
and
[ 0; 0; 1 ]
respectively
[to this order, the eigenvectors corresponding to the pair of eigenvalues eq:some_lambda with opposite signs are equal].
Thus, the λ = ± u modes are “radial” modes that involve the component of the gauge field perpendicular to the Fermi surface (i.e. in the coordinate system we are using, the a_x component) as well as the time component, while the λ = ±√(-ω^2 + q_x^2) modes are “circumferential” modes that involve the component of the gauge field parallel to the Fermi surface.
Thus, to leading order, the general solution eq:general_solution (setting c_1 = 0) becomes
a_t = c_2 e^-u r - c_0 ω,
a_x = -c_2 e^-u r + c_0 q_x,
a_y = c_3 e^-√(q_x^2 - ω^2) r + c_0 q_y.
We demand that at r =0, a_t,a_x,a_y are equal to the applied background field A_t,A_x,A_y. This gives
c_0 = (A_t + A_x)/(q_x - ω), c_2 = (q_x A_t + ω A_x)/(q_x - ω), c_3 = A_y + (A_t + A_x) q_y/(ω - q_x).
Finally, substituting into eq:current_identification and keeping only the terms that are formally of order p=-1 in the expansion eq:u_expansion gives the leading-order term in eq:jtjperp.
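The boundary-matching step can also be checked symbolically; the following sympy sketch (ours) solves the three conditions a_μ(r=0) = A_μ for c_0, c_2, c_3 and confirms the expressions quoted above:

import sympy as sp

c0, c2, c3, At, Ax, Ay, w, qx, qy = sp.symbols('c0 c2 c3 A_t A_x A_y omega q_x q_y')
sol = sp.solve([c2 - c0*w - At,          # a_t(0) = A_t
                -c2 + c0*qx - Ax,        # a_x(0) = A_x
                c3 + c0*qy - Ay],        # a_y(0) = A_y
               [c0, c2, c3], dict=True)[0]
print(sp.simplify(sol[c0] - (At + Ax)/(qx - w)))             # 0
print(sp.simplify(sol[c2] - (qx*At + w*Ax)/(qx - w)))        # 0
print(sp.simplify(sol[c3] - (Ay + (At + Ax)*qy/(w - qx))))   # 0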
§ COMPUTING CHARGE FLUCTUATIONS
In this appendix we will derive the formulas Eqs. (<ref>) and (<ref>) from the leading term in the retarded Green's function of the densities. From the results in Section <ref>, keeping only the p=-1 term in the expansion eq:u_expansion, we obtain
G^R_n(θ) n(θ') (𝐪,ω) = m/(2π)^2𝐪·𝐰(θ)/ω - q_⊥δ(θ - θ').
This has the same form as a non-interacting Fermi gas in d=2 spatial dimensions, with the Fermi velocity equal to 1. For generality, let us consider general spatial dimension d [in which case the Fermi surface is a (d-1)-dimensional manifold], and general Fermi velocity v_F(θ). Then the equivalent of eq:GR is
G^R_n(θ) n(θ') (𝐪,ω) = m/(2π)^d𝐪·𝐰(θ)/ω - 𝐪·𝐯_F(θ)δ^d-1(θ - θ'),
where we defined 𝐯_F(θ) = v_F(θ) 𝐰(θ) / |𝐰(θ)|. In general dimension 𝐰(θ) is defined according to
w^i(θ) = ϵ^i j_1 ⋯ j_d-1∂_θ_1 k_j_1(θ) ⋯∂_θ_d-1 k_j_d-1(θ),
where (θ_1, ⋯, θ_d-1) is some coordinate chart for the Fermi surface, and 𝐤(θ) is the momentum of the Fermi surface as a function of θ.
Now from the fluctuation-dissipation theorem, we have that the equal-time connected correlator of the densities is given by
⟨ n(θ,𝐪) n(θ',-𝐪') ⟩ = 1/2π∫_-∞^∞ dω 2[1 + n_B(ω)]Im G^R_n(θ),n(θ')(ω,𝐪) ×δ^d(𝐪 - 𝐪').
where the Bose factor n_B(ω) is defined by n_B(ω) := 1/(e^ω/T-1). The only contribution to the imaginary part of eq:GR_general comes from the pole at ω = 𝐪·𝐯_F(θ) (which has to be resolved in the usual way by shifting ω infinitesimally off the real axis), so we obtain
Im G^R_n(θ),n(θ')(ω,𝐪) = π m/(2π)^d𝐪·𝐰(θ) δ[ω - 𝐪·𝐯_F(θ)] δ^d-1(θ - θ').
Hence, at zero temperature where 1+n_B(ω) is just a Heaviside step function, we find
⟨ n(θ,𝐪) n(θ',-𝐪') ⟩ = m/(2π)^d𝐪·𝐰(θ) Θ(q_⊥) ×δ^d-1(θ - θ') δ^d(𝐪 - 𝐪'),
where Θ is the Heaviside step function, and as before q_⊥ is the component of 𝐪 parallel to 𝐰(θ) (i.e. perpendicular to the Fermi surface). To avoid UV divergences, we will introduce a UV cutoff by multiplying the right-hand side of eq:nq_correlator by an additional factor of e^-a q_⊥, which defines the cutoff scale a. Then, taking the Fourier transform gives
⟨ n(θ,𝐱) n(θ',𝐱')⟩_c = m|𝐰(θ)|/(2π)^d+11/(x_⊥ - x'_⊥ + ia)^2δ^d-1(𝐱_∥ - 𝐱_∥') δ^d-1(θ - θ').
where x_⊥ is the component of 𝐱 parallel to 𝐰(θ), and 𝐱_∥ is the projection of 𝐱 into the plane parallel to the Fermi surface, i.e. normal to 𝐰(θ).
Finally, we can compute the charge fluctuation in a region M:
(Δ Q_M)^2 = ∫_M d^d 𝐱∫_M d^d 𝐱' ∫ d^d-1θ∫ d^d-1θ' ⟨ n(θ,𝐱) n(θ',𝐱') ⟩_c.
Substituting eq:nx_correlator, we find
(Δ Q_M)^2 = m/(2π)^d+1∫ d^d-1θ |𝐰(θ)| ∫_M_d-1 d^d-1𝐱_∥∫_x_⊥^-(𝐱_∥)_M^x_⊥^+(𝐱_∥)_M dx_⊥∫_x_⊥^-(𝐱_∥)_M^x_⊥^+(𝐱_∥)_M dx_⊥'
×1/(x_⊥ - x_⊥' + ia)^2,
where [x_⊥^-(𝐱_∥)_M, x_⊥^+(𝐱_∥)_M] denotes the intersection of M with the 1-dimensional line of fixed 𝐱_∥ (here for simplicity we have assumed that the Fermi surface is convex so that this intersection is just a single interval, but this is not essential), and we only integrate 𝐱_∥ over the region M_d-1⊆ℝ^d-1 such that this intersection is non-empty. Performing the integral over dx_⊥ and dx_⊥' gives
log(x_⊥^+ - x_⊥^- + ia) + log(x_⊥^- - x_⊥^+ + ia) - 2 log(ia).
Up to subleading contributions this is just 2log[ (x_⊥^+ - x_⊥^-)/a ].
Hence, we find
(Δ Q_M)^2 = 2m/(2π)^d+1∫ d^d-1θ |𝐰(θ)| ∫_M_d-1 d^d-1𝐱_∥log( Δ x_⊥(𝐱_∥)_M/a).
Now suppose that our region M is obtained from a region Γ by rescaling by a factor L. Then we can write eq:DeltaQM_1 as
(Δ Q_M)^2 = 2m/(2π)^d+1 L^d-1∫ d^d-1θ |𝐰(θ)| ∫_Γ_d-1 d^d-1𝐱_∥[ log L + log( Δ x_⊥(𝐱_∥)_Γ/a) ].
Hence we find that
(Δ Q_M)^2 = λ_Γ L^d-1log L + o(L^d-1log L),
with the coefficient
λ_Γ = 2m/(2π)^d+1∫ d^d-1θ∫_Γ_d-1 d^d-1𝐱_∥ |𝐰(θ)|.
We can recognize this as an equivalent way of writing eq:lambdaGamma.
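As a consistency check (a small numerical sketch of ours, with arbitrary values of m, k_F and R), one can evaluate both eq:lambdaGamma and the projected form just derived for d = 2, a circular Fermi surface of radius k_F and Γ a disk of radius R; both reduce to λ_Γ = m k_F R/π^2:

import numpy as np

m, kF, R = 1.0, 1.3, 2.0
phi = np.linspace(0, 2*np.pi, 4001)     # angle on the boundary of Gamma
psi = np.linspace(0, 2*np.pi, 4001)     # angle on the Fermi surface

# Geometric form: (m/(2 pi)^3) int_{dGamma} dA_x int_F dA_k |n_x . n_k|
integrand = np.abs(np.cos(phi[:, None] - psi[None, :]))
lam_geom = m/(2*np.pi)**3*np.trapz(np.trapz(integrand, kF*psi, axis=1), R*phi)

# Projected form: (2m/(2 pi)^3) int dtheta |w| int_{Gamma_1} dx_par,
# with |w| = kF and every projection of the disk having length 2R.
lam_proj = 2*m/(2*np.pi)**3*(2*np.pi*kF)*(2*R)

print(lam_geom, lam_proj, m*kF*R/np.pi**2)     # all three agree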
|
http://arxiv.org/abs/2307.01373v2
|
20230703220136
|
A numerical variability approach to results stability tests and its application to neuroimaging
|
[
"Yohan Chatelain",
"Loïc Tetrel",
"Christopher J. Markiewicz",
"Mathias Goncalves",
"Gregory Kiar",
"Oscar Esteban",
"Pierre Bellec",
"Tristan Glatard"
] |
physics.med-ph
|
[
"physics.med-ph",
"cs.SE",
"D.2.5"
] |
Quantum theory of single-photon nonlinearities generated by ensembles of emitters
Dirk R. Englund
August 1, 2023
=================================================================================
Ensuring the long-term reproducibility of data analyses requires results stability tests to verify that analysis results remain within acceptable variation bounds despite inevitable software updates and hardware evolutions. This paper introduces a numerical variability approach for results stability tests, which determines acceptable variation bounds using random rounding of floating-point calculations. By applying the resulting stability test to fMRIPrep, a widely-used neuroimaging tool, we show that the test is sensitive enough to detect subtle updates in image processing methods while remaining specific enough to accept numerical variations within a reference version of the application. This result contributes to enhancing the reliability and reproducibility of data analyses by providing a robust and flexible method for stability testing.
§ INTRODUCTION
Data analyses can produce different results depending on the hardware and software conditions in which they are executed <cit.>, which has important repercussions in several disciplines. This paper investigates results stability tests whereby the outcome of data analysis is asserted to remain within acceptable variation bounds of a reference result. The primary challenge in developing such stability tests lies in the determination of acceptable bounds of variation around the reference result. We define such bounds from the numerical variability of the results, that is, from the variability inherent to numerical computations.
We focus on the use case of neuroimaging analyses, although our method applies to data analyses more broadly. For several decades, the neuroimaging community has developed advanced software tools (e.g., FSL <cit.>, FreeSurfer <cit.>, ANTs <cit.>, or AFNI <cit.>) enabling researchers to study the human brain with unprecedented detail and precision. Since neuroimaging tools now underpin scientific findings in several disciplines, it is imperative to thoroughly test them. In particular, neuroimaging studies often follow subjects over multiple years, which requires data analyses to be consistent over substantial periods.
While existing research has focused on best practices for the accuracy and consistency of neuroimaging results <cit.>, it is also important to consider computational environments. Indeed, software packages and operating systems can significantly impact specific analyses such as brain surface reconstruction and cortical thickness quantification. Notably, research has highlighted substantial differences across software packages working on identical brain data <cit.>. Moreover, studies underscored the effects of operating systems on measured cortical thickness across various software packages and versions <cit.>.
The present paper focuses primarily on the fMRIPrep software <cit.>, a tool to pre-process Magnetic Resonance Imaging (MRI) data as a pre-requisite of any further analysis. fMRIPrep is an integrated data analysis pipeline for structural and functional MRI pre-processing. We focus on the pre-processing of structural MRI, which includes intensity non-uniformity correction, skull stripping, and spatial normalization to a brain template. In response to these computational challenges that impact result stability, fMRIPrep developers recently initiated long-term support (LTS) releases to guarantee results stability over multiple years which is critical in longitudinal analysis. This motivated the development of the stability tests presented in this paper.
Our stability tests leverage the numerical variability that arises from using finite-precision arithmetic in calculations. Such variability primarily emerges due to changes in computational environments, which include hardware architecture, parallelization scheme, operating system, and software dependencies. To estimate numerical variability, we rely on random rounding <cit.>, a stochastic arithmetic technique that randomly rounds floating-point operation results to the previous or to the next floating-point number. Stochastic arithmetic has been successfully applied to simulate numerical variability in various domains, including neuroimaging <cit.>. Our approach is not specific to any particular numerical scheme and relies on a few statistical assumptions such as normality and independence, making it applicable to a wide range of scenarios.
In summary, the main contributions of this paper are the following:
* Define a numerical variability approach to results stability tests;
* Build results stability tests for structural pre-processing in fMRIPrep;
* Evaluate results of stability tests in several configurations.
§ RESULTS STABILITY TESTS DESIGN
Considering a data processing application Λ, the objective of our stability test is to determine whether the results generated by a different application Λ̃ significantly differ from the reference results produced by Λ. In practice, we are particularly interested in the case where Λ̃ corresponds to a different version of Λ or the same version executed in a different execution environment (operating system, parallelization or hardware).
Given input data I, we assume that Λ and Λ̃ produce images X and X̃ sampled on the same imaging grid (i.e., they showcase the same number v of voxels, and their orientation and resolution in physical coordinates do not differ by more than a very small error ϵ).
We model X as a random variable and we sample its distribution by computing n random numerical perturbations of Λ, resulting in n images X_k. Conversely, we compute X̃ without random perturbation, using the IEEE-754 standard <cit.>, to avoid computational overheads at test time.
The statistical test used to determine whether 'X̃ belongs to X' is discussed in detail in Subsection <ref>. To sample the reference distribution, we employ two methodologies, further elaborated in Subsection <ref>.
To capture anatomical variability and other sources of variability arising from the scanning device and sequence parameters, we test a diverse set of individuals collected from several studies, as discussed in Subsection <ref>.
To achieve precise reproducibility of the unperturbed sample, we control variables such as random seeding and multi-threading within the application, as detailed in Subsection <ref>.
Table <ref> summarizes our notations and Figure <ref> provides a comprehensive overview of our test workflow.
§.§ Statistical model
We preprocess the computed images X, X̃, leading to images X^⊥, X̃^⊥ (see <ref>). To test whether X̃^⊥ belongs to the distribution of X^⊥, we perform a z-test for each voxel x̃_i (i ≤ v) within the union of the B_k masks, using the mean μ̂_i and the standard deviation σ̂_i estimated from the n perturbed outputs.
The test computes a p-value p_i under the null hypothesis H_0,i that the tested voxel belongs to the reference distribution:
p_i(z_i) = 2 (1-Φ(z_i)),
where Φ is the cumulative distribution function of the normal centered Gaussian, and
z_i = x̃_i-μ̂_i/σ̂_i,
where x̃_i is the intensity of voxel i in X̃^⊥, v is the number of voxels within the union mask,
and μ̂_i and σ̂_i are the mean and standard deviation of voxel intensities estimated
from the n perturbed results X_k^⊥ (k ≤ n).
H_0,i is rejected when p_i is lower than a threshold α that also defines the confidence level of the test (1-α)%.
The z-test assumes that perturbed voxel intensities are normally distributed.
The test defined in Equation <ref> consists of independent z-tests performed for each of the v voxels
from each test image, resulting in a set of v p-values p_i, i ≤ v.
Because v is in the order of one million (depending on the image's resolution and brain size), it is critical
to correct for multiple comparisons and set an upper bound to false positive tests <cit.>.
We adjust the significance α level with classical Bonferroni correction <cit.>.
Therefore, the tested result is considered out of the reference distribution iff:
∃ i ≤ v s.t. p_i ≤α/v.
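For illustration, a minimal sketch of this voxelwise test, assuming the masked, preprocessed images are available as flattened arrays (the released test code may differ in details such as variance guards):

import numpy as np
from scipy.stats import norm

def stability_test(ref, test, alpha=0.05, eps=1e-12):
    # ref: (n, v) array of perturbed reference results; test: (v,) tested image.
    mu = ref.mean(axis=0)
    sigma = ref.std(axis=0, ddof=1)
    z = (test - mu) / np.maximum(sigma, eps)   # guard against zero variance
    p = 2.0 * (1.0 - norm.cdf(np.abs(z)))      # two-sided p-value per voxel
    v = test.size
    rejected = p <= alpha / v                  # Bonferroni-corrected threshold
    return not rejected.any(), int(rejected.sum())

rng = np.random.default_rng(0)
ref = rng.normal(size=(30, 10_000))            # synthetic reference distribution
print(stability_test(ref, rng.normal(size=10_000)))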
§.§ Numerical variability estimation
To estimate the distribution of reference results and compute μ̂_i and σ̂_i, we sample results distributions by applying two types of random numerical perturbations: (1) Random Rounding (RR), which randomly rounds function outputs in the GNU libmath mathematical library, and (2) Random Seed (RS), which varies the random seed used in the application. In neuroimaging, random seeds are typically employed to initialize optimization processes encountered in spatial normalization.
Random Rounding (RR) consists in rounding the exact result of a floating-point arithmetic operation toward the previous or next floating-point number <cit.>. RR is equivalent to applying Monte-Carlo Arithmetic (MCA <cit.>) to double-precision numbers with a virtual precision of 53 bits and to single-precision numbers with a virtual precision of 24 bits, which was shown to accurately simulate the effect of operating system updates on the structural MRI pre-processing pipelines of the Human Connectome Project (HCP) when applied to GNU libmath <cit.>. Structural HCP pipelines consist of tools assembled from the FSL <cit.> and Freesurfer <cit.> toolboxes, which makes them conceptually very similar to the structural pipeline targeted by our study.
RR is rigorously implemented in several tools including CADNA <cit.>, Verrou <cit.>, and Verificarlo <cit.>.
However, these tools incur substantial performance overheads which makes them hard to apply to compute-intensive applications. In addition, only Verrou supports RR instrumentation of GNU libmath as a standalone library <cit.>, and it does so by relying on quadruple precision, which is not scalable to entire neuroimaging pipelines.
Therefore, we implemented a fast, approximate RR method by randomly adding or removing 1 ulp (unit in the last place) to the outputs of GNU libmath's functions.
Our implementation, available on GitHub[<https://github.com/verificarlo/fuzzy/blob/master/docker/resources/libmath/fast/src/wrapping_script.c>] under Apache 2.0 license, only approximates RR as it applies a random perturbation to an already rounded result instead of to the exact result as done in rigorous implementations.
In practice, computing the exact result returned by GNU libmath's functions using tools like MPFR <cit.> is too costly for our use case.
Random Seed (RS) and RR trigger different types of variability. RR can be applied transparently to any application while RS is more specific to the type of analysis. Conversely, RR incurs a substantial performance overhead whereas RS does not.
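A minimal Python analogue of this fast random-rounding strategy is sketched below; the actual implementation wraps the GNU libmath functions in C, so the helper names here are illustrative (math.nextafter requires Python 3.9 or later).

import math
import random

def rr(value: float) -> float:
    # Randomly move an already-rounded result up or down by one ulp.
    direction = math.inf if random.random() < 0.5 else -math.inf
    return math.nextafter(value, direction)

def rr_call(fn, *args):
    # Apply a math function and randomly round its output.
    return rr(fn(*args))

samples = [rr_call(math.exp, 1.0) for _ in range(30)]
print(min(samples), max(samples))   # values differ by a few ulps around e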
§.§ Preprocessing of 's outputs
For the fMRIPrep application, the main structural derivative produced is X_k, that is, the T1-weighted MRI image corrected for intensity non-uniformity with ANTs and transformed to template space (this derivative has a dedicated file name in the fMRIPrep outputs). In addition to X_k, fMRIPrep produces a brain mask B_k, a segmentation into grey matter, white matter, and cerebrospinal fluid tissues, as well as probability maps for each of these tissues.
Before computing the p-values in Equation <ref>, we apply brain masking, smoothing, and intensity normalization to X_k. For brain masking, we mask X_k with the union of the brain masks produced across all perturbed results. We use the union of the brain masks rather than their intersection to capture variability across B_k masks.
For smoothing, we apply a spatial 3D Gaussian smoothing kernel with full-width at half-maximum () ranging from 0 mm to 20 mm.
The intensity values of X_k are normalized to the [0, 1] range with min-max scaling.
The resulting preprocessed image X_k^⊥ is used as input for the stability test.
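A minimal sketch of this preprocessing chain, assuming the images are available as NumPy arrays with isotropic voxels (the voxel size and helper name are illustrative; in practice the NIfTI files would be loaded with nibabel):

import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(x, union_mask, fwhm_mm, voxel_size_mm=1.0):
    # x: 3D image X_k; union_mask: boolean union of the B_k brain masks.
    x = np.where(union_mask, x, 0.0)                          # brain masking
    sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_size_mm
    x = gaussian_filter(x, sigma=sigma)                       # spatial smoothing
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-12)                       # min-max scaling to [0, 1]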
§.§ Numerical stability measure
As a by-product of test construction, we can measure numerical variability in the application results, which provides valuable information about their numerical quality.
We quantify numerical stability for each voxel using the number of significant bits as a metric. This metric helps distinguish the bits containing signal from those containing only noise. We compute the number of significant bits ŝ with probability p_s=0.95 and confidence 1-α_s=0.95 using the Significant Digits package[<https://github.com/verificarlo/significantdigits>] (version 0.1.2).
Significant Digits implements the Centered Normality Hypothesis approach described in <cit.>:
ŝ_i = -log_2 | σ̂_i/μ̂_i| - δ(n, α_s, p_s),
where μ̂_i and σ̂_i are the voxelwise average and standard deviation over the X_k^⊥ perturbed results (k ≤ n), and
δ(n, α_s, p_s) = log_2 ( √(n-1/χ^2_1-α_s/2)Φ^-1( p_s+1/2) )
is a penalty term for estimating ŝ_̂î with probability p_s and confidence level 1-α_s for a sample size n.
Φ^-1 is the inverse cumulative distribution of the standard normal distribution and χ^2 is the Chi-2 distribution with n-1 degrees of freedom.
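A minimal sketch of this estimate is given below; the Significant Digits package provides the reference implementation, and the chi-square quantile used here (lower quantile, i.e., right-tail probability 1-α_s/2) is our reading of the formula.

import numpy as np
from scipy.stats import chi2, norm

def significant_bits(samples, alpha_s=0.05, p_s=0.95, eps=1e-300):
    # samples: (n, v) array of perturbed voxel intensities X_k.
    n = samples.shape[0]
    mu = samples.mean(axis=0)
    sigma = samples.std(axis=0, ddof=1)
    delta = np.log2(np.sqrt((n - 1) / chi2.ppf(alpha_s / 2, n - 1))
                    * norm.ppf((p_s + 1) / 2))                 # penalty term
    return -np.log2(np.abs(sigma / (mu + eps)) + eps) - delta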
§.§ Data
We selected eight test subjects from sub-datasets in the OpenNeuro <cit.> data-sharing platform, representing a diversity of ages, sex, and study designs. The datasets include a motion study with children (ds000256), a long-term memory study with young adults (ds001748), and a motor process study with adults (ds002338). In addition, two sub-datasets involve steps of the pipeline that can affect its reproducibility, namely different field maps (ds001600) and non-structural images (ds001771). Table <ref> lists the dimension, voxels resolution, age and sex of each subject in the dataset.
§.§ Computing infrastructure
We processed the dataset using the Narval cluster managed by Calcul Québec and part of the Digital Research Alliance of Canada.
With our job submission parameters, we could access 1,145 computing nodes with 64 cores per node and 2 × AMD Rome 7532 @ 2.40 GHz with 256 MB L3 cache. We executed fMRIPrep in a Singularity container built from a Docker image available on DockerHub.
The container image used Ubuntu version 16.04.6 (LTS) with Linux kernel 4.18.0[Kernel build 4.18.0-372.19.1.el8_6.x86_64], GNU libc/libmath 2.23, and fMRIPrep 20.2.1.
We used Fuzzy built with Verificarlo.
Further spurious variability can arise from the changing initial state of the random number generators used by stochastic processes involved in the processing (e.g., optimization).
To control for such variability, we fixed fMRIPrep's master random seed as well as the skull-stripping random seed. We also fixed multi-threading so that calculations are performed in a deterministic order.
§ RESULTS
We computed n=30 perturbed results for each of the eight subjects and for the two types of perturbations RR and RS. For RS, we used fMRIPrep's command-line interface to set the random seed in all the pipeline components. We also computed an unperturbed IEEE result for each subject using the random seed used in RR (42). For each type of perturbation, we measured the numerical stability of fMRIPrep in terms of significant bits, and we built the stability test using different sizes of smoothing kernel and different confidence levels.
§.§ Measured numerical variability was high and it varied across subjects
Overall, the two types of perturbations (RR and RS) resulted in numerical uncertainties of comparable magnitude and behavior (Figure <ref>), which supports the validity of our results. The measured numerical variability was high, with mean significant bits ranging from 2.5 bits to 8.5 bits out of the 12 bits available in the data[The voxel intensity is encoded on 12 bits although it is embedded in a 16 bits format.]. The application appears to be highly sensitive to numerical and random seed perturbations.
We noted substantial discrepancies in numerical stability across subjects. For a given smoothing kernel size, the number of significant bits frequently varied in the ratio of 1 to 3 across subjects. Overall, smoothing tended to reduce numerical variability; however, this behavior was in general not monotonic and impacted subjects differently. The observed between-subject variability
demonstrates the importance of evaluating such applications on a representative set of datasets.
The numerical variability measured across perturbed samples showed regional variations compatible with anatomical features (Figure <ref> and Appendix <ref>). In particular, variability was maximal at the border of the brain mask, and it was overall higher in the gray matter than in the white matter.
This is consistent with previous observations of numerical variability in structural brain image analysis <cit.>.
In addition, numerical variability was also maximal in some focal regions, suggesting that spatial normalization may be unstable in these regions.
Our stability test will be more permissive, resulting in fewer rejections of voxels in these regions.
§.§ The results stability test passed sanity checks but usable FWHM and α values were data-dependent
We implemented three different sanity checks to evaluate the relevance of the stability test.
The leave-one-out check (<ref>) evaluates the specificity of the stability test, that is, its ability to accept results produced by the reference application Λ.
The IEEE check (<ref>) evaluates the specificity of the test by checking that it accepts the unperturbed application result and its sensitivity through the rejection of results produced by a different subject than the original result.
The corrupted template check (<ref>) evaluates the sensitivity of the stability test, that is, its ability to reject results produced by a corrupted version of the reference application.
§.§.§ Leave-one-out check
We implemented a “leave-one-out" (LOO) evaluation by constructing the stability test n times for n-1 perturbed results and applying it to the remaining perturbed result. By construction, the remaining perturbed result is sampled from the distribution of results produced by Λ and should therefore be accepted by the stability test.
To define a clear passing criterion for the LOO check, we modeled the LOO check using a binomial variable B(n,1-α) where n is the number of LOO iterations and 1-α is the probability that a perturbed sample is accepted by the stability test. Under H_0 for all voxels, we expect the following bound to be verified:
1-F(1_n;n,1-α) ≤α_0
where F(x;n,p) is the cumulative distribution function of the Binomial law B(n,p), and α_0=0.05.
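A minimal sketch of the LOO procedure is shown below, assuming the voxelwise stability test of the earlier sketch is passed in as `stability_test`; the binomial pass criterion is one plausible reading of the bound above, not necessarily the exact criterion used.

import numpy as np
from scipy.stats import binom

def loo_check(ref, stability_test, alpha=0.05, alpha0=0.05):
    # ref: (n, v) array of perturbed reference results.
    n = ref.shape[0]
    accepted = 0
    for k in range(n):
        training = np.delete(ref, k, axis=0)          # build the test on n-1 samples
        ok, _ = stability_test(training, ref[k], alpha=alpha)
        accepted += int(ok)
    # Pass if the acceptance count is not improbably low under B(n, 1 - alpha).
    p_value = binom.cdf(accepted, n, 1.0 - alpha)
    return p_value > alpha0, accepted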
*Results We applied the LOO check for different confidence values (1-α) and different FWHM values for the RR and RS perturbations (Figure <ref>). As expected, the stability test became increasingly permissive for increasing values of α (reduced confidence) and increasing values of FWHM. As previously observed, RR and RS behaved similarly overall. For each subject, there were values of α and FWHM such that the LOO check passed, which demonstrates the good specificity of the stability test.
However, the values of α and FWHM for which the LOO check passed varied considerably across subjects, presumably due to between-subject variations in input data quality and instability modes of spatial normalization. For instance, to pass the LOO check with α=0.05 for RR perturbations, subjects 5, 6, 7 and 8 required a smoothing size of FWHM = 12 mm and subjects 1 and 4 required FWHM = 15 mm. Subjects 2 and 3 required α=0.15 to pass the LOO check. Such discrepancies are not surprising given the heterogeneity in numerical variability previously observed across subjects. In practice, different values of α and FWHM must be used for each subject in the test dataset.
§.§.§ IEEE check
We constructed the stability test from the n perturbed results for each subject and applied it to the IEEE results (one per subject). The purpose of this check was twofold: (within subjects) to verify that IEEE results passed the stability test built from the reference distribution of their corresponding subject and (between subjects) to verify that IEEE results failed the stability test built from the reference distribution of other subjects.
*Within subjects for each subject, there was an α and FWHM pair such that all the within-subject IEEE checks passed (Figure <ref>). In particular, the within-subject IEEE check passed with FWHM = 15 mm for all subjects and α values for RR perturbations. For low FWHM sizes, the stability test rejected the IEEE sample of the reference subject, suggesting a lack of specificity in such cases.
*Between subjects The stability tests successfully rejected all the IEEE samples coming from other subjects, for all combinations of α and FWHM values, and consistently for both RR and RS perturbations. Therefore the stability test is sufficiently sensitive to detect between-subject variability even with a high smoothing kernel size.
We conclude from this experiment that the stability test is sensitive enough to reject results obtained from different subjects using the reference application and accept results obtained from the same subject using the reference application executed with random perturbations.
§.§.§ Corrupted template check
Multiple brain templates exist to spatially normalize subject data to a common space, and template selection substantially impacts the results <cit.>. Hence, errors in the template should lead to substantial differences in the results. The purpose of this check was to verify that results obtained from corrupted templates were correctly rejected by the stability test.
To do so, we generated corrupted versions of the MNI152NLin2009cAsym template used by fMRIPrep, where we incrementally zeroed the intensity of an increasing fraction of brain voxels selected uniformly. We then executed fMRIPrep in IEEE mode (without random perturbations) for each resulting corrupted template and each subject.
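A minimal sketch of the template-corruption step, assuming NIfTI images loaded with nibabel (paths and the helper name are illustrative):

import numpy as np
import nibabel as nib

def corrupt_template(template_path, brain_mask_path, fraction, out_path, seed=0):
    img = nib.load(template_path)
    data = np.asarray(img.dataobj, dtype=np.float32)
    mask = np.asarray(nib.load(brain_mask_path).dataobj) > 0
    idx = np.flatnonzero(mask)                               # brain voxels only
    rng = np.random.default_rng(seed)
    kill = rng.choice(idx, size=int(fraction * idx.size), replace=False)
    flat = data.reshape(-1)                                  # view into data
    flat[kill] = 0.0                                         # zero the selected voxels
    nib.save(nib.Nifti1Image(data, img.affine, img.header), out_path)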
*Results The stability test correctly rejected results obtained with the corrupted template above a subject-dependent threshold of corrupted voxels (Figure <ref>). Smoothing had a measurable effect that did not manifest clearly across subjects. We conclude that the stability test is sensitive to changes in the brain template which is a common source of variability in neuroimaging.
§.§ The results stability test detected subtle updates in image processing methods
The previous sanity checks demonstrated a satisfying level of sensitivity and specificity of our stability test for simple scenarios. Subsequently, we applied it to our main motivating use case: the detection of results differences across LTS versions. We tested the results produced by different versions of fMRIPrep, having built the results stability test for version 20.2.1. The goal of this experiment was to evaluate the ability of the stability test to detect significant software updates in fMRIPrep. The stability test accepted results produced by versions 20.2.0 to 20.2.4 for all subjects (Figure <ref>). However, the test rejected results produced by versions 20.2.5 and onwards, suggesting a substantial change in the analysis methods from version 20.2.5. Investigations with the developers revealed that the interpolation scheme involved in the resampling of piece-wise-smooth brain segmentation maps from FreeSurfer's preferred format (MGZ) into the NIfTI 1.0 format was changed in version 20.2.5 from trilinear interpolation to nearest neighbor (see pull-request https://github.com/nipreps/smriprep/pull/268<github.com/nipreps/smriprep/pull/268>). The ability of our results stability test to detect such subtle changes in the analysis supports its relevance in the release process of long-term support data analysis software.
§ DISCUSSION
We introduced a novel approach to build results stability tests for data analyses using numerical variability, with a specific focus on neuroimaging and the widely used fMRIPrep software. While libraries to test numerical stability exist and have been applied in other contexts, to our knowledge, this is one of the first attempts to systematically apply such an approach in the domain of neuroimaging and possibly in other data analysis domains.
This approach does not require an exact solution, a reference solution, or even an acceptable bound of variation around the computed solution.
Applying this approach to fMRIPrep, a highly popular neuroimaging preprocessing pipeline, we demonstrated non-negligible, subject-specific numerical errors that the tool introduces in the results as a consequence of numerical instability.
In addition, we showed how the framework successfully detected subtle algorithmic updates through fMRIPrep releases, while remaining specific enough to accept numerical variations within a reference version of the application.
Based on our findings, we contend that benchmarking neuroimaging and other computational tools is essential. This includes not only accuracy against gold standards (or golden standards when a ground-truth reference is absent) and performance, but also numerical stability.
Our stability test represents a practical tool that has the potential for widespread application, not just limited to the development environment. Our results stability test is compatible with continuous integration platforms, enabling automated validation of new commits. By choosing suitable data and confidence values, our results stability test can serve as a valuable addition to conventional code testing methodologies. Additionally, the figure templates presented in this paper can be reused to create informative visualizations for dashboards or similar data presentation platforms. This way, the results are easily interpretable and actionable.
Designing the results stability test involved multiple choices that we discuss hereafter.
First, we employed a parametric statistical test (Equation <ref>) that is less compute-intensive than non-parametric methods that typically demand larger sample sizes — in our case, more than 100 repetitions per subject — to achieve acceptable confidence levels (≥ 0.95).
Indeed, the computational and storage costs required to apply stochastic arithmetic to neuroimaging tools such as fMRIPrep are high, although they are incurred only during the construction of the test and not during its evaluation, which therefore does not impact test users. Moreover, our stability test operates at the level of individual voxels and is accepted or rejected globally for a given image. Refining the test by regions of interest in the image could provide additional insights about the application behavior.
The Bonferroni correction for multiple comparisons is conservative in our case given that the voxel intensities in a brain image are not independent.
Possible alternatives to this correction are presented in <cit.>. The use of Bonferroni correction reduces the sensitivity of our test and increases its specificity.
The random rounding method adopted to sample the null distribution of results produced by the reference application can be applied regardless of the type of input and output data involved. In our use case, random rounding resulted in comparable variability to the one obtained from varying the random seed, which increased the confidence in our results. However, it is important to note that random rounding and random seed are different types of perturbations, and there is no guarantee of producing comparable variability patterns in general. We applied random rounding to functions of the GNU libmath library since previous experiments had shown that neuroimaging analyses commonly rely on these functions. Other types of applications may require that random rounding be applied to other libraries. For instance, the work in <cit.> reports on the use of random rounding for deep neural networks, which required instrumenting the entire Tensorflow application and its dependencies.
The size of the spatial smoothing kernel had a notable impact on the performance of the stability test. Smoothing was required for the test to accept results produced by the reference application and therefore reach an acceptable level of specificity.
The smoothing process transforms the distribution of voxel intensities towards a Gaussian distribution, thus better aligning with the main assumption of the test.
Smoothing is a prevalent operation in neuroimage analysis to improve the signal-to-noise ratio. However, FWHM values usually remain lower than 8 mm while our test required 15 mm for most subjects. It would be interesting to further investigate why such large smoothing kernel sizes were required since large smoothing kernel sizes reduce the sensitivity of our test.
The results stability test behaved quite differently across the 8 test subjects, which motivates the inclusion of various datasets in such test cases.
To build our test, we selected subjects to reflect diverse acquisition parameters, resolutions, ages, and sexes.
However, it would be interesting to investigate two distinct effects: (1) differences between subjects, and (2) differences due to scanners or sequence parameters. For the first case, using a dataset from a single study is more appropriate since it consists of subjects who have undergone identical experimental protocols and procedures, thereby ensuring better control over acquisition parameters.
In the second case, to examine the impact of differing scanners or sequence parameters, a traveling subject approach would be more effective. In this method, a single subject undergoes MRI scans at various locations, using different scanners or imaging protocols. This controlled approach provides valuable insight into the influence of these variables.
In future work, we plan to extend our methodology to other data types and investigate its applicability under different statistical hypotheses.
Results stability tests could be applied to diverse scientific domains beyond neuroimaging, further enhancing the reliability and reproducibility of computational results. In the short term, we plan to extend our test to functional neuroimaging data, which involves 4D images.
§ ACKNOWLEDGMENTS
Computations were made on the Narval supercomputer from École de Technologie
Supérieure (ETS, Montréal), managed by Calcul Québec and the Digital Research Alliance of Canada. The
operation of this supercomputer is funded by the Canada Foundation for
Innovation (CFI), Ministère de l’Économie, des Sciences et de l’Innovation du
Québec (MESI) and le Fonds de recherche du Québec – Nature et technologies
(FRQ-NT).
§ BIOGRAPHY SECTION
Yohan Chatelain
is a postdoctoral researcher in the
Big Data Infrastructures for Neuroinformatics lab at the University of
Concordia, Montreal, Canada.
He received his Ph.D. degree from the University of Paris-Saclay
(UVSQ), Versailles, France in 2019. His research topics include computer arithmetic, high-performance computing, and reproducibility. He aims to democratize stability analysis of scientific computing codes through automatic tools that improve numerical quality.
Loïc Tetrel
is an R&D Engineer on Kitware Europe’s computer vision team.
Prior to joining Kitware, Loïc was a Data Scientist in Neuroscience at SIMEXP lab (University of Montreal) where he worked on neuroimaging software, machine learning, and research data platforms. He contributed to TensorFlow, Nilearn and Binderhub. Loïc also worked at Straumann Group as a computer vision engineer for 2 years, mainly working on 3D reconstruction algorithms.
He holds an engineering degree from INSA Lyon and an M.A.Sc from ETS Montréal.
At Kitware Europe, Loïc brings his competencies in Machine Learning and 3D modeling to the team, in particular in the medical domain.
Christopher J. Markiewicz
is a software developer for the Poldrack Lab at Stanford University and a research affiliate of the McGovern Institute for Brain Research. He has been a core developer of fMRIPrep and FitLins, and is a contributor to and maintainer of several open-source, Python neuroimaging libraries, including Nipype and NiBabel.
Mathias Goncalves
is a software developer in the Department of Psychology at Stanford University, working in the Poldrack Lab. He is interested in building and improving tools for neuroimaging analysis and develops several open-source projects including fMRIprep and Nipype.
Gregory Kiar is a research scientist at the Child Mind Institute. Throughout his
degrees in Biomedical Engineering, Greg has developed techniques to study
biosignal data, a turn-key tool that allows researchers to generate maps of
brain connectivity from diffusion MRI data, and techniques to assess and
improve the stability of neuroscience research. Greg’s research bridges
the fields of numerical analysis and brain imaging to evaluate and improve
the trustworthiness of techniques used to study the brain.
Oscar Esteban
is a Research and Teaching Ambizione FNS Fellow at the Service of Radiology of the Lausanne University Hospital (CHUV) and the University of Lausanne. Oscar’s research aims at pushing the boundaries of neuroimaging — magnetic resonance imaging (MRI) mostly, — and by that, help other researchers advance our understanding of the human brain. In more specific terms, Oscar is currently developing tools that cater to researchers with “analysis-grade” data (see www.nipreps.org for more on this concept,) so they can focus on statistical modeling and inference. Perhaps, the flagship of these tools is fMRIPrep. This drive for the preprocessing step of the neuroimaging research workflow is justified by the concerning methodological variability that negatively contributes to reproducibility in the field. In particular, Oscar wants to improve the computational reproducibility of our results and minimize this methodological variability in the preprocessing step by standardizing workflows and reaching consensus implementations. In the longer term, Oscar’s vision is to contribute to uncovering the interplay of structure, function, and dynamics of brain connectivity using MRI.
Pierre Bellec
is the principal investigator of the laboratory for brain simulation and exploration (SIMEXP), as well as the scientific director of the Courtois Project on Neuronal Modelling (CNeuroMod), which uses human neuroimaging data to help train large artificial neural networks on a variety of cognitive tasks. He is also the director of Unité de neuroimagerie fonctionnelle at the “Centre de recherche de l’institut de gériatrie de Montréal” and an associate professor at the Psychology department at Université de Montréal. Dr Bellec is a senior fellow (”chercheur boursier senior”) of the ”Fonds de Recherche du Québec - Santé”.
Tristan Glatard is Associate Professor in the Department of Computer Science and Software Engineering at Concordia University in Montreal, Canada, where he holds a Canada Research Chair (Tier II) on Big Data Infrastructures for Neuroinformatics.
Before that, he was a research scientist at the French National Centre for
Scientific Research and Visiting Scholar at McGill University.
§ NUMERICAL VARIABILITY RESULTS FOR DIFFERENT SMOOTHING KERNEL SIZES
|
http://arxiv.org/abs/2307.01982v1
|
20230705015550
|
An Envy-Free Online UAV Charging Scheme with Vehicle-Mounted Mobile Wireless Chargers
|
[
"Yuntao Wang",
"Zhou Su"
] |
cs.GT
|
[
"cs.GT"
] |
[
Kolahal Bhattacharya
========================
In commercial unmanned aerial vehicle (UAV) applications, one of the main restrictions is UAVs' limited battery endurance when executing persistent tasks.
With the mature of wireless power transfer (WPT) technologies, by leveraging ground vehicles mounted with WPT facilities on their proofs, we propose a mobile and collaborative recharging scheme for UAVs in an on-demand manner. Specifically, we first present a novel air-ground cooperative UAV recharging framework, where ground vehicles cooperatively share their idle wireless chargers to UAVs and a swarm of UAVs in the task area compete to get recharging services. Considering the mobility dynamics and energy competitions, we formulate an energy scheduling problem for UAVs and vehicles under practical constraints. A fair online auction-based solution with low complexity is also devised to allocate and price idle wireless chargers on vehicular proofs in real time. We rigorously prove that the proposed scheme is strategy-proof, envy-free, and produces stable allocation outcomes. The first property enforces that truthful bidding is the dominant strategy for participants, the second ensures that no user is better off by exchanging his allocation with another user when the auction ends, while the third guarantees the matching stability between UAVs and UGVs. Extensive simulations validate that the proposed scheme outperforms benchmarks in terms of energy allocation efficiency and UAV's utility.
§ INTRODUCTION
The emerging unmanned aerial vehicles (UAVs) have gained significant success in various applications such as crop surveys, search and rescue, and infrastructure inspection <cit.>.
Thanks to their low cost, flexible deployment, and controllable maneuverability, UAVs mounted with rich onboard sensors can be fast dispatched to enable autonomous and on-demand mission execution (e.g., sensing and communication recovery) anytime and anywhere <cit.>.
However, commercial UAVs such as quadrotors generally have stringent space and weight limitations, causing inherent constrained battery endurance to support long-duration missions. For example, most mini-UAVs (powered by lithium-ion or lithium polymer batteries) only afford up to 90 minutes of endurance <cit.>. Besides, in executing complex and persistent tasks, relevant compute-intensive operations with video streaming and image processing may consume a considerable amount of UAV's battery energy <cit.>.
Notably, increasing UAV's battery capacity beyond a certain point can degrade its flight time due to excessive weight <cit.>.
Hence, it is crucial to design effective battery recharging approaches to sustain the life cycle of a UAV flight.
A number of research efforts have been made to address the UAV battery recharging issue, which can be mainly divided into three types: energy harvesting <cit.> from the environment (e.g., solar and wind energy), battery hotswapping <cit.> at battery swap stations, and wireless charging <cit.> using wireless chargers. In energy harvesting, the energy output of outfitted photovoltaic (PV) arrays or turbine generators can be intermittent and uncertain, and it relies heavily on weather conditions. Besides, the added size and weight on the UAV may make it difficult to land safely at dedicated locations.
Battery hotswapping generally involves human labor to replace the UAV's depleted battery with a fully charged one, which hinders autonomous UAV operations in inaccessible or hazardous places <cit.>.
Moreover, it can incur high round-trip energy costs for frequent battery replacement operations.
With the recent breakthrough in wireless power transfer (WPT) techniques, UAVs can be conveniently charged by distributed wireless chargers in a fully automatic manner <cit.>.
As reported, the commercial WPT product of Powermat company can transfer 600 W of wireless power over a distance of up to 150 mm to small or medium UAVs with over 90% energy efficiency and high misalignment tolerance <cit.>.
However, deploying and maintaining such static wireless chargers at large-scale task areas (e.g., survivor rescue in disaster sites) can be costly and time-consuming, especially in environmentally harsh terrains.
Besides, the solutions built on static wireless chargers usually lack feasibility and on-demand energy supply capabilities for UAVs, as well as restricting UAVs' operations within specific geographical areas.
In this paper, as shown in Fig. <ref>, we focus on an on-demand and cost-effective solution by leveraging unmanned ground vehicles (UGVs) with controllable mobility, where UGVs equipped with wireless charging facilities are deployed in task areas and collaboratively offer sufficient wireless energy supply to prolong the lifetime of the UAV network.
In academia and industry, research works <cit.> and companies such as Renault <cit.> and DSraider <cit.> have developed such mobile and collaborative platforms for efficient UAV launching, recycling, and recharging using a special compartment in the UGV's roof.
Despite the fundamental contributions on system and protocol design of existing literature <cit.>, the double-side energy scheduling along with user fairness in UGV-assisted wireless rechargeable UAV networks (VWRUNs) are rarely studied, which motivates our work.
On one hand, compared with fixed chargers, the size of charging (also landing) pads on roofs of UGVs are comparably smaller <cit.>, thereby restricting the number of concurrent charging UAVs. Moreover, in complex missions (e.g., large-scale surveillance), it usually depends on the coordination among multiple UAVs due to the limited capacity (e.g., sensing range) of a single UAV <cit.>.
Consequently, in highly dynamic VWRUNs, efficient real-time charging scheduling among multiple UAVs and multiple UGVs is of necessity to motivate their energy cooperation.
On the other hand, as UAVs and UGVs are self-interested agents and mutually distrustful, they may behave strategically to maximize their gains and even perform market manipulation by diminishing the legitimate interests of others <cit.>. For example, strategic UGVs may collude to overclaim their energy costs for higher payments from UAVs. Besides, the envy-freeness (i.e., no agent envies the allocation of another agent) <cit.>, as an essential metric to ensure market fairness, is neglected in most existing works. The violation of envy-freeness may result in low user willingness and acceptance, eventually lowering energy allocation efficiency.
Therefore, it remains an open and vital issue to design a real-time and envy-free charging strategy among UAVs and UGVs while preventing strategic behaviors and motivating their dynamic cooperation in VWRUNs.
To address the above issues, we adopt a market-based approach by formulating the double-side charging scheduling problem among multiple UAVs and UGVs in VWRUNs as an online sealed-bid auction, where both UAVs (i.e., buyers) and UGVs (i.e., sellers) are allowed to send their sealed bidding information (including trading time, valuation, supply/demand energy volume, etc.) anytime to the auctioneer (i.e., the ground station).
The auctioneer collects the bids within a maximum waiting time and publishes the auction outcome when the auction ends. UAVs that fail to match a desired UGV can participate in the next-round auction or, alternatively, fly to a nearby static energy swap/charging station to replenish energy.
Besides, our proposed scheme allows UAVs and UGVs to dynamically join and exit the auction process.
Using rigorous theoretical analysis, we prove that the proposed battery recharging auction is strategy-proof (i.e., able to resist strategic agents) and produces envy-free allocations (i.e., ensuring market fairness).
The main contributions of this paper are summarized as below.
* We propose an on-demand and collaborative UAV recharging framework by employing idle wireless chargers mounted on mobile UGVs to replenish energy for multiple UAVs in executing long-term tasks. An optimization problem for energy scheduling is formulated in VWRUNs based on the UGV type model and UAV's state-of-charge (SoC) model under practical constraints.
* We devise an online auction-based approach to solve the real-time energy scheduling problem with low complexity, consisting of the winner determination phase and pricing phase. We theoretically analyze the equilibrium strategy of participants and rigorously prove its strategy-proofness, envy-freeness, and stability.
* We carry out extensive simulations to evaluate the feasibility and effectiveness of the proposed scheme. Numerical results demonstrate the superiority of the proposed scheme in terms of energy allocation efficiency, UAV's utility, and social surplus, in comparison with conventional schemes.
The remainder of this paper is organized as follows. Section <ref> surveys the related literature, and Section <ref> introduces the system model. The detailed design of the proposed scheme is presented in Section <ref>, and its performance is evaluated in Section <ref> using simulations. Finally, this paper is concluded in Section <ref>.
§ RELATED WORKS
In this section, we review related literature on static/mobile wireless charging solutions and charging scheduling approaches in UAV networks.
§.§ Static and Mobile Wireless Charging Solutions for UAVs
In modern UAV applications, limited battery capacity poses a significant operational challenge to UAVs' flight durations, especially in executing large-scale persistent missions.
Compared with contact-based conductive charging techniques, the promising WPT technology offers a contact-free and fully automatic wireless charging solution for UAVs while withstanding challenging weather conditions <cit.>.
Based on the transmission range, current WPT techniques for UAVs can be categorized into two types <cit.>: near-field and far-field. The former mainly includes magnetic resonance coupling (MRC) <cit.> and capacitive coupling <cit.>, while the latter usually refers to as non-directive radio frequency (RF) radiation such as laser charging <cit.> and WISP-reader charging <cit.>.
Existing WPT-based UAV charging approaches are mainly built on static wireless charging pads located at building rooftops, power poles, cell towers, etc. For large-scale UAV missions, it highly relies on and bears the costly deployment/maintenance fee of additional wireless charging infrastructures.
In the literature, few works have attempted to design mobile wireless chargers to build a feasible and on-demand solution to sustain large-scale persistent UAV operations. Wu et al. <cit.> implemented a collaborative UAV-UGV recharging system, where the UGV is equipped with an object-tracking camera (to automatically maneuver towards the UAV) and a Qi charger on the landing pad (to wirelessly transfer energy to the UAV after landing).
To address the misalignment issues between transmit and receiving coils in WPT-based UAV recharging systems, Rong et al. <cit.> designed an optimized coupling mechanism for UAVs with high misalignment tolerance based on the genetic algorithm. A real implementation shows that the designed UAV recharging system can transfer a maximum power of 100 W with a WPT efficiency of 92.41%.
Ribeiro et al. <cit.> investigated the route planning problem for multiple mobile charging platforms (which can travel to different locations) to support long-duration UAV operations. In <cit.>, the routing problem is formulated as a mixed-integer linear programming (MILP) model and solved using a genetic algorithm together with a construct-and-adjust heuristic method.
There have been several recent studies leveraging WPT for wireless services and UAV services. Wu et al. <cit.> proposed a non-orthogonal multiple access (NOMA)-aided federated learning (FL) framework with WPT, where the base station uses WPT to recharge end devices that perform local training and data transmission in FL. A layered algorithm was also designed in <cit.> to minimize FL convergence latency and overall energy consumption under practical constraints.
Shen et al. <cit.> studied a UAV-aided flexible radio resource slicing mechanism in 5G uplink radio access networks (RANs), where the joint 3D placement of UAVs and UAV-device association problem was formulated via an interference-aware graph model. In addition, a lightweight approximation algorithm and an upgraded clique method were devised in <cit.> for reduced complexity.
However, existing works mainly focus on the system design and trajectory planning of VWRUNs, whereas the double-side energy scheduling among UAVs and UGVs along with the user fairness in the energy charging market are rarely studied.
§.§ WPT Charging Scheduling Methods for UAVs
In the literature, there has been an increasing interest in designing WPT-based charging scheduling methods for UAVs.
Zhao et al. <cit.> proposed a power optimization method in a static laser charging system for a rotary-wing UAV. A non-convex optimization problem is formulated with coupling variables and practical mobility, transmission, and energy constraints, and two algorithms are devised to search the optimal strategy with guaranteed convergence to stationary solutions.
Li et al. <cit.> designed an energy-efficient charging time scheduling algorithm to turn on static wireless chargers (SWCs) in scheduled time periods with the aim to minimize SWCs' energy waste in wirelessly charging UAVs. In their scheme, UAVs' continuous flight trajectories are discretized in both temporal and spatial dimensions, and a pruning-based exhaustive method is devised for near-optimal solution searching.
Yu et al. <cit.> studied the problem of route planning for a mini-UAV to visit multiple sensing sites within its battery lifetime, where UGVs acting as mobile recharging stations can offer battery recharging services for the UAV along its tour.
Shin et al. <cit.> presented a second-price auction-based charging time scheduling method between multiple UAVs and a ground vehicle, where the ground vehicle serves as a mobile charging station and the auctioneer. In their auction, multiple UAVs bid for the vehicle's charging time slots, and the UAV with the highest valuation is assigned as the winner.
One can observe that existing works on UAV charging mainly focus on the one-to-one charging pattern <cit.> or many-to-one charging pattern <cit.>, which is not applicable in our considered scenario with multiple UAVs and multiple UGVs. Besides, due to the high dynamics of VWRUNs and potential strategic entities during charging scheduling, real-time and fair energy scheduling should be enforced, which is rarely studied. Distinguished from existing works, we design an envy-free online auction mechanism for efficient many-to-many charging scheduling among multiple UAVs and UGVs with consideration of their mobility dynamics, double-side competition, and practical constraints.
§ SYSTEM MODEL
In this section, we first elaborate on the system model including the network model (in Sect. <ref>) and UAV energy consumption and wireless charging model (in Sect. <ref>). The notations used in this paper is summarized in Table <ref>.
§.§ Network Model
As depicted in Fig. <ref>, we consider a typical scenario of VWRUN in a given investigated area, which mainly consists of a swarm of I UAVs, a fleet of J UGVs, and a ground station (GS).
UAVs. Due to the limited sensing coverage and energy supply of a single UAV, a swarm of UAVs, denoted by the set ℐ = {1,⋯, i,⋯,I}, are dispatched to collaboratively execute a common mission (e.g., air quality monitoring and geographic surveying) in the given task area <cit.>. The sensing spot of UAV i ∈ℐ is denoted as a circle with center (x_i,y_i,0) and radius R_i. UAVs can communicate with each other using air-to-air (A2A) communications <cit.>.
Let T denote the finite time horizon, which is evenly divided into N time slots with duration Δ_t <cit.>, i.e., T = N·Δ_t. The instant 3D location of UAV i ∈ℐ at t-th time slot (1≤ t ≤ T) is denoted by 𝐥_i[t] = (x_i[t],y_i[t],z_i[t]), where (x_i[t],y_i[t]) is its instant horizontal coordinate.
The instant altitude z_i[t] of UAV i satisfies
R_i/tan(θ_i) ≤ z_i[t] ≤ z_max,
where θ_i is the maximum detection angle of UAV i's sensor and z_max is its maximum flight altitude. Besides, ||𝐥_i[t+1] - 𝐥_i[t]|| ≤v_maxΔ_t, ∀ 1≤ t ≤ T-1, where v_max is the maximum velocity of UAV.
The state-of-charge (SoC) of UAV i's on-board battery at t-th time slot is s_i[t], which satisfies s_min≤ s_i[t] ≤ C_i. Here, C_i is the battery capacity of UAV i and s_min is the minimum reserved battery energy to prolong the battery lifetime <cit.>. For UAV i, when its remaining battery SoC s_i[t] is below the alert level s_min, it leaves its sensing spot for recharging and another UAV can cooperatively replace this low-battery UAV i at the target sensing spot to offer uninterrupted sensing service <cit.>. The charging urgency of each UAV i is computed as
ρ_i[t] = 1 - s_i[t] - s_min/C_i, and ρ_i[t] ∈ [0,1].
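For illustration, the charging urgency can be computed as follows (the numbers are placeholders, not values from the paper):

def charging_urgency(soc, s_min, capacity):
    # rho_i[t] = 1 - (s_i[t] - s_min) / C_i, clipped to [0, 1]
    rho = 1.0 - (soc - s_min) / capacity
    return min(max(rho, 0.0), 1.0)

print(charging_urgency(soc=20.0, s_min=10.0, capacity=100.0))   # 0.9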
UGVs. A fleet of UGVs equipped with wireless charging facilities on their roofs is deployed in the investigated area to collaboratively offer on-demand wireless energy supply for low-battery UAVs <cit.>. The set of UGVs is denoted as 𝒥 = {1,⋯, j,⋯,J}. UGVs are smart vehicles integrated with various advanced sensors that allow them to drive autonomously to the rendezvous and automatically perform UAV tracking, launching, and recharging operations on the roof-mounted charging pad. Let C_j and s_j[t] denote the total and remaining wireless energy supply of UGV j for UAV recharging, respectively. It is assumed that s_j[t] ≥max_i ∈ℐ{s_i^sat - s_i[t]}.
Besides, UGVs generally have diverse quality of recharging service (QoRS), which is affected by various factors such as the wireless charging rate and the driving distance to the task area. Let q_j[t] denote the QoRS of UGV j at t-th time slot. A higher QoRS indicates the higher charging preference of UAVs.
GS. In VWRUN, the aerial UAV subnetwork and the ground vehicular subnetwork are coordinated by the GS (denoted by Λ) <cit.>. The GS is located at a micro base station and can perform flight planning, flying control, and task assignment for UAVs via ground-to-air (G2A) links. Moreover, after receiving recharging requests from UAVs, the GS can schedule the UGVs with idle wireless chargers in its communication range via infrastructure-to-vehicle (I2V) links <cit.> to offer on-demand recharging services.
§.§ UAV Energy Consumption and Wireless Charging Model
To avoid collisions and save energy in flight, UAVs need to fly horizontally over the task area and hover above the assigned task spot to perform sensing missions <cit.>. In the energy recharging process, for simplicity, each UAV i vertically descends to the target UGV's roof and vertically ascends to a preset altitude after reaching the satisfactory SoC level s_i^sat <cit.>. According to <cit.>, the required flying power at a constant speed v_i can be approximated as:
P_i^fly (v_i) = κ_1 v_i^3 + (κ_2 + κ_3) Ψ^3/2,
where κ_1, κ_2, κ_3 are constant UAV-related factors, Ψ is the thrust of UAV <cit.>. The hovering power of UAV i is P_i^hov = (κ_2 + κ_3)(m g)^3/2, where m is UAV's mass and g=9.8 m/s^2. Given the fixed descending speed v_i^d and ascending velocity v_i^a, the required power of UAV in the descending process and ascending process can be separately expressed as <cit.>:
P_i^d (v_i^d) =ϵ_1 m g [√((v_i^d)^2/4+m g/(ϵ_2)^2) - v_i^d/2] + κ_3(m g)^3/2,
P_i^a (v_i^a) =ϵ_1 m g [√((v_i^a)^2/4+m g/(ϵ_2)^2) + v_i^a/2] + κ_3(m g)^3/2,
where ϵ_1, ϵ_2 are constant UAV-related factors.
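A minimal sketch of this power model is given below; all constants (kappa_i, epsilon_i, mass, thrust) are illustrative placeholders rather than values from the paper.

import math

G = 9.8  # gravitational acceleration, m/s^2

def p_fly(v, k1, k2, k3, thrust):
    # Horizontal flight power at constant speed v
    return k1 * v**3 + (k2 + k3) * thrust**1.5

def p_hover(m, k2, k3):
    # Hovering power
    return (k2 + k3) * (m * G)**1.5

def p_vertical(v, m, e1, e2, k3, descending=True):
    # Descending uses -v/2 inside the bracket, ascending uses +v/2
    sign = -1.0 if descending else 1.0
    root = math.sqrt(v**2 / 4.0 + m * G / e2**2)
    return e1 * m * G * (root + sign * v / 2.0) + k3 * (m * G)**1.5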
Let P_j^e denote the wireless power transferred by the UGV j. Then, the battery dynamics of UAV i can be described as a linear model, i.e.,
s_i[t+1] = s_i[t] + [ α_i^1, α_i^2, α_i^3, α_i^4, α_i^5 ] × [ - η_i P_i^fly, - η_i P_i^hov, - η_i P_i^d, - η_i P_i^a, η_i η_j P_j^e ]^T,
where α_i^u ∈{0,1}, u ∈{1,⋯,5}, are binary variables denoting the state of UAV i, and η_j is the wireless power efficiency of UGV j. Here, α_i^u=1 means that UAV i is in the corresponding state (i.e., horizontally flying, hovering, vertically descending, vertically ascending, or wireless charging); otherwise, α_i^u=0.
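For illustration, the linear SoC update can be sketched as follows, with exactly one active state per slot; the state labels and helper name are illustrative.

def soc_update(s, state, eta_i, eta_j, p_fly, p_hov, p_d, p_a, p_e):
    # state is one of: 'fly', 'hover', 'descend', 'ascend', 'charge'
    delta = {
        'fly':     -eta_i * p_fly,
        'hover':   -eta_i * p_hov,
        'descend': -eta_i * p_d,
        'ascend':  -eta_i * p_a,
        'charge':   eta_i * eta_j * p_e,
    }[state]
    return s + delta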
§ THE PROPOSED SCHEME
This section first formulates the online charging scheduling and pricing problem for UAVs and UGVs in VWRUNs (in Sect. <ref>). Then, an auction-based solution with strategy-proofness and envy-freeness is designed, followed by the theoretical analysis of its properties (in Sect. <ref>).
§.§ Online Charging Scheduling and Pricing (OCSP) Problem
As shown in Fig. <ref>, the auction-based UAV charging scheduling process is carried out by the GS in an online manner, where UAVs are allowed to bid at any time and the bid collection phase ends once a maximum waiting time τ elapses.
Let b = (b_1,⋯,b_i,⋯,b_I(τ)) denote the bid profile of all UAVs in ℐ(τ) ⊆ℐ. Here, ℐ' = ℐ(τ) is the set of low-battery UAVs i ∈ℐ with s_i[t] ≤ s_min for some t ∈τ, and b_-i is the bid profile of all UAVs except UAV i.
In the auction, the valuation (i.e., reserve price) of UAV i for a recharging service is associated with its energy state and charging urgency, i.e., Φ_i[t] = Φ_i(ρ_i[t]). Generally, the higher the charging urgency, the larger the valuation and the larger the marginal valuation. Hence, dΦ_i(ρ_i[t])/dρ_i[t]> 0 and d^2 Φ_i(ρ_i[t])/dρ_i[t]^2≥ 0. In the following, we define the utility functions of UAVs and UGVs, as well as the social surplus.
The utility function of UAV i ∈ℐ is its revenue minus its payment:
𝒰_i = {[ ∑_j∈𝒥β_i,j[q_j Φ̅_i - p_j( b)], i ∈ℐ',; 0, i ∈ℐ\ℐ'. ].
Remark. In Eq. (<ref>), the binary variable β_i,j∈{0,1} indicates the allocation outcome, where β_i,j=1 if UAV i is allocated to get charged at UGV j; otherwise, β_i,j=0. q_j is the QoRS of UGV j, which is assumed to remain unchanged during the time window τ. Φ̅_i is the average valuation of UAV i during the time window τ, computed as Φ̅_i = ⌊τ/Δ_t⌋^-1·∑_t∈τΦ_i[t]. p_j(b) is the payment to UGV j.
The utility function of UGV j ∈𝒥 is associated with its payment, i.e.,
𝒰_j = {[ ∑_i∈ℐ'β_i,j p_j(b), j ∈𝒥',; 0, j ∈𝒥\𝒥'. ].
Remark. In Eq. (<ref>), 𝒥'=𝒥(τ) denotes the set of UGVs with idle WPT facilities during time window τ, where 𝒥(τ)⊆𝒥.
The social surplus is defined as the overall utility of involved entities <cit.>, i.e.,
𝒮 = ∑_i∈ℐ𝒰_i + ∑_j∈𝒥𝒰_j = ∑_i∈ℐ'∑_j∈𝒥'β_i,j q_j Φ̅_i .
Besides, the UAV recharging auction should be strategy-proof and envy-free to prevent strategic entities and ensure market fairness, whose formal definitions are given as below.
The UAV recharging auction 𝔸 satisfies strategy proofness if the following two properties hold <cit.>:
* (i) Individual rationality (IR): both UAVs and UGVs acquire non-negative utilities in the auction, i.e., 𝒰_i≥ 0, ∀ i ∈ℐ' and 𝒰_j≥0, ∀ j ∈𝒥.
* (ii) Incentive compatibility (IC): each UAV can obtain its maximum utility when truthfully choosing its bid strategy, i.e., 𝒰_i(Φ̅_i,b_-i) ≥𝒰_i(b_i',b_-i), ∀ b_i' ≠Φ̅_i, i ∈ℐ'.
The UAV recharging auction 𝔸 is envy-free if no UAV can improve its utility by exchanging its allocation with another UAV when the auction ends <cit.>, i.e.,
𝒰_i(g(i),p_g(i)) ≥𝒰_i (g(k),p_g(k)), k≠ i, ∀ i,k∈ℐ',
where g(i) is the identity of allocated UGV to UAV i and p_g(i) is the corresponding payment of UAV i∈ℐ'.
In VWRUNs, the online charging scheduling and pricing (OCSP) problem is to maximize the social surplus while meeting practical constraints, i.e.,
𝐏1:  max_β, p  ∑_i∈ℐ' ∑_j∈𝒥' β_i,j q_j Φ̅_i ,
s.t.
∑_i ∈ℐ' β_i,j ≤ 1, ∀ j ∈𝒥',
∑_j ∈𝒥' β_i,j ≤ 1, ∀ i ∈ℐ',
β_i,j ∈ {0,1}, p_j(b) ≥ 0, ∀ i ∈ℐ', ∀ j ∈𝒥',
∑_u=1^5 α_i^u = 1, ∀ i ∈ℐ',
𝒰_i ≥ 0, 𝒰_j ≥ 0, ∀ i ∈ℐ', ∀ j ∈𝒥',
𝒰_i(Φ̅_i,b_-i) ≥ 𝒰_i(b_i',b_-i), ∀ b_i' ≠ Φ̅_i, i ∈ℐ',
𝒰_i(g(i),p_g(i)) ≥ 𝒰_i(g(k),p_g(k)), ∀ k ≠ i, i ∈ℐ'.
Remark. In 𝐏1, β = [β_i,j]_I(τ) × J(τ) and p=(p_1,⋯,p_J(τ)) are decision variables. Constraint (<ref>) means that a UGV can only offer charging service for at most one UAV during τ. Constraint (<ref>) implies that a UAV can only recharge at most one UGV during τ. Constraint (<ref>) is the state constraint of UAV i. Constraint (<ref>) is the IR constraint, and constraint (<ref>) is the IC constraint.
Both constraints (<ref>)–(<ref>) refer to the strategy proofness, and constraint (<ref>) corresponds to the envy freeness.
The OCSP problem 𝐏1 is NP-hard.
Based on <cit.>, the relaxed version of problem 𝐏1 with constraints (<ref>)–(<ref>) and a constant payment p is a typical set cover problem, which is known to be NP-hard. As such, the original OCSP problem 𝐏1 is also NP-hard. Theorem <ref> is proved.
§.§ Strategy-Proof and Envy-Free Auction Mechanism
As the OCSP problem 𝐏1 is NP-hard, in this subsection, we devise a practical heuristic auction mechanism to derive its near-optimal solution with polynomial complexity, while satisfying strategy proofness and envy freeness.
The key phases of our proposed online auction mechanism are presented in Algorithm <ref>.
Specifically, phase 1 (lines 1–4) evaluates the type information of UAVs and UGVs in the time window τ; phase 2 (lines 5–10) determines the bidding strategy of each UAV and the auction winners; phase 3 (lines 11–13) determines the payment of each UAV in the winner set 𝒲; and phase 4 (lines 14–16) performs wireless charging for each pair of matched UAV and UGV in the winner set and enforces the financial settlement.
Remark. In Eq. (<ref>), 𝒮_ℐ”∖{g(j)}^𝒥” denotes the social surplus when UAV g(j) leaves the auction, and 𝒮_ℐ”∖{g(j)}^𝒥”∖{j} is the social surplus when UAV g(j) participates in the auction without UGV j. Via recursive operations, the payment for UGV j<min{I(τ),J(τ)} can be rewritten as:
p_g(j)(b) = (q_j - q_j+1) b_g(j+1) + p_g(j+1)(b).
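As an illustration of the allocation and pricing logic described above (not the authors' exact algorithm), the following Python sketch sorts UGVs by QoRS and UAVs by their truthful bids, matches them assortatively, and applies the payment recursion above. The base case for the lowest-ranked winner (paying against the first losing bid when one exists, and zero otherwise) is our own assumption.

```python
def run_auction(q, bids):
    """q: QoRS of idle UGVs; bids: average valuations of low-battery UAVs."""
    ugv_order = sorted(range(len(q)), key=lambda j: -q[j])        # UGVs by QoRS, descending
    uav_order = sorted(range(len(bids)), key=lambda i: -bids[i])  # UAVs by bid, descending
    k = min(len(q), len(bids))
    match = [(uav_order[j], ugv_order[j]) for j in range(k)]      # assortative matching g(j) = j

    q_sorted = [q[j] for j in ugv_order]
    b_sorted = [bids[i] for i in uav_order]
    pay = [0.0] * k
    # assumed base case: the lowest-ranked winner pays against the first losing bid, if any
    if len(bids) > k:
        pay[k - 1] = q_sorted[k - 1] * b_sorted[k]
    for j in range(k - 2, -1, -1):                                # recursion for p_g(j)
        pay[j] = (q_sorted[j] - q_sorted[j + 1]) * b_sorted[j + 1] + pay[j + 1]
    return match, pay

print(run_auction(q=[0.9, 0.7, 0.5], bids=[4.0, 6.0, 3.0, 5.0]))
```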
We now analyze the desirable properties of the proposed auction mechanism in terms of strategy-proofness, envy-freeness, allocation stability, and computational complexity in the following theorems and corollaries.
In the proposed auction mechanism, participants always attain non-negative utilities, and the truth-telling bidding strategy is a dominant strategy for all participating UAVs, i.e., 𝔸 is strategy-proof.
According to Definition <ref>, it suffices to prove that both IR and IC hold in 𝔸. We first prove the IR. Obviously, according to Eqs. (<ref>)–(<ref>), the utilities of UAVs and UGVs equal to zero if they do not participate in the auction 𝔸. For the participating UGVs, as the payments to them are always non-negative, their utilities are no less than zero. For any participating UAV i=g(j), its utility can be reformulated as:
𝒰_g(j) =q_j b_g(j) + 𝒮_ℐ”\{g(j)}^𝒥”\{j} - 𝒮_ℐ”\{g(j)}^𝒥”
= ∑_l=1^|𝒥”|q_l b_g(l) - 𝒮_ℐ”\{g(j)}^𝒥”
= ∑_l=j^|𝒥”|-1q_l (b_g(l) - b_g(l+1)) + q_|𝒥”| b_g(|𝒥”|)
≥ 0.
Thereby, for all participating UAVs and UGVs, their utilities are always non-negative. Next, we prove the IC. Notably, the payment decision strategy in our proposed auction mechanism follows the Vickrey–Clarke–Groves (VCG) mechanism. As truth-telling is a well-known property of the VCG mechanism <cit.>, our auction 𝔸 also satisfies the truth-telling property (i.e., IC). Theorem <ref> is proved.
Remark. Theorem 2 shows that our proposed auction algorithm can resist strategic UAVs/UGVs and prevent market manipulation in practical energy recharging services by enforcing IC constraints. Besides, as IR constraints are satisfied, individual UAVs/UGVs can be motivated to join the energy recharging system to gain benefits.
The proposed auction 𝔸 is envy-free, if (i) b_g(j)∈[Φ̅_g(j), Φ̅_g(j-1)], 1<j≤ K, and (ii) g(j) = j, 1≤ j≤ K hold.
According to Definition <ref>, it suffices to prove that any UAV g(j)∈𝒲 is indifferent between charging at UGV j at price p_g(j) and charging at UGV l at price p_g(l), where j ≠ l. Without loss of generality, we consider the following two cases.
Case 1: j>l. In this case, as b_g(j)≥Φ̅_g(j) and q_j≤ q_l, we have ( q_j-q_l ) Φ̅ _g( j )≥( q_j-q_l ) b _g( j ) and ( q_j-2-q_j-1)b _g( j-1 )≥( q_j-2-q_j-1)b _g( j ). The utility difference of UAV g(j) between charging at UGV j and charging at UGV l is:
Δ𝒰_j,l = q_jΦ̅ _g( j )-p_g( j )-[ q_lΦ̅ _g( j )-p_g( l )]
=∑_k=l^j-1( q_k-q_k+1) b_g( k+1 )+( q_j-q_l ) Φ̅ _g( j )
≥∑_k=l^j-1( q_k-q_k+1) b_g( k+1 )+( q_j-q_l ) b_g( j ).
If j = l + 1, the following inequality holds:
Δ𝒰_j,l≥( q_j-1-q_j ) b_g( j ) + ( q_j-q_j-1) b_g( j )≥ 0.
If j > l + 1, via recursive operations, we have
∑_k=l^j-1( q_k-q_k+1) b_g( k+1 )+( q_j-q_l ) b_g( j )
≥∑_k=l^j-2( q_k-q_k+1) b_g( k+1 )+( q_j-1-q_l ) b_g( j )
≥⋯≥( q_l-q_l ) b_g( j )=0.
Hence, it can be concluded that Δ𝒰_j,l≥ 0.
Case 2: j<l. In this case, as b_g(j)≤Φ̅_g(j-1) and q_j≥ q_l, we have ( q_j-q_l) Φ̅_g(j)≥( q_j-q_l) b_g( j+1 ) and ( q_l-q_l-1) b_g( l )≥( q_l-q_l-1) b_g( l-1 ).
Then, we can obtain:
Δ𝒰_j,l = q_jΦ̅ _g( j )-p_g( j )-[ q_lΦ̅ _g( j )-p_g( l )]
=∑_k=j^l-1( q_k+1-q_k ) b_g( k+1 )+( q_j-q_l ) Φ̅ _g( j )
≥∑_k=j^l-1( q_k+1-q_k ) b_g( k+1 )+( q_j-q_l ) b_g( j+1 ).
If j = l - 1, the following inequality holds:
Δ𝒰_j,l≥( q_j+1-q_j ) b_g( j+1 ) + ( q_j-q_j+1) b_g( j+1 )≥ 0.
Otherwise, if j < l - 1, via recursive operations, we have
∑_k=j^l-1( q_k+1-q_k ) b_g( k+1 )
≥∑_k=j^l-2( q_k+1-q_k ) b_g( k+1 ) + ( q_l-q_l-1)b_g( l-1 )
≥∑_k=j^l-3( q_k+1-q_k ) b_g( k+1 ) + ( q_l-q_l-2)b_g( l-2 )
≥⋯≥( q_l-q_j ) b_g( j+1 ).
Hence, Δ𝒰_j,l≥ 0 holds. Theorem <ref> is proved.
In the proposed auction mechanism, the truth-telling equilibrium is also an envy-free Nash equilibrium (EFNE).
As the truth-telling equilibrium satisfies b_g(j) = Φ̅_g(j) and b_g(j) < Φ̅_g(j-1), given g(j) = j (1≤ j≤ K), then we have Δ𝒰_j,l≥ 0 if j>l and Δ𝒰_j,l> 0 if j<l. Thereby, the truth-telling equilibrium is an EFNE. Corollary <ref> is proved.
The outcome of the proposed auction mechanism is a stable assignment.
It suffices to prove that no UAV can gain an improved profit by abandoning its assigned UGV in the auction and re-matching with another UGV for battery recharging. According to Theorem <ref>, in both cases we have q_jΦ̅ _g( j )-p_g( j )-[ q_lΦ̅ _g( j )-p_g( l )] ≥ 0 under the given constraints. Hence, the assignment outcome produced by our proposed auction mechanism is stable. Corollary <ref> is proved.
The overall computational complexity of the proposed auction mechanism yields 𝒪(I(τ)log(I(τ)) + J(τ)log(J(τ)) + |W(τ)|^2).
In phase 1 of the auction mechanism, the computational complexity of the type evaluation of UAVs and UGVs is 𝒪(I(τ) + J(τ)). In phase 2, the sorting of UGVs' types and UAVs' valuations yields a complexity of 𝒪(I(τ)log(I(τ)) + J(τ)log(J(τ))), while the allocation process has a complexity of 𝒪(K), where K = min{I(τ),J(τ)}. In phase 3, the payment determination for winners has a complexity of 𝒪(|W(τ)|^2), where |W(τ)| is the number of winners in an auction with time window τ. Thereby, the overall computational complexity is 𝒪(I(τ)log(I(τ)) + J(τ)log(J(τ)) + |W(τ)|^2). Theorem <ref> is proved.
§ PERFORMANCE EVALUATION
In this section, we first introduce the simulation settings, then we discuss the numerical results.
§.§ Simulation Setup
We consider a 3D simulation area of 5000×5000×10 m^3, where the sensing spot is located at the center of the area and the sensing task area is a circle with radius 200 m. UAVs fly over the sensing task area at constant altitudes in [5,10] m. UGVs are located outside the sensing task area, and their distances to the sensing spot are uniformly distributed in [0.3,2.5] km. One base station located at (3000,3000,50) offers wireless communication services for UAVs and UGVs in the considered area. The time window in the auction is set as τ=8 seconds. The UAV battery capacity is set as C_i = 97.58 Wh <cit.>, and the alert battery SoC level is s_min=20%. The current battery SoC of UAVs follows a uniform distribution in [30%,100%]. The wireless energy efficiency of UGVs is set as η_j = 80%. The velocity of UGVs varies between 20 km/h and 60 km/h. The maximum flying velocity of a UAV is set as v_max = 10 m/s. The UAV's flying power parameters and channel parameters are set according to <cit.>. The normalized QoRS is adopted, and a UGV's QoRS is computed according to its normalized distance to the sensing spot. The rendezvous for each matched pair of UAV and UGV is generated at the midpoint between the sensing spot and the corresponding UGV. A linear function is adopted to model a UAV's valuation of recharging, i.e., Φ_i[t] = μ_0 + μ_1 ρ_i[t], where the parameters are set as μ_0=1 and μ_1=5.
We compare the proposed scheme with the following two conventional schemes.
* Exhaustive optimal scheme: it exhaustively searches the optimal allocation outcomes for UAVs and UGVs in the OCSP problem 𝐏1 without the envy-free constraint (<ref>).
* Static WPT scheme: it replaces the UGVs with static WPT facilities in the simulation area for UAV charging services. The auction process between UAVs and static WPT facilities is similar to our proposed auction between UAVs and UGVs.
§.§ Numerical Results
In Figs. <ref>–<ref>, we evaluate the satisfaction level of UAVs, the utility of UAVs, and the social surplus in the proposed scheme, compared with the conventional schemes. Then, we evaluate the effect of the auction time window in Fig. <ref>. After that, in Tables <ref>–<ref>, we evaluate the strategy proofness and envy freeness of the proposed scheme. The satisfaction level of UAVs is defined as:
SL =∑_i=1^I(τ)∑_j=1^J(τ)β_i,jq_j ρ_i(τ), ρ_i(τ) =⌊τ/t⌋^-1·∑_t∈τρ_i[t].
Besides, the non-envy ratio is adopted to measure the envy freeness of the assignment outcomes; it is defined as the ratio of the number of UAVs that do not envy any other UAV's assignment to the total number of participating UAVs in the auction.
Fig. <ref> and Fig. <ref> show the satisfaction level and utility of UAVs in three schemes, respectively, when the number of UGVs in the auction grows from 6 to 14. In these two simulations, the number of UAVs in the auction is set as 10.
As seen in these two figures, the proposed scheme outperforms the static WPT scheme in attaining a smaller gap with the exhaustive optimal approach. It can be explained as follows.
Compared with the static WPT facilities, our proposed scheme utilizes mobile UGVs to dynamically generate the rendezvous points for UAV launching and battery recharging, thereby saving flying cost for UAVs. Besides, in both figures, the UAVs' satisfaction level and total utility increase as the number of UGVs increases.
The reason is that when more UGVs participate in the auction, each UAV has a higher chance of matching a more preferred UGV for battery recharging. Thereby, UAVs enjoy a higher satisfaction level of the charging service and obtain higher utilities.
Fig. <ref> depicts the social surplus in the three schemes, where the number of UGVs in the auction varies between 6 and 14. Here, the number of UAVs in the auction is 10. From Fig. <ref>, it can be observed that the proposed scheme attains a higher social surplus than the static WPT scheme. The reason is that in the static WPT scheme, the WPT facilities in the simulation area are static, causing a higher round-trip cost for UAVs than in our UGV-assisted WPT scheme. Additionally, given more UGVs, the chances for both UAVs and UGVs to match their more preferred partner are higher, so the overall utility of UAVs and UGVs (i.e., the social surplus defined in Eq. (<ref>)) is greater.
Fig. <ref> illustrates the utility of UAVs in the proposed scheme when both the auction time window τ and number of UGVs vary. As seen in Fig. <ref>, the longer auction time window can result in higher UAV utilities when the number of UGVs is fixed. The reason is that, in our proposed auction scheme, the bid collection process ends if the time window τ elapses. As such, given the longer auction time window, more bids of UAVs and UGVs can be included in the current auction to help them make better energy matching choices. Besides, the utility of UAVs grows with the number of UGVs, which has been analyzed in Fig. <ref>.
Table <ref> compares the UAV utility under truthful bidding and untruthful bidding in the proposed auction. As seen in Table <ref>, the utility of the randomly selected UAV when bidding truthfully is greater than that under untruthful bidding in both small-scale and large-scale auctions. This indicates that bidding truthfully is the dominant strategy for UAVs, which validates the strategy proofness of our auction mechanism and conforms to Theorem <ref>.
Table <ref> compares the ratio of non-envy UAVs in two schemes under small-scale auction (i.e., I(τ)=5,J(τ)=5) and large-scale auction (i.e., I(τ)=20,J(τ)=20). As observed in Table <ref>, the proposed scheme outperforms the exhaustive optimal approach and enforces envy freeness for all UAVs under both small-scale and large-scale auctions, which also conforms to the theoretical results in Theorem <ref>.
§ CONCLUSION
UAV's limited flight endurance is one of the main impediments to modern UAV applications. By leveraging ground vehicles mounted with WPT facilities on their roofs, this paper has proposed a mobile and collaborative recharging scheme for UAVs to facilitate on-demand wireless battery recharging. An energy scheduling problem for multiple UAVs and multiple vehicles has been formulated under practical constraints and energy competitions in the highly dynamic network. We have also devised an online auction-based solution with low complexity to allocate and price idle wireless chargers on vehicular roofs in real time. Theoretical analyses have proved that the proposed scheme produces strategy-proof, envy-free, and stable allocation outcomes. Lastly, numerical results validate the effectiveness of the proposed scheme in delivering more satisfactory UAV charging services. For future work, we plan to improve the auction efficiency and investigate the charging scheduling mechanism under more complex situations.
IEEETran
|
http://arxiv.org/abs/2307.03027v1
|
20230706144407
|
Improving Retrieval-Augmented Large Language Models via Data Importance Learning
|
[
"Xiaozhong Lyu",
"Stefan Grafberger",
"Samantha Biegel",
"Shaopeng Wei",
"Meng Cao",
"Sebastian Schelter",
"Ce Zhang"
] |
cs.LG
|
[
"cs.LG",
"cs.CL",
"cs.IR"
] |
Improving Retrieval-Augmented Large Language Models via Data Importance Learning
August 1, 2023
================================================================================
Retrieval augmentation enables large language models to take advantage of external knowledge, for example on tasks like question answering and data imputation. However, the performance of such retrieval-augmented models is limited by the data quality of their underlying retrieval corpus. In this paper, we propose an algorithm based on multilinear extension for evaluating the data importance of retrieved data points.
There are exponentially many terms in the multilinear extension, and one key contribution of this paper is a polynomial time algorithm that, given a retrieval-augmented model with an additive utility function and a validation set, exactly computes the data importance of data points in the retrieval corpus using the multilinear extension of the model's utility function.
We further propose an even more efficient (ϵ, δ)-approximation algorithm.
Our experimental results illustrate that we can enhance the performance of large language models by only pruning or reweighting the retrieval corpus, without requiring further training. For some tasks, this even allows a small model (e.g., GPT-JT), augmented with a search engine API, to outperform GPT-3.5 (without retrieval augmentation).
Moreover, we show that weights based on multilinear extension can be computed efficiently in practice (e.g., in less than ten minutes for a corpus with 100 million elements).
§ INTRODUCTION
Large language models (LLMs) consisting of neural networks with billions of parameters and trained on vast quantities of unlabelled text are the basis of unprecedented progress in natural language processing tasks <cit.>. With zero-shot or few-shot prompting, LLMs can be adopted for a wide range of diverse tasks, such as question answering <cit.>, summarization <cit.>, and data imputation <cit.>.
Drawbacks of large language models LLMs, however, have two widely acknowledged disadvantages <cit.>. Firstly, despite their impressive capabilities, LLMs perform poorly on tail entities <cit.>, which they have not seen at training time or cannot remember due to limitations of the network capacity. The second drawback is that with the ever-growing number of model parameters, training and fine-tuning costs are exploding as well. As a rough estimate, it costs $80k - $1.6m to train a 1.5-billion-parameter language model <cit.>. This makes it difficult to leverage LLMs for tasks that require regularly updated data or that regularly need to remove privacy-sensitive or copyright-protected data <cit.>.
Retrieval-augmented models To address such problems, retrieval-augmented (RAG) models have recently been proposed <cit.>. A typical retrieval-augmented model consists of two parts, a retriever f_ret and a generator f_gen. Given a retrieval corpus 𝒟_ret = {d_1, ⋯, d_M}, the retriever f_ret retrieves K data points for an input x_i as f_ret(x_i, 𝒟_ret) = {d_α_1, d_α_2, ..., d_α_K}. Here, α_k denotes the rank of each data point in the retrieval corpus assigned by the retriever. The generator f_gen then generates its prediction based on the input and the retrieved data points as evidence, f_gen(x_i, f_ret(x_i, 𝒟_ret)). Recent research indicates that incorporating external knowledge into LLMs improves their performance for various tasks and allows them to easily adapt to new knowledge <cit.>.
Impact of data quality on retrieval-augmented LLMs The performance of retrieval-augmented models is highly limited by the quality of the retrieved data points. For example, GPT-3 is able to give the correct answer “Frank Herbert” to the question “Who is the author of Old Rambling House?” with the help of a retrieved Wikipedia page <cit.>, which contains the sentence "Old Rambling House is a short story by American science fiction author Frank Herbert." However, it would with high probability give the wrong answer if the retrieved page contained incorrect text such as “Old Rambling House is a short story by American science fiction author J. R. R. Tolkien.”
Retrieval corpora are rarely clean in reality (especially if the underlying data comes from the web), and the origin of noise and errors in the data is difficult to track down <cit.>. For example, according to recent estimates, 8.0% to 38.5% of labels in real-world datasets are corrupted <cit.>. In the domain of natural language processing, which relies on raw text, the rapidly growing number of use cases and an increasing amount of text have especially exacerbated data quality issues <cit.>.
Learning the data importance of retrieval sources Given this data quality problem, we propose to improve retrieval-augmented models by learning the data importance of retrieval sources. Let U(·) be the utility function of a retrieval-augmented model with a validation set 𝒟_val = {x_1, x_2, ..., x_N}, and let 𝒟_ret = {d_1, ⋯, d_M} be the underlying retrieval corpus of M data points. The performance of the model can be written as:
U(f_gen, f_ret, 𝒟_val, 𝒟_ret) := ∑_x_i ∈𝒟_val U( f_gen(x_i, f_ret(x_i, 𝒟_ret)) )
Our goal to is find a subset 𝒮 of the retrieval corpus 𝒟_ret that maximizes the utility function U(f_gen, f_ret, 𝒟_val, 𝒮). We leave out f_gen, f_ret, and 𝒟_val from the notation and use U(𝒮) for readability. It is hard to solve this combinatorial optimization problem since it requires enumerating exponentially many possible subsets 𝒮. One natural way is to change this problem to an optimization problem on continuous functions. Therefore, we define the multilinear extension of the utility function as:
Ũ(w_1,⋯,w_M) := ∑_𝒮⊆𝒟_ret U(𝒮) · ∏_d_i ∈𝒮 w_i ∏_d_i ∉𝒮 (1 - w_i) = ∑_𝒮⊆𝒟_ret U(𝒮) · P[𝒮]
Here, P[𝒮] denotes the probability of the sampled retrieval corpus 𝒮⊆𝒟_ret based on the weights w_1,⋯,w_M. Our goal is to find the optimal weights w_1,⋯,w_M that maximize the multilinear extension of the utility function:
max_w_1,⋯,w_M ∈ [0, 1]Ũ(w_1,⋯,w_M)
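As a purely illustrative sketch of this definition (not the algorithm proposed in this paper), the multilinear extension can be evaluated by brute force for a tiny corpus; the function and variable names below are our own.

```python
from itertools import combinations

def multilinear_extension(utility, corpus, weights):
    """utility: maps a tuple of retained corpus elements to a scalar U(S);
    weights: sampling probability w_i for each corpus element."""
    total = 0.0
    m = len(corpus)
    for r in range(m + 1):
        for subset in combinations(range(m), r):
            prob = 1.0
            for i in range(m):
                prob *= weights[i] if i in subset else (1.0 - weights[i])
            total += prob * utility(tuple(corpus[i] for i in subset))
    return total

# Example: utility counts how many retained elements are "relevant"
corpus, relevant = ["a", "b", "c"], {"a", "c"}
print(multilinear_extension(lambda s: sum(x in relevant for x in s), corpus, [0.9, 0.2, 0.5]))
```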
The optimal weights can be found with textbook optimization methods like gradient descent. This, however, requires enumerating exponentially many sample sets, making the problem infeasible in practice. We tackle this challenge with the following main contributions of this paper:
* We present an efficient algorithm to compute weights for a large family (but not all) of retrieval-augmented models with additive utility functions. Our algorithm has polynomial time complexity and does not depend on the retrieval corpus size (Sections <ref> & <ref>), even though there are exponentially many terms in Equation <ref>.
* We introduce an efficient estimation algorithm to compute the (ϵ,δ)-approximation of weights for a large family of retrieval-augmented models (<Ref>).
* We experimentally demonstrate that retrieval augmentation and data evaluation based on multilinear extension improve the performance of large language models in question answering and data imputation tasks. The experiments demonstrate that with external retrieval knowledge, small language
models can yield comparable performance to large language models. Furthermore, our evaluation shows that weights based on multilinear extension can identify noisy data and help models adapt to new sources of knowledge (<Ref>).
* Our implementation of the algorithm illustrates that weights based on multilinear extension can be calculated very fast in practice, even for a large corpus with 100 million data points (<Ref>).
* We provide the source code of our implementation and experiments under <https://github.com/amsterdata/ragbooster>.
§ ALGORITHMS FOR DERIVING GRADIENTS
We can find the optimal weights for the multilinear extension of the utility function via computing the gradient of a particular weight w_i based on a validation set 𝒟_val:
∂Ũ/∂ w_i = ∑_𝒮⊆𝒟_ret∖ d_i ( U(𝒮∪{d_i}) - U(𝒮) ) · P[𝒮]
 = 1/|𝒟_val| · ∑_x_val∈𝒟_val G(x_val, w_i),
where G(x_val, w_i) := ∑_𝒮⊆𝒟_ret∖ d_i ( U_x_val(𝒮∪{d_i}) - U_x_val(𝒮) ) · P[𝒮].
Infeasibility of a naive implementation However, computing the gradients in <Ref> is challenging. A naive implementation would have to enumerate all possible subsets 𝒮 for each validation tuple x_val∈𝒟_val to compute the contribution of this subset 𝒮 to the gradient value G(x_val, w_i). Such a naive implementation is infeasible in practice due to its inherent exponential time complexity.
Efficient weight computation for retrieval-augmented models As discussed before, we focus on a specific family of machine learning models, called retrieval-augmented (RAG) models. Retrieval-augmented models benefit from locality: the predictions of retrieval-augmented models for an input sample are only determined by the Top-K closest data points in the retrieval corpus and the answer generator. Combined with additive utility functions (which are common for both classical KNN and state-of-the-art RAG models), this allows us to efficiently compute exact gradients within polynomial time complexity (<Ref> and <Ref>). In <Ref>, we show that we only have to consider a small subset of data points for each validation tuple and that the time complexity only depends on K instead of the retrieval corpus size M if we apply an ϵ-approximation. Finally, we propose an (ϵ,δ)-approximation algorithm in <Ref> to calculate gradients for general utility functions.
§.§ Exact Gradient Calculation for Models with an Additive Utility Function
A textbook K-nearest neighbor classifier and many state-of-the-art retrieval-augmented models <cit.> can be viewed as models with additive utility functions. In this section, we present a polynomial time complexity algorithm to compute the exact gradient of the weights of the multilinear extension of the utility function. We follow existing work <cit.> to define the additive utility function of a retrieval-augmented model as:
U_x_val(𝒮) = 1/K∑_k=1^min(K,|𝒮|)U_x_val(f_gen(d_α_k^x_val(𝒮)))
Here, α_k^x_val(𝒮) represents the index of the data point which is the kth closest to x_val among all the data points retrieved by f_ret from 𝒮. From now on, we abbreviate α_k^x_val(𝒮) to α_k. U_x_val(f_gen(d_α_k^x_val(𝒮))) denotes the utility function for the output generated based on the validation tuple x_val and the single data point d_α_k^x_val. We assume that the possible values of the U(·) function are within a countable finite set 𝒱, where |𝒱| = V, and leave out f_gen from the notation for readability in the following. In this scenario, we can provide an algorithm with PTIME time complexity in <Ref>. The overall time complexity of the algorithm is 𝒪(N·(M logM + M K^2 + M K V )).
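For illustration, a minimal sketch of the additive utility for one validation tuple, assuming the retriever simply ranks the sampled corpus by a given similarity score (the names and inputs are hypothetical):

```python
def additive_utility(similarities, per_point_utilities, K):
    """similarities: retriever scores for the data points in the sampled corpus S;
    per_point_utilities: U_xval(f_gen(d)) for each of those data points."""
    ranked = sorted(zip(similarities, per_point_utilities), key=lambda su: -su[0])
    # note: divided by K even when |S| < K, as in the equation above
    return sum(u for _, u in ranked[:K]) / K

print(additive_utility([0.9, 0.4, 0.7, 0.1], [1, 0, 1, 1], K=2))  # -> 1.0
```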
§.§ ϵ-approximation Algorithm for Calculating Exact Gradient Values
The overall time complexity for computing gradients for models with an additive utility function is 𝒪(N·(M logM + M K^2 + M K V )).
In this section, we show that if we are allowed to do ϵ-approximations, we can significantly speed up the calculation of the gradients ∂Ũ/∂ w_i. We will only introduce the main idea here, leaving the details in <Ref>.
If we calculate the ϵ-approximation Ĝ(x_val, w_i) for each G(x_val, w_i), we can obtain the ϵ-approximation for ∂Ũ/∂ w_i as the average of the Ĝ(x_val, w_i).
See <Ref>.
Our next step is to detail how to compute the ϵ-approximation for G(x_val, w_i).
One observation is that the absolute value of G(x_val, w_i) is bounded by the total probability of the data point d_i being in the K-nearest neighbor set of x_val. Notice that for a data point with a lower rank, the probability of it being in the K-nearest neighbor set is smaller. Therefore, we can define the boundary point d_b of the retrieval corpus.
(Boundary Point)
Given a validation tuple x_val and the retrieval corpus 𝒟_ret = {d_1, ⋯, d_M} ranked with respect to x_val, the boundary point d_b is the data point with the highest rank in the sorted corpus such that any data point with a lower rank than d_b has a probability of less than ϵ to be in the K-nearest neighbor set of x_val.
In practice, after we rank the corpus with respect to a validation tuple, we can use binary search to find the boundary point. After we find this boundary point d_b, we can use 0 as the ϵ-approximation of the gradient for data points with a lower rank, i.e., Ĝ(x_val, w_i) = 0 for i ∈{b, ..., M}, because the probability of those data points being in the K-nearest neighbor set is less than ϵ. In the following, we show the approximation for data points with a higher rank.
Given the validation tuple x_val, the retrieval corpus 𝒟_ret = {d_1, ..., d_M}, the boundary point d_b, and the weights W = {w_1, ..., w_M}, if we have an algorithm 𝒜 to calculate the G(x_val, w_i) = 𝒜(x_val,𝒟_ret, W ), then Ĝ(x_val, w_i) = 𝒜(x_val,{d_1, ..., d_b},{w_1, ..., w_b}) is the ϵ-approximation for G(x_val, w_i).
See <Ref>
From <Ref>, we can compute the ϵ-approximation for every data point by discarding the outlier points {d_b, d_b+1, ..., d_M}. This reduces the time complexity from 𝒪(N·(M logM + M K^2 + M K V )) to 𝒪(N·(B logB + B K^2 + B K V )), where B is the rank of the boundary point.
If the value of all w_i is greater than a certain constant λ, then the index of the boundary point B is 𝒪(K).
See <Ref>
The above theorem shows that if all weights W are greater than a certain constant, the scale of B is only related to K instead of the size of the retrieval corpus M. This means that even though we may have millions of data points in the retrieval corpus, we only have to consider O(K) data points with the highest rank for a validation tuple. The overall time complexity for computing the approximate gradients for models with additive utility functions is 𝒪(N·(K logK + K K V )) = 𝒪(N· K^2 · V). This significantly speeds up their computation.
§.§ (ϵ, δ)-approximation Algorithm for Models with General Utility Functions
Next, we provide a solution for efficiently approximating gradients for retrieval-augmented models with a general utility function. As proposed in the previous section, for every validation tuple we can find the boundary point of the retrieval corpus. When a data point has a lower rank than the boundary point, the ϵ-approximation of its gradient is 0. Using the Markov chain Monte Carlo method, we can calculate an approximation of the gradients for a retrieval-augmented model with a general utility function. In light of the fact that 0 is the approximate value for most data points, we only need to perform MCMC on a small number of data points. Detailed proofs and algorithms are provided in <ref>.
§.§ Projected Gradient Descent for Weights on a Data Source Level
Exact gradients for a grouped retrieval corpus In the previous section, we introduced the algorithm for calculating the gradients of the weights of the multilinear extension of the utility function. We also proved that each validation tuple only contributes gradients to a small part of the retrieval corpus. A further problem is how to evaluate the quality of the data points which are not retrieved for the validation tuples. In real-world ML applications, a retrieval corpus is commonly generated from various data sources. For example, data points in the retrieval corpus may come from the same labeler, the same website, or the same database. As a consequence, we can evaluate data quality at this level, which we call the source level. This has the additional advantage that we do not have to inspect every data point before identifying whether the data is useful. We formulate the corresponding problem as follows. Given a series of data sources for the retrieval corpus 𝒪_ret = {o_1, o_2, ..., o_M}, the generated retrieval corpus can be represented as a function of these sources, D_ret = ⋃_i=1^M f_source(o_i). We detail how to compute the exact gradient of the weights for the K-nearest neighbor classifier and a grouped corpus in <Ref>. The time complexity of the algorithm is 𝒪(N · T^2· M^2), where T is the size of the generated retrieval corpus.
Projected gradient descent for weights on a grouped corpus In general, given the retrieval corpus and the validation set, we can use a textbook batch gradient descent algorithm to find the optimal weights for the data points in the retrieval corpus. From the previous paragraph, we can see that computing the exact gradient values for a grouped retrieval corpus with several data sources can be computationally expensive. Therefore, we propose a projected gradient descent algorithm to efficiently learn the optimal weights for a retrieval corpus generated from data sources. Given the generated retrieval corpus represented as a function of the sources {o_1, o_2, ..., o_M}, D_ret = ⋃_i=1^M f_source(o_i), we assign a weight to each data point in the generated retrieval corpus D_ret. Supposing there are m_i data points in f_source(o_i), we assign the weights {w_i,1, w_i,2, ..., w_i,m_i} to the data points in f_source(o_i). The original optimization problem can be relaxed to a constrained optimization problem as detailed below:
max_w_1,1,⋯,w_M,m_M ∈ [0, 1] Ũ(w_1,1, ..., w_M,m_M)
s.t.  w_1,1 = w_1,2 = ⋯ = w_1,m_1,
      w_2,1 = w_2,2 = ⋯ = w_2,m_2,
      ⋯
      w_M,1 = w_M,2 = ⋯ = w_M,m_M.
To find the optimum of this function, we use the existing algorithm for a non-grouped corpus to compute the gradient of each weight w_i,j. After we update the parameters using the gradients, we project the updated w_i,j to satisfy the constraints by computing ŵ_i = (1/m_i)∑_j w_i,j and setting every w_i,j to ŵ_i. Therefore, we can utilize the algorithm introduced in <Ref> to calculate the gradients and then compute the average. For retrieval-augmented models with additive utility functions, the time complexity becomes 𝒪(N· K^2 · V + T).
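A minimal sketch of one such projected-gradient step on source-level weights, assuming per-data-point gradients are already available (the clipping to [0,1] and the variable names are our own choices):

```python
def projected_step(weights_per_source, grads_per_source, lr):
    """weights_per_source / grads_per_source: one list of per-data-point values per source."""
    new_weights = []
    for w_group, g_group in zip(weights_per_source, grads_per_source):
        # gradient ascent step on each data point weight, clipped to [0, 1]
        updated = [min(1.0, max(0.0, w + lr * g)) for w, g in zip(w_group, g_group)]
        # projection onto the constraint set: equal weights within a source (group average)
        shared = sum(updated) / len(updated)
        new_weights.append([shared] * len(updated))
    return new_weights
```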
§ EXPERIMENTAL EVALUATION
We conduct a series of experiments for question answering and data imputation tasks. We confirm in <Ref> that retrieval augmentation enhances the performance of large language models. <Ref> and <Ref> show that the multilinear extension weights help us identify noisy/incorrect data in the retrieval corpus, and that pruning or reweighting the retrieval corpus accordingly improves performance without the need to fine-tune the underlying model. The runtime of the algorithm is examined in <Ref>, where we showcase that the weights can be computed very fast in practice. We provide the source code of our implementation and experiments under <https://github.com/schelterlabs/retrieval_importance>.
Datasets and tasks For question answering, we leverage the <cit.> dataset, in which questions are extracted from Wikipedia pages using relation pairs. The answer to each question in this dataset can be found on Wikipedia. For example, for the relation "author", a question is "The author of Nimmer on Copyright is ?". We filter out relations with less than 500 questions and use each of the remaining 70 relations as a separate downstream task. In data imputation, the task is to predict missing values of a relational table <cit.>. We experiment with two common benchmark datasets for this task: , where the column of a table about restaurants must be imputed, and , where we have to impute the column in a table about electronics products. For each experimental run on a question answering or data imputation task, we randomly split the dataset into validation dataset and test dataset with an equal number of tuples. We repeat this for 64 different random seeds, and report the mean accuracy. For the zero-shot baselines in the imputation tasks, we use the prompts suggested in <cit.>.
Language models We leverage the language model GPT-JT <cit.> with 6 billion parameters, which we enhance with retrieval augmentation. As a reference, we compare this to the language model “text-davinci-003” (to which we refer as GPT-3.5) from OpenAI's commercial GPT-3.5 family <cit.>. For both language models, we generate predictions with zero-shot or few-shot prompting, without further fine-tuning.
Retrieval augmentation We leverage the Microsoft Bing search engine <cit.> to generate a retrieval corpus for each task. We create a query from each validation/test sample (e.g., the question to answer) and retrieve the first 50 websites together with their textual snippets provided by Bing as retrieved data points for the sample. We sort these according to the ranking score provided by Bing. We create a few-shot prompt from each retrieved data point, and generate an answer for the corresponding validation sample via GPT-JT. We decide on the final prediction via a majority vote over the generated answers from the top-K websites.
Reweighting or pruning the retrieval corpus In experiments which reweight or prune the retrieval corpus based on multilinear extension weights, we proceed as follows. We choose K = 10 and set the initial weight to 0.5. We group the retrieved websites by their domain name, and run the projected gradient descent algorithm from <Ref> for 50 iterations with a learning rate of 500 on the validation dataset to compute the optimal weights. Next, for reweighting, we compute the expectation of the accuracy on the test set by randomly sampling the retrieved data points 32 times based on the learned weights to form the retrieval corpus. For pruning, we remove retrieved data points with a learned weight below a certain threshold (tuned on the validation set) before computing predictions on the test set via majority vote. We use the leave-one-out (LOO) error as a baseline to refine the retrieval corpus. We compute the change in accuracy for the removal of each individual data source and finally remove all data sources with a LOO error below a certain threshold (tuned on the validation set) before computing predictions on the test set.
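The refinement step itself is lightweight; the following hedged sketch shows domain-level pruning followed by the majority vote, with hypothetical names and an assumed default weight of 0.5 for unseen domains:

```python
def prune_corpus(retrieved, domain_weights, threshold):
    """retrieved: list of (domain, answer) pairs, ranked by the search engine."""
    return [(d, a) for d, a in retrieved if domain_weights.get(d, 0.5) >= threshold]

def majority_vote(retrieved, K=10):
    answers = [a for _, a in retrieved[:K]]
    return max(set(answers), key=answers.count) if answers else None

retrieved = [("siteA.org", "Frank Herbert"), ("siteB.net", "J. R. R. Tolkien"),
             ("siteC.com", "Frank Herbert")]
weights = {"siteA.org": 0.9, "siteB.net": 0.1, "siteC.com": 0.7}
print(majority_vote(prune_corpus(retrieved, weights, threshold=0.5)))
```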
§.§ Benefits of Retrieval Augmentation
Experimental setup The aim of this experiment is to confirm the well-known fact that retrieval augmentation alone already enhances the performance of language models. We compare the performance of GPT-3.5 without retrieval augmentation to the performance of GPT-JT with retrieval augmentation on the question answering and data imputation tasks.
Results and discussion The results for question answering are shown in <Ref>. The mean accuracy of GPT-3.5 over all 70 relations is 0.33, which outperforms the mean accuracy of 0.21 achieved by GPT-JT standalone. However, retrieval augmentation raises the mean accuracy of GPT-JT to 0.33, making it competitive with the 30x larger GPT-3.5. The smaller model even outperforms the larger model in the majority of relations (39 out of 70; detailed results available in <Ref>). We encounter analogous behavior for data imputation in <Ref>, where retrieval augmentation (vanilla) makes the small GPT-JT model competitive with the 30x larger model, and even outperforms it on both datasets.
§.§ Improving Performance with Multilinear Extension Weights
Experimental setup Next, we showcase that pruning or reweighting the retrieval corpus based on multilinear extension weights improves performance without having to fine-tune the underlying model. We group the retrieved websites by domain and refine the corpus as detailed earlier.
Results and discussion The results for question answering are shown in Table <ref> (detailed results in <Ref> and <Ref>), and confirm that reweighting and pruning using the learned weights increase test accuracy. The mean accuracy of the GPT-JT model with retrieval augmentation increases from 33.3% to 37.7% after pruning (and to 36.9% after reweighting) using the multilinear extension weights, and it clearly outperforms the state-of-the-art GPT-3.5 model with 175 billion parameters. In all 70 relations, the performance improved using multilinear extension weights, with 71.5% of the retrieval corpus removed on average. Analogously, we find that the performance in the data imputation tasks is improved by pruning based on the learned weights as well (<Ref>). For both datasets, the smaller model outperforms GPT-3.5 by more than 5% in test accuracy. These results confirm that the performance of retrieval-augmented models can be further optimized by evaluating the quality and reliability of real-world data sources in their underlying corpus.
§.§ Mitigating the Impact of Noise in the Retrieval Corpus
Experimental setup The aim of the following experiment is to demonstrate how the learned weights assist us with mitigating the impact of noise in the retrieval corpus. To achieve this, we manually inject noise into the retrieval corpus of the question-answering task as follows. We create five copies of the retrieval corpus for each question with noise levels ranging from 0% to 80% (resulting in around 250 retrieved websites per question, of which 40% are corrupted). To inject noise, we randomly replace the correct answer in the retrieved websites with an incorrect one according to the noise level. Then, for each copy, we randomly split the corpus into ten sources according to rank. Now we have 5 · 10 different sources in total with different noise levels. We expect a performance drop when using the dirty corpus and aim to demonstrate how data evaluation can help us restore performance.
Results and discussion As shown in <Ref>, the performance drops from 33.3% on the clean corpus to 27.0% on the dirty corpus with injected noise. Using the leave-one-out error to remove noisy sources improves performance to 31.1%. Both reweighting and pruning using the learned weights drastically improve the performance on the dirty corpus, raising it to over 33.0%. Pruning even results in a better performance of 33.5% compared to the clean corpus without pruning. The results show that even if we are faced with a situation where nearly half of the retrieval corpus is noisy, multilinear extension weights can help the model reach performance comparable to the clean corpus.
Table: Accuracy impact of additional fabricated data sources for question answering.
GPT-JT (6B) w/ Retrieval: 0.333
GPT-JT (6B) w/ Retrieval + Fabricated Data: vanilla 0.382, +loo 0.399, +reweight 0.410, +prune 0.418
GPT-3.5 (175B): 0.339
Figure: Runtime per epoch on corpora with up to 100M elements.
§.§ Handling Auto-Generated Data Sources in the Retrieval Corpus
Experimental setup Next, we illustrate how the learned weights allow us to handle new sources in the retrieval corpus for question answering. We manually generate five synthetic Wikipedia pages for each question using the OpenAI “text-davinci” generator. We adopt the real Wikipedia pages as few-shot examples, add the fabricated sources to the retrieval corpus, and give them the highest rank among the websites. We aim to show that when new knowledge is added to the corpus, the learned weights help us to utilize the sources based on their quality.
Results and discussion <Ref> shows the results of this experiment. We find that adding fabricated Wikipedia pages to the corpus increases the accuracy from 33.3% to 38.2%. This is due to the fact that the OpenAI model itself can reach 33.9% and most Wikipedia pages contain the correct information if the model memorizes the answer. We see, however (e.g., for the relation "place of death"), that adding generated Wikipedia pages will decrease the performance from 38.3% to 33.8%. Using LOO to prune the retrieval corpus improves performance by 39.9% on average. Reweighting or pruning using the learned multilinear extension weights achieves the highest accuracy of 41.0% and 41.8%, improving the performance on the corpus without fabricated Wikipedia sources. The results show that the learned weights help the model to easily adapt to new knowledge sources without further training.
§.§ Computational Performance
Experimental setup Finally, we illustrate that the weights can be computed very fast in practice. For that, we implement our approach in Rust (with a Python frontend), and apply several performance optimizations to the code such as parallelization, memory pre-allocation and re-use, operator fusion, and predication <cit.>. We run the experiments on consumer hardware (a machine with a four-core Intel i7-8569U CPU @2.80GHz, 16GB of RAM, and MacOS 12.6). We measure the runtime of our implementation on three relations from the dataset (“author”, “place-of-birth”, “currency”), which contain 1,700-2,700 questions each, with 50 corresponding retrieved answers per question. We additionally run experiments on a synthetic retrieval corpus whose size M = N · b we scale up from 50,000 to 100,000,000 (with a validation set size N from 1,000 to 1,000,000 times b ∈{50, 100} retrieved data points per sample). We run each configuration with one, two, and four threads, repeat each run seven times, and measure the mean execution time per epoch.
Results and discussion For the relations from the dataset, a gradient update only takes between two and four milliseconds. We plot the results for the synthetic corpus in <Ref>. The x-axis is the size of the retrieval corpus M = N · b (size N of the validation set times the number b of retrieved data points per sample) and the y-axis denotes the mean runtime in milliseconds on a logarithmic scale. We see that with all four cores, we can finish an epoch for corpora with up to 10 million elements with sub-second runtime. Even for the largest corpus with 100 million elements, an epoch can be conducted in 6.3 seconds on consumer hardware. Furthermore, we find that the runtime grows linearly with the size of the retrieval corpus and that our implementation easily benefits from parallelism when multiple cores are utilized. This showcases that data refinement using multilinear extension weights is computationally cheaper than model fine-tuning, which (in many cases) has to conduct an expensive backpropagation of errors through the underlying model.
§ CONCLUSION
We presented efficient algorithms to compute the optimal weights that maximize the multilinear extension of the utility function and use them to refine the retrieval corpus for retrieval-augmented large language models. Overall, our results illustrate that the learned weights are a powerful metric for evaluating the quality of the retrieval corpus and that retrieval-augmented models can be enhanced by only pruning the retrieval corpus without further training the underlying model. Furthermore, the weights can be computed efficiently even for a large retrieval corpus, and allow us to easily adapt predictions in cases where new sources are added to the retrieval corpus.
10
alt2019fine
Christoph Alt, Marc Hübner, and Leonhard Hennig.
Fine-tuning pre-trained transformer language models to distantly
supervised relation extraction.
arXiv preprint arXiv:1906.08646, 2019.
bhaskar2022zero
Adithya Bhaskar, Alexander R Fabbri, and Greg Durrett.
Zero-shot opinion summarization with gpt-3.
arXiv preprint arXiv:2211.15914, 2022.
bommasani2021opportunities
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney
von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma
Brunskill, et al.
On the opportunities and risks of foundation models.
arXiv preprint arXiv:2108.07258, 2021.
chentvm2018
Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Meghan
Cowan, Haichen Shen, Leyuan Wang, Yuwei Hu, Luis Ceze, et al.
Tvm: An automated end-to-end optimizing compiler for deep learning.
OSDI, 2018.
cothey2004web
Viv Cothey.
Web-crawling reliability.
Journal of the American Society for Information Science and
Technology, 55(14):1228–1238, 2004.
devlin2018bert
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
Bert: Pre-training of deep bidirectional transformers for language
understanding.
arXiv preprint arXiv:1810.04805, 2018.
frenay2013classification
Benoît Frénay and Michel Verleysen.
Classification in the presence of label noise: a survey.
IEEE transactions on neural networks and learning systems,
25(5):845–869, 2013.
guu2020retrieval
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang.
Retrieval augmented language model pre-training.
In International conference on machine learning, pages
3929–3938. PMLR, 2020.
hong2013computing
Yili Hong.
On computing the distribution function for the poisson binomial
distribution.
Computational Statistics & Data Analysis, 59:41–51, 2013.
jia2019efficient
Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nezihe Merve Gurel, Bo Li,
Ce Zhang, Costas J Spanos, and Dawn Song.
Efficient task-specific data valuation for nearest neighbor
algorithms.
arXiv preprint arXiv:1908.08619, 2019.
karlavs2022data
Bojan Karlaš, David Dao, Matteo Interlandi, Bo Li, Sebastian Schelter,
Wentao Wu, and Ce Zhang.
Data debugging with shapley importance over end-to-end machine
learning pipelines.
arXiv preprint arXiv:2204.11131, 2022.
karpukhin2020dense
Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu,
Sergey Edunov, Danqi Chen, and Wen-tau Yih.
Dense passage retrieval for open-domain question answering.
arXiv preprint arXiv:2004.04906, 2020.
lewis2019bart
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed,
Omer Levy, Ves Stoyanov, and Luke Zettlemoyer.
Bart: Denoising sequence-to-sequence pre-training for natural
language generation, translation, and comprehension.
arXiv preprint arXiv:1910.13461, 2019.
lewis2020retrieval
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim
Rocktäschel, et al.
Retrieval-augmented generation for knowledge-intensive nlp tasks.
Advances in Neural Information Processing Systems,
33:9459–9474, 2020.
liang2022holistic
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu,
Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar,
et al.
Holistic evaluation of language models.
arXiv preprint arXiv:2211.09110, 2022.
Bing
Microsoft.
Bing web search api, 2023.
narayancan2022
Avanika Narayan, Ines Chami, Laurel Orr, and Christopher Ré.
Can foundation models wrangle your data?
PVLDB, 2022.
neumann2011efficiently
Thomas Neumann.
Efficiently compiling efficient query plans for modern hardware.
Proceedings of the VLDB Endowment, 4(9):539–550, 2011.
openai
OpenAI.
Models - openai, 2023.
radford2018improving
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al.
Improving language understanding by generative pre-training.
2018.
2020t5
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael
Matena, Yanqi Zhou, Wei Li, and Peter J. Liu.
Exploring the limits of transfer learning with a unified text-to-text
transformer.
Journal of Machine Learning Research, 21(140):1–67, 2020.
cost2020
Or Sharir, Barak Peleg, and Yoav Shoham.
The cost of training nlp models: A concise overview.
04 2020.
siriwardhana2022improving
Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Tharindu Kaluarachchi,
Rajib Rana, and Suranga Nanayakkara.
Improving the domain adaptation of retrieval augmented generation
(rag) models for open domain question answering.
arXiv preprint arXiv:2210.02627, 2022.
song2022learning
Hwanjun Song, Minseok Kim, Dongmin Park, Yooju Shin, and Jae-Gil Lee.
Learning from noisy labels with deep neural networks: A survey.
IEEE Transactions on Neural Networks and Learning Systems,
2022.
strubell2019energy
Emma Strubell, Ananya Ganesh, and Andrew McCallum.
Energy and policy considerations for deep learning in nlp.
ACL, 2019.
tay2022unifying
Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Dara Bahri, Tal Schuster,
Huaixiu Steven Zheng, Neil Houlsby, and Donald Metzler.
Unifying language learning paradigms.
arXiv preprint arXiv:2205.05131, 2022.
tay2022transcending
Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q Tran, David R So, Siamak Shakeri,
Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, et al.
Transcending scaling laws with 0.1% extra compute.
arXiv preprint arXiv:2210.11399, 2022.
enwiki:1140483446
Wikipedia contributors.
Old rambling house — Wikipedia, the free encyclopedia, 2023.
[Online; accessed 25-April-2023].
yuan2022decentralized
Binhang Yuan, Yongjun He, Jared Davis, Tianyi Zhang, Tri Dao, Beidi Chen,
Percy S Liang, Christopher Re, and Ce Zhang.
Decentralized training of foundation models in heterogeneous
environments.
Advances in Neural Information Processing Systems,
35:25464–25477, 2022.
zamani2022retrieval
Hamed Zamani, Fernando Diaz, Mostafa Dehghani, Donald Metzler, and Michael
Bendersky.
Retrieval-enhanced machine learning.
SIGIR, 2022.
§ EXACT GRADIENT CALCULATION FOR MODELS WITH AN ADDITIVE UTILITY FUNCTION
We will first introduce two building blocks to help calculate the gradient:
(Subset Probability)
Given the retrieval corpus 𝒟_ret = {d_1, ..., d_M} and the weights W = {w_1, ..., w_M}, subset probability P_k(a, b) is the sum of the probability of subsets with a size of k from 𝒟^' = {d_a, d_a+1,...,d_b}.
P_k(a, b) = ∑_|𝒮| = k, 𝒮⊆𝒟^'∏_d_i ∈𝒮 w_i
∏_d_i ∉𝒮 (1 - w_i)
We only use the subset probability values P_·(1, ·) and P_·(·, M). We compute these subset probability values within 𝒪(MK) time complexity, leveraging previous work on efficiently computing Poisson-binomial distribution values <cit.>.
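A possible implementation of this building block is sketched below; it applies the standard Poisson-binomial recurrence and only keeps counts up to K (our own code, with 1-indexed weights as in the paper):

```python
def subset_probabilities(weights, a, b, K):
    """Returns p[k] = probability that exactly k of d_a..d_b are sampled, for k = 0..K."""
    p = [1.0] + [0.0] * K
    for w in weights[a - 1:b]:                # weights are 1-indexed in the paper
        nxt = [0.0] * (K + 1)
        for k in range(K + 1):
            nxt[k] += p[k] * (1.0 - w)        # data point not sampled
            if k + 1 <= K:
                nxt[k + 1] += p[k] * w        # data point sampled
        p = nxt
    return p

print(subset_probabilities([0.5, 0.5, 0.5], a=1, b=3, K=2))  # -> [0.125, 0.375, 0.375]
```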
(Boundary Value Probability)
Given the validation tuple x_val, the retrieval corpus 𝒟_ret = {d_1, ..., d_M}, the weights W = {w_1, ..., w_M} and the possible value set 𝒱 of the utility function, the boundary value probability B_k(i, e) is the sum of the probability of all subsets 𝒮 sampled from 𝒟^' = {d_i, d_i+1,...,d_M} whose k-th element d_α_k(𝒮) is evaluated as e.
B_k(i, e) = ∑_U_x_val(d_α_k(𝒮)) = e, 𝒮⊆𝒟^'∏_d_i ∈𝒮 w_i
∏_d_i ∉𝒮 (1 - w_i)
This term can be calculated via dynamic programming:
B_k(i, e) =
  B_k(i+1, e) · (1 - w_i) + 𝕀{ U_x_val(d_i) = e } · w_i,   if k = 1,
  B_k(i+1, e) · (1 - w_i) + B_k-1(i+1, e) · w_i,            if k > 1.
To make Equation (<ref>) correct for every k ∈ [1, K], i ∈ [1, M] and e ∈𝒱, we initialize the boundary value to B_k(M+1, e)=0 for k ∈[1, K], e ∈𝒱. The time complexity of computing the boundary value probability using the above equation is 𝒪(M K V).
With these two building blocks, we are able to calculate the exact value of G(x_val, w_i). We examine two situations.
(1) |𝒮|<K
In this case, the size of the sampled retrieval corpus 𝒮 is smaller than K. Therefore, including d_i in 𝒮 does not expel any data point from the K-nearest neighbor set. Thus,
U_x_val(𝒮∪{d_i})
- U_x_val(𝒮) =
U_x_val(d_i)/K
The sum of the probability of the respective subsets 𝒮 equals the probability of selecting subsets with sizes less than K from { d_1,...,d_i-1,d_i+1,...,d_M }. The gradient for these sampled subsets 𝒮 can be written as:
G_1(x_val, w_i)
= ∑_|S| <K,𝒮⊆𝒟_ret\ d_iU_x_val(d_i)/K· P[𝒮]
= U_x_val(d_i)/K·∑_k^' = 0^K-1∑_j = 0^k^' P_j(1, i-1)· P_k^'-j(i+1, M)
The time complexity of computing all G_1(x_val, w_i) using the above equation is 𝒪(M K^2).
(2) |𝒮| ≥ K
In this scenario, adding d_i to the sampled corpus 𝒮 expels d_α_K(S) from the K-nearest neighbor set. The corresponding difference in the utility function is:
U_x_val(𝒮∪{d_i})
- U_x_val(𝒮) =
U_x_val(d_i) - U_x_val(d_α_K(S))/K
The gradient contributed by the corresponding sampled subsets can be calculated by enumerating the data point that would be expelled from the K-nearest neighbor set. Suppose d_k^' is the data point to be expelled; then the sum of the probability of the corresponding subsets 𝒮 equals that of selecting K data points from { d_1,...,d_i-1,d_i+1,...,d_k^'}. Therefore, the sum of the gradient can be written as:
G_2(x_val, w_i)
= ∑_|S| ≥ K,𝒮⊆𝒟_ret\ d_iU_x_val(d_i) - U_x_val(d_α_K(S))/K· P[𝒮]
= ∑_e∈𝒱U_x_val(d_i) - e/K·∑_j = 0^K-1 P_j(1, i-1)· B_K-j(i+1, e)
The time complexity of computing all G_2(x_val, w_i) using the above equation is 𝒪(M K V).
The exact gradient values G(x_val, w_i) can be computed as the sum of G_1(x_val, w_i) and G_2(x_val, w_i). The detailed algorithm is shown in Algorithm <ref>. The overall time complexity of the algorithm is 𝒪(N·(M logM + M K^2 + M K V )).
§ Ε-APPROXIMATION ALGORITHM FOR CALCULATING EXACT GRADIENT VALUE
The overall time complexity for computing gradients for models with an additive utility function is 𝒪(N·(M logM + M K^2 + M K V )).
In this section, we show that if we are allowed to do ϵ-approximations, we can significantly speed up the calculation of the gradients.
If we calculate the ϵ-approximation Ĝ(x_val, w_i) for each G(x_val, w_i), we can obtain the ϵ-approximation for ∂Ũ/∂ w_i as the average of the Ĝ(x_val, w_i).
See <Ref>.
Our next step is to detail how to compute the ϵ-approximation for G(x_val, w_i). The variable ϕ_𝒮, x_val(d_i) = (U_x_val(𝒮∪{d_i}) - U_x_val(𝒮)) equals zero if d_i is not in the K-nearest neighbor set of 𝒮∪{d_i}. This is due to the fact that adding d_i to the corpus will not change the data points retrieved by the model. Assuming that the utility function value is within the range of [0, 1], <Ref> can be written as:
| G(x_val, w_i) | = | ∑_𝒮⊆𝒟_ret∖ d_i 𝕀{ d_i ∈ top_K(𝒮∪{d_i}) } · ϕ_𝒮, x_val(d_i) · P[𝒮] |
 ≤ ∑_𝒮⊆𝒟_ret∖ d_i 𝕀{ d_i ∈ top_K(𝒮∪{d_i}) } · P[𝒮],
since ϕ_𝒮, x_val(d_i) ∈ [-1, 1].
From Equation <ref> we can see that the absolute value of G(x_val, w_i) is bounded by the total probability of the data point d_i being in the K-nearest neighbor set of x_val. The probability of d_i being in the K-nearest neighbor set equals the probability that at most K-1 points with higher ranks appear in 𝒮. The latter can be modeled by a Poisson-binomial distribution. Suppose that the retrieval corpus {d_1, d_2, ⋯, d_M} is ranked with respect to x_val; then the gradient can be bounded via the Chernoff bound for μ_i > K-1, where μ_i=∑_k=1^i-1 w_k:
| G(x_val, w_i) | ≤ P[ PB(w_1, w_2, ⋯, w_i-1) ≤ K-1]
≤exp(-(μ_i-K+1)^2/2μ_i)
Notice that for a data point with a lower rank, the probability of it being in the K-nearest neighbor set is smaller. Therefore, we can define the boundary point d_b of the retrieval corpus.
(Boundary Point)
Given a validation tuple x_val and the retrieval corpus 𝒟_ret = {d_1, ⋯, d_M} ranked with respect to x_val, the boundary point d_b is the data point with the highest rank in the sorted corpus such that any data point with a lower rank than d_b has a probability of less than ϵ to be in the K-nearest neighbor set of x_val.
In practice, after we rank the corpus with respect to a validation tuple, we can use binary search to find the boundary point.
min_b [ ( exp(-(μ_b-K+1)^2/2μ_b) < ϵ ) ∧ ( μ_b > K-1 ) ]
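A sketch of this binary search is given below; it assumes the weights are already sorted by rank with respect to the validation tuple and uses prefix sums for μ_b (the function and variable names are our own):

```python
import math

def boundary_index(weights, K, eps):
    """weights: w_1..w_M sorted by rank w.r.t. the validation tuple."""
    prefix = [0.0]
    for w in weights:
        prefix.append(prefix[-1] + w)

    def below_eps(b):
        mu = prefix[b - 1]                 # mu_b = sum of the b-1 higher-ranked weights
        return mu > K - 1 and math.exp(-(mu - K + 1) ** 2 / (2 * mu)) < eps

    lo, hi = 1, len(weights)
    if not below_eps(hi):
        return len(weights)                # no boundary point; keep the whole ranked corpus
    while lo < hi:                         # find the smallest b satisfying the condition
        mid = (lo + hi) // 2
        if below_eps(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```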
After we find this boundary point d_b, we can use 0 as the ϵ-approximation of the gradient for data points with a lower rank, i.e., Ĝ(x_val, w_i) = 0 for i ∈{b, ..., M}, because the probability of those data points being in the K-nearest neighbor set is less than ϵ. In the following, we detail the approximation for data points with a higher rank.
Given the validation tuple x_val, the retrieval corpus 𝒟_ret = {d_1, ..., d_M}, the boundary point d_b, and the weights W = {w_1, ..., w_M}, if we have an algorithm 𝒜 to calculate the G(x_val, w_i) = 𝒜(x_val,𝒟_ret, W ), then Ĝ(x_val, w_i) = 𝒜(x_val,{d_1, ..., d_b},{w_1, ..., w_b}) is the ϵ-approximation for G(x_val, w_i).
See <Ref>
From <Ref>, we can compute the ϵ-approximation for every data point by discarding the outlier points {d_b, d_b+1, ..., d_M}. This reduces the time complexity from 𝒪(N·(M logM + M K^2 + M K V )) to 𝒪(N·(B logB + B K^2 + B K V )), where B is the rank of the boundary point.
If the value of all w_i is greater than a certain constant λ, then the index of the boundary point B is 𝒪(K).
See <Ref>
The above theorem shows that if all weights W are greater than a certain constant, the scale of B depends only on K rather than on the size of the retrieval corpus M. This means that even if we have millions of data points in the retrieval corpus, we only have to consider O(K) passages with the highest rank relative to the validation tuple. The overall time complexity for computing the approximate gradients for weights of models with additive utility functions is 𝒪(N·(K logK + K K V )) = 𝒪(N· K^2 · V). This significantly speeds up their computation.
§ EXACT GRADIENTS FOR A GROUPED RETRIEVAL CORPUS
In this section, we compute the exact gradient of the weights for the K-nearest neighbor classifier assuming that the retrieval corpus is generated from multiple data sources. We can see from <Ref> that the key component of computing the exact gradient value is computing G(x_val, w_i). Inspired by previous work <cit.>, we can simplify the equation as follows:
G(x_val, w_i)
= ∑_t, t^'∈𝒟_ret∑_γ, γ^'∈Γ u_Δ(γ, γ^') ·ω_t, t^'( γ, γ^', i, x_val)
The idea of Equation (<ref>) is to enumerate which data point t in the generated retrieval corpus is the Kth nearest neighbor α_K(f_source(𝒮)) of a sampled subset f_source(𝒮). Adding the data points from the source f_source(o_i) to the retrieval corpus may expel more than one data point from the original K-nearest neighbor set. Therefore, we also enumerate which new data point t^' is the α_K(f_source(𝒮∪{o_i})).
The tally_v,t operator returns the number of data points with a similarity score greater than that of t and with utility function value v. The operator tally_t𝒮 = ( tally_v_1,t𝒮, ⋯, tally_v_|𝒱|,t𝒮) returns a tally vector γ∈Γ⊂ℕ^|𝒱| consisting of the tallied occurrences of each possible utility function value v ∈𝒱 for α_K(f_source(𝒮)). Let Γ be the set of all possible tally vectors. Enumerating the label tally vectors allows us to easily calculate the difference in utility function value after adding the data source o_i to the retrieval corpus by
u_Δ(γ, γ^') = ∑_v∈𝒱γ_v · v/K - ∑_v∈𝒱γ^'_v · v/K
Inspired by <cit.>, we associate a binary variable a_i ∈𝒜 with every data source o_i to represent the sampled dataset. We define value assignments z: 𝒜→𝔹 to determine whether a data source is in the sampled dataset. By setting z(a_i) = 0, we exclude o_i from the sampled data sources for the retrieval corpus. By setting z(a_i) = 1, we include o_i in the sampled data sources for the retrieval corpus. Let 𝒵_𝒜 be the set of all possible value assignments. We can then change counting the probability of sampled datasets to counting the probability of value assignments. The term ω_t, t^'( γ, γ^', i, x_val) is defined below:
ω_t, t^'( γ, γ^', i, x_val) =
∑_z ∈𝒵_𝒜\{a_i}∏_z(a_j) = 1 w_j ∏_z(a_j) = 0 (1 - w_j)
·𝕀{t=α_K (𝒟_ret[z[a_i ← 0]])}·𝕀{t^'=α_K (𝒟_ret[z[a_i ← 1]])}
·𝕀{γ=tally_t ( 𝒟_ret[z[a_i ← 0]])}·𝕀{γ^'=tally_t^'( 𝒟_ret[z[a_i ← 1]])},
where the product over the weights is the probability P[z] of the value assignment z.
We define the evaluation of data sources similarly to <cit.>:
eval_z(j):= (0, 0), if j=M+1,
(0, 0)+eval_z(j+1) if z(a_j)=0,
(tally_t(o_j), tally_t^'(o_j))+eval_z(j+1) if z(a_j)=1 .
To count the sum of the probability of valid value assignments, we define the count function as:
count_e(j):=∑_z ∈{z ∈𝒵_𝒜|eval_z(j)=e}∏_z(a_i) = 1, i≥ j w_i ∏_z(a_i) = 0, i≥ j (1 - w_i)
count_e(j) can be computed by a dynamic programming algorithm as:
count_e(j) = count_e(j+1) · (1 - w_j) + count_e - (tally_t(o_j), tally_t^'(o_j))(j+1) · w_j
We initialize the values with count_0(M+1) = 1 and count_e(M+1) = 0 for all other e ∈Γ×Γ. Then we can compute the value as follows:
ω_t, t^'(γ, γ^',i, x_val)=count_(γ, γ^')(1)
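As an illustration, here is a minimal Python sketch of this dynamic program. The tally-vector pairs e are flattened into plain tuples, `deltas[j]` holds the contribution (tally_t(o_j), tally_t'(o_j)) of source o_j, and the indicator filters on t, t', γ, γ' that restrict valid assignments are omitted for brevity; the function and variable names are ours, not the paper's implementation.

```python
from collections import defaultdict

def count_dp(deltas, weights, target):
    """Return count_target(1): the total probability of value assignments whose
    accumulated tally contribution equals `target`.  `deltas[j]` is the flattened
    tuple (tally_t(o_j), tally_t'(o_j)) of data source o_j, `weights[j]` its
    inclusion probability w_j, and `target` the flattened pair (gamma, gamma')."""
    zero = tuple(0 for _ in target)
    counts = defaultdict(float, {zero: 1.0})      # initialization: count_0(M+1) = 1
    # Sweep j = M, ..., 1 applying
    # count_e(j) = count_e(j+1)*(1 - w_j) + count_{e - delta_j}(j+1)*w_j
    for delta, w in reversed(list(zip(deltas, weights))):
        nxt = defaultdict(float)
        for e, c in counts.items():
            nxt[e] += c * (1.0 - w)                              # z(a_j) = 0
            shifted = tuple(x + d for x, d in zip(e, delta))     # z(a_j) = 1
            nxt[shifted] += c * w
        counts = nxt
    return counts[target]
```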
If we assume the parameter K and the possible values of the label tally vector are constants, the time complexity of the algorithm is 𝒪(N · T^2· M^2), where T is the size of the generated retrieval corpus.
§ (ϵ, δ)-APPROXIMATION ALGORITHM FOR GENERAL UTILITY FUNCTION
In this section, we provide a general solution for efficiently approximating gradients with a general utility function. In practice, we can use the Monte Carlo Method to approximate the gradient. Based on Equation <ref>, we can adapt the Monte Carlo method to get an (ϵ, δ)-approximation for each G(x_val, w_i). For each validation tuple x_val, we randomly sample a retrieval corpus 𝒮 from the 𝒟_ret\ d_i to compute the estimation for G(x_val, w_i).
If we can calculate the (ϵ, δ)-approximation for each G(x_val, w_i), we can get the (ϵ, δ)-approximation for ∂Ũ/∂ w_i.
To get the (ϵ, δ)-approximation for ∂Ũ/∂ w_j, we set ϵ^' = ϵ and δ^' = δ/|𝒟_ret|. Suppose we can get the (ϵ^', δ^')-approximation Ĝ(x_val, w_i) for each G(x_val, w_i); then we calculate ĝ_i as
ĝ_i = 1/|𝒟_val|·∑_x_val∈𝒟_valĜ(x_val, w_i)
and ĝ_i is the (ϵ, δ)-approximation for ∂Ũ/∂ w_i. Each of the |𝒟_ret| steps of the algorithm has at most a δ^' chance of failure. The union bound then bounds the total chance of failure by δ^'· |𝒟_ret| = δ. Analogous to <Ref>, the difference between ĝ_i and ∂Ũ/∂ w_i can be bounded by ϵ. Therefore, we have obtained the (ϵ, δ)-approximation for ∂Ũ/∂ w_i.
However, a naive implementation of the approximation algorithm is time-consuming. We want to perform fewer estimation steps without sacrificing accuracy. The improved algorithm for computing G(x_val, w_i) proceeds as follows:
* Initialization
Given a validation tuple x_val and the retrieval corpus 𝒟_ret, we first rank the data points with respect to the validation tuple.
* Filtering the outlier points
We use a binary search to find the boundary point. Then we discard the data points d_i that have a lower rank than the boundary point by setting G(x_val, w_i) to 0.
* Monte Carlo steps
Finally, we use the Monte Carlo method to approximate the value G(x_val, w_i) for all the remaining points.
So far, we have obtained the (ϵ,δ)-approximation for all gradient values G(x_val, w_i). For data points d_i that have a lower rank than the boundary point, the approximate value equals 0 because G(x_val, w_i) is bounded by ϵ. For data points that have a higher rank than the boundary point, the (ϵ,δ)-approximation is guaranteed by the Monte Carlo method. The pseudocode for the algorithm is Algorithm <ref>.
The time complexity for computing the approximated gradient values is 𝒪(NMlogM + NMTC), where N is the size of the validation set, M is the size of the retrieval corpus, T is the number of experiments conducted by the Monte Carlo Method and C is the time complexity of each utility function evaluation. The time complexity of the improved algorithm is 𝒪(NMlogM + NBTC). B is the index of the boundary point. With <Ref>, we can see that if the value of all w_i is larger than a certain constant λ, the overall time complexity is 𝒪(NMlogM + NKTC).
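For concreteness, a minimal sketch of the Monte Carlo step in Python/numpy is given below. It assumes each data point exposes a precomputed similarity score to x_val and that `utility` maps a retrieved K-nearest-neighbor list to a value in [0, 1]; all names are illustrative assumptions rather than the paper's implementation, and the boundary-point filtering above is applied before calling it.

```python
import numpy as np

def estimate_G(d_i, ranked_corpus, weights, K, utility, T, seed=0):
    """Monte Carlo estimate of G(x_val, w_i) for a single data point d_i.
    `ranked_corpus` is D_ret without d_i, ranked w.r.t. x_val; `weights` are the
    matching inclusion probabilities; T is the number of sampled subsets.
    Each data point is assumed to expose a `.score` similarity to x_val."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(T):
        mask = rng.random(len(weights)) < weights          # sample S ~ P[S]
        S = [d for d, keep in zip(ranked_corpus, mask) if keep]
        top_with = sorted(S + [d_i], key=lambda d: d.score, reverse=True)[:K]
        top_without = sorted(S, key=lambda d: d.score, reverse=True)[:K]
        total += utility(top_with) - utility(top_without)  # phi_{S, x_val}(d_i)
    return total / T
```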
§ PROOFS AND DETAILS
§.§ Details of <Ref>
Suppose we can get the ϵ-approximation Ĝ(x_val, w_i) for each G(x_val, w_i); then we calculate ĝ_i as the approximation for ∂Ũ/∂ w_i:
ĝ_i = 1/|𝒟_val|·∑_x_val∈𝒟_valĜ(x_val, w_i)
The difference between ĝ_i and ∂Ũ/∂ w_i can be bounded by:
|∂Ũ/∂ w_i - ĝ_i| = 1/|𝒟_val|·|∑_x_val∈𝒟_val G(x_val, w_i) - ∑_x_val∈𝒟_valĜ(x_val, w_i)|
≤1/|𝒟_val|·∑_x_val∈𝒟_val| G(x_val, w_i) - Ĝ(x_val, w_i)| ≤1/|𝒟_val|· |𝒟_val| ·ϵ = ϵ
§.§ Details of <Ref>
Suppose 𝒟_ret^' = {d_b+1, d_b+2, ..., d_M} and 𝒲^' = {w_b+1, w_b+2, ..., w_M}, we have:
|Ĝ(x_val, w_i) - G(x_val, w_i)| = |𝒜(x_val,𝒟_ret\𝒟_ret^',𝒲\𝒲^') - 𝒜(x_val,𝒟_ret, W )|
= | ∑_𝒮⊆𝒟_ret\(𝒟_ret^'∪{d_i})ϕ_𝒮, x_val( d_i ) · P[𝒮] - ∑_𝒮⊆𝒟_ret\ d_iϕ_𝒮, x_val( d_i ) · P[𝒮] |
≤| ∑_𝒮⊆𝒟_ret\ d_i𝕀{𝒟_ret^'∩top_K(𝒮 ) ≠∅}· P[𝒮] | ≤ P[ PB(w_1, w_2, ⋯, w_b) ≤ K-1] ≤ϵ
§.§ Details of <Ref>
B is the minimum b such that exp(-(μ_b-K+1)^2/2μ_b) < ϵ and μ_b > K-1. If the value of all w_i is greater than a certain constant λ, then, supposing b > (4/λ)log(1/ϵ) + (2K - 2)/λ, we have:
μ_b > b·λ > ((2K - 2)/λ)·λ = 2(K-1)
and
exp(-(μ_b-K+1)^2/2μ_b) < exp(-(μ_b-K+1)^2/4(μ_b-K+1))
< exp(-μ_b-K+1/4) < exp(-4log1/ϵ/4) = ϵ
Therefore, B is 𝒪((4/λ)log(1/ϵ) + (2K - 2)/λ). If we treat λ and ϵ as constants, B is 𝒪(K).
§ FULL RESULTS OF ACCURACY ADDED EXTERNAL RETRIEVAL SOURCE
§ FULL RESULTS OF WEIGHT-BASED REWEIGHTING AND PRUNING
§ FULL RESULTS ON DIRTY CORPUS
§ FULL RESULTS ON ADDITIONAL FABRICATED DATA
§ FULL RESULTS ON OPENAI GENERATOR
| http://arxiv.org/abs/2307.01842v1 | 20230704174456 | Universality in the tripartite information after global quenches: spin flip and semilocal charges | ["Vanja Marić"] | cond-mat.stat-mech | ["cond-mat.stat-mech", "hep-th", "quant-ph"] |
http://arxiv.org/abs/2307.00383v1 | 20230701165032 | Enhancing Dexterity in Robotic Manipulation via Hierarchical Contact Exploration | ["Xianyi Cheng", "Sarvesh Patil", "Zeynep Temel", "Oliver Kroemer", "Matthew T. Mason"] | cs.RO | ["cs.RO"] |
We present a hierarchical planning framework for dexterous robotic manipulation (HiDex). This framework exploits in-hand and extrinsic dexterity by actively exploring contacts. It generates rigid-body motions and complex contact sequences.
Our framework is based on Monte-Carlo Tree Search (MCTS) and has three levels:
1) planning object motions and environment contact modes;
2) planning robot contacts;
3) path evaluation and control optimization that passes the rewards to the upper levels.
This framework offers two main advantages.
First, it allows efficient global reasoning over high-dimensional complex space created by contacts.
It solves a diverse set of manipulation tasks that require dexterity, both intrinsic (using the fingers) and extrinsic (also using the environment), mostly in seconds.
Second, our framework allows the incorporation of expert knowledge and customizable setups in task mechanics and models. It requires minor modifications to accommodate different scenarios and robots. Hence, it could provide a flexible and generalizable solution for various manipulation tasks.
As examples, we analyze the results on 7 hand configurations and 15 scenarios. We demonstrate 8 of them on two robot platforms.
§ INTRODUCTION
Robots need dexterity to perform daily manipulation and complex industrial tasks.
Consider taking a book from the bookshelf. The robot should consider the occlusion of the bookshelf and other books, and even use them, to get the book out. The robot needs not only to use its own fingers dexterously, but also to be smart about exploiting its environment as “external” fingers to support the movements of the object.
The automatic planning of intrinsic and extrinsic dexterity for general manipulation tasks still remains challenging.
First, as contacts introduce discontinuity and changes in system dynamics, planning through contacts is particularly difficult <cit.>, especially when considering both robot and environment contacts on the object. Second, due to the diverse nature of manipulation, it is hard to predefine all the possibilities for dexterity as motion primitives. It is important for the robot to be able to discover dexterous motions on its own. Third, current manipulation planners or policies are often tailored to specific problems. A complex in-hand manipulation pipeline <cit.> cannot directly solve the on-table object reorientation problems in <cit.>, and neither of them can be directly applied to planar pushing <cit.>. As real-world manipulation problems are often mixes of specific manipulation tasks, it is important for a general manipulation planner to cover different tasks.
We propose a hierarchical framework, as shown in Figure <ref>, aiming to address challenges mentioned above.
We take an object-centric view to represent the object's contact interactions with the environment and the robot. To effectively explore the complex space created by contacts, we exploit a hierarchical structure combined with MCTS <cit.>. In Level 1, we perform object trajectory and environment contact mode planning. Environment contact information (contact modes) is used to guide the generation of object motions, which we consider the active exploration of extrinsic dexterity. In Level 2, given an object trajectory, the intrinsic dexterity is planned by optimizing for robot contact sequences on the object surface. In Level 3, the details and optimization of the plans are computed and rewards are backpropagated. MCTS, used in Levels 1 and 2, allows us to encode expert knowledge and information gathered during the search process as heuristics, guiding the search directions and balancing exploitation and exploration. In addition, we employ a Rapidly-exploring Random Tree (RRT) <cit.> as the MCTS rollout to enhance the exploration of the object configuration space in Level 1.
Our design works with different task mechanics, robot hand configurations, and object and environment models. Many new scenarios can be configured with a single file in our code. It is also flexible enough to encode expert knowledge into the search through MCTS action policies, value estimations, and rewards. We instantiate this framework on manipulation with extrinsic dexterity and on in-hand manipulation. Demonstrated tasks include picking up a card, book-out-of-bookshelf, peg-out-of-hole, block flipping, occluded grasp, upward peg-in-hole, sideways peg-in-hole, planar reorientation, planar block passing, and in-hand reorientation. As discussed in Section <ref>, we envision that this framework can be extended towards general manipulation planning that incorporates global reasoning, mechanics, learning, and optimization.
§ RELATED WORK
§.§ Dexterous Manipulation Planning
Most works in manipulation focus on individual skills, like pushing <cit.><cit.>, pivoting <cit.>, tumbling <cit.>, grasping <cit.><cit.>, on-table reorientation <cit.>, and predefined dynamic skills <cit.>.
While the mechanics and planning of specific motion types are studied in depth, generating dexterous manipulation planning is still under-explored.
What are the essential challenges of dexterous manipulation planning?
The presence of potential contact changes introduces discontinuities and changes in the system dynamics (non-differentiable). This leaves us with a high-dimensional, non-convex, mixed discrete and continuous space to plan through.
Contact-implicit trajectory optimization (CITO) <cit.> directly solves nonlinear programming (NLP) problems in this complex hybrid space. Most methods simplify the problems to make them tractable. Simplifications include simple primitive-shape representations <cit.>, reducing to 2D, and a small number of contact transitions <cit.>. Moreover, CITO can be slow and intractable without good initialization.
It is worthwhile to explore the space through global search. To capture the discreteness of manipulation systems, previous research searches through predefined manipulation modes and, for each mode sequence, solves for the whole trajectory using an NLP <cit.>.
For dexterous manipulation, manually defining modes can be engineering-heavy or even intractable, as suggested by observations in dexterous grasping <cit.>.
Alternatively, expert knowledge about contacts, such as contact formations <cit.>, contact states <cit.>, and contact modes <cit.>, has been exploited to efficiently generate rigid-body motions between two bodies <cit.>, motions within a robot hand <cit.>, dexterous pregrasps <cit.>, motions under environment contacts <cit.>, and local trajectory optimization combined with high-level planning <cit.>. Contact modes can help guide the automatic generation of motion primitives that are differentiable in dynamics <cit.>.
Built on the idea of exploiting contacts, our work pushes forward towards a more general framework in 3D dexterous manipulation.
We exploit hierarchies inspired by previous hierarchical frameworks for 2D manipulation <cit.>.
While we share the idea of decomposing the search into object motions and robot contacts, we design different pipelines, efficient exploration of contacts, and new representations to make it feasible for complex 3D scenarios.
Reinforcement learning (RL) is efficient in discovering manipulation skills from direct interactions with the environment, like in-hand manipulation <cit.>, dexterous grasping <cit.>, and multi-step object reorientation <cit.>. RL faces the same challenges from high-dimensional complex spaces, leading to sample efficiency problems.
RL also requires significant training for new tasks, while our planners can be directly used for a new object or environment.
For future research, we hope to combine our framework with RL to leverage the strength of both.
§.§ Monte-Carlo Tree Search
MCTS is a heuristic search algorithm that uses random sampling for efficient exploration. The AlphaGo family of algorithms <cit.> combines MCTS with deep neural networks, achieving a superhuman level of play in board games like Go. MCTS has also shown its effectiveness in robotic applications such as robot task planning <cit.>, task and motion planning <cit.>, and object rearrangement planning <cit.>.
MCTS has also shown potential for planning in the large combinatorial space of contacts. Previous works include gait planning for legged robots <cit.> and robot contact sequence planning given user-designed object trajectories <cit.>.
Based on these works, our work takes one step further in planning not only robot contacts, but also object interactions with environment contacts (exploiting extrinsic dexterity<cit.>).
The incorporation of MCTS also offers several current and future benefits, including efficient search through a vast complex space, potential for continuous improvement through deep learning and self-exploration <cit.>, and parallelizability to accelerate the search <cit.>.
§ PRELIMINARY: MCTS
Levels 1 and 2 use the MCTS skeleton in Algorithm <ref>.
A search tree 𝒯 = (𝒱, ℰ) contains a set of nodes 𝒱 and edges ℰ. A node is associated with a visited state s. An edge is associated with a state transition s → s' through action a.
grow-tree iteratively expands the tree following four steps in Figure <ref>: selection, expansion, rollout, and backpropagation.
Selection determines search directions by selecting the next node through a score that balances exploration and exploitation. We employ the idea from AlphaGo <cit.>: use value estimation and action probabilities to prioritize empirically good directions. Each node maintains a value estimation v_est(s), an obtained value v(s), and the number of visits N(s).
For a transition s → s', we define the action probability p(s,a), the number of visits N(s,a) = N(s'), and the state-action value Q(s,a) = λ v(s') + (1-λ) v_est(s'), where λ∈ [0,1] is an adaptive parameter that balances the contributions of the obtained value and the value estimation. To mitigate the inaccuracy of the value estimation, λ increases as the search goes on.
Among the set of feasible actions 𝒜(s), we select the next action a^* ←argmax_a ∈𝒜(s) U(s,a), with η controlling the degree of exploration:
U(s,a) = Q(s,a) + η p(s,a) √(N(s))/1 + N(s,a)
In backpropagation, every node on the evaluated path is updated with the reward r:
v(s) = N(s) v(s) + r/N(s) + 1
N(s) = N(s) + 1
A direct value estimation function is often used to calculate v_est. Otherwise if a reward estimation r_est is used, we update v_est with the same rule as Equation <ref>.
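To make the bookkeeping concrete, here is a minimal Python sketch of the selection rule and the backpropagation update above. The node structure and names are illustrative assumptions, not the authors' implementation.

```python
import math

def select_action(node, eta, lam):
    """Pick a* = argmax_a U(s,a), U(s,a) = Q(s,a) + eta * p(s,a) * sqrt(N(s)) / (1 + N(s,a)).
    `node.children` maps action -> child node; each node stores v, v_est, and N;
    `node.prior(a)` returns the action probability p(s, a)."""
    def score(action, child):
        q = lam * child.v + (1.0 - lam) * child.v_est            # Q(s, a)
        u = eta * node.prior(action) * math.sqrt(node.N) / (1 + child.N)
        return q + u
    return max(node.children.items(), key=lambda kv: score(kv[0], kv[1]))[0]

def backpropagate(path, reward):
    """Update every node on the evaluated path with reward r."""
    for node in path:
        node.v = (node.N * node.v + reward) / (node.N + 1)
        node.N += 1
```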
§ HIERARCHICAL PLANNING FRAMEWORK
§.§ Task Description
This paper focuses on manipulating one rigid body in a fixed rigid environment, or with no environment component at all.
A planner designed under this framework takes in:
* Object properties: a rigid body 𝒪 with known geometry (for example, a mesh model), mass distribution (center of mass and inertia matrix), and friction coefficients with environment μ_env and with the manipulator μ_mnp.
* Environment with known geometries.
* Robot model: The robot manipulates the object through N_mnp predefined fingertip contacts. The collision models, forward and inverse kinematics, and finger contact models are known. We assume the robot makes non-sliding and non-rolling contacts.
* Start specification: object start pose x_start∈ SE(3).
* Goal specification: object goal region X_goal⊂ SE(3).
It outputs an object configuration trajectory x(t) and a robot control trajectory u(t).
§.§ Level 1: Planning Environment Contact Modes and Object Trajectories
Level 1 is summarized in Algorithm <ref>, and visualized in Figure <ref> with a block reorientation example.
It plans trajectories of environment contact modes and object configurations, which are then passed to Level 2 for planning robot contacts and further evaluated in Level 3.
In the Level 1 selection phase, we interleave the search over discrete environment contact modes and continuous object configurations. When a node is selected for expansion and rollout, an RRT method replaces random rollouts to improve exploration efficiency. The output path from the RRT rollout is added to the MCTS, and is passed to Levels 2 and 3 for reward evaluation.
§.§.§ Selection — Interleaved Search Over Discrete and Continuous space
The object configuration space SE(3) is continuous; however, the presence of contacts partitions it into a complex space. It has many lower-dimensional subspaces that have zero probability of being sampled at random.
For instance, in Figure <ref>, all object poses for a cuboid pivoting on an edge (mode 0011) lie in a 1D subspace of the 6D object configuration space, which has zero probability of being sampled at random.
We use contact modes for efficient guidance in planning motions in low-dimensional manifolds.
A contact mode describes the relative motion of each contact in the system, which is either “maintain” (0) or “separate” (1).
We define Level 1 state as s1 = (x∈SE(3), nodetype∈{mode, pose}), where x is an object pose and nodetype stores the type of the node.
The interleaved selection process is demonstrated in lines 6-25 of Algorithm <ref> and in Figure <ref>.
For a pose node, the action is to select a contact mode for it. The feasible actions are 𝒜(s1 = (x, pose)) = ℳ(x), where ℳ(x) is the set of kinematically feasible contact modes enumerated for an object pose x using the algorithm in <cit.>.
The result from assigning a contact mode to a pose node is a mode node. For a mode node, the action is to select the next object pose moving from the current object pose following the contact mode. The available actions 𝒜(s1 = (x, mode)) comprise choosing from its child nodes (explored object poses) or explore-new, which triggers the expansion and rollout phases to explore new object poses.
The selection policy follows Equation <ref>. Action probabilities p(s,a) reflect preferences over modes or poses. For example, if we prefer to exploit environment constraints to reduce uncertainties, as in <cit.>, we could design probability functions that prioritize modes that maintain contacts.
§.§.§ Expansion
The expansion phase corresponds to explore-new being selected for a mode node. It is a variant of the progressive widening technique in MCTS for continuous spaces <cit.>, where we control the expansion rate with the action probability of explore-new.
§.§.§ RRT as Rollout
In a traditional MCTS, a new node is added by sampling an unexplored action and then evaluated by sampling random rollouts to the end. As our search space has a continuous part and is high-dimensional, with solutions sparsely located on lower-dimensional manifolds, a random trajectory rollout is unlikely to produce any useful result.
We replace the random rollout with an RRT search guided by contact modes (line 17, Algorithm <ref>), modified from <cit.>. According to <cit.>, “an RRT can be intuitively considered as a Monte-Carlo way of biasing search”. Our treatment fits the Monte-Carlo philosophy while providing better guidance towards the goal.
Here is a brief description of the RRT. Details can be found in Appendix <ref>.
The mode node to expand provides the RRT with the current object pose x_current and selected contact mode m_selected. The RRT tries to reach x_goal. It outputs a trajectory where each point is an object pose associated with a contact mode.
In each iteration, we first sample an object pose x_extend∈SE(3) and find its nearest neighbor x_near. We then extend x_near towards x_extend. Each extension is under the guidance of a contact mode. If x_near is x_current, the contact mode should be m_selected. Otherwise, we enumerate all environment contact modes and filter them using feasibility checks.
New object poses are generated by forward integration that follows each feasible contact mode as close as possible to x_extend.
If the RRT finds a solution to x_goal within the maximum number of iterations, we add the solution path after the expansion node in the Level 1 search tree and proceed to Level 2 (line 19, Algorithm <ref>) to obtain a reward. Otherwise, this process backpropagates zero reward and no new node is added.
The RRT is reused throughout the entire lifespan of the MCTS.
Compared to <cit.>, we can turn on the option to relax the feasibility check for a contact mode to “there exists a feasible robot contact”, whereas the previous work maintains robot contacts in its states and searches in the more complex space SE(3) ×ℝ^N_mnp. This relaxation can improve the planning speed for some tasks, as discussed in Section <ref>.
§.§ Level 2: Planning Robot Contacts
Level 2 is initiated by evaluate-reward in Level 1 (lines 19 and 27, Algorithm <ref>). Level 2 takes in the object trajectory and outputs the best robot contact sequence. The best reward is passed back to Level 1. Algorithm <ref> summarizes the grow-tree process of the Level 2 MCTS.
§.§.§ State and Action Representation
In Level 2 search tree, each node is associated with a robot contact state s2 = (t, {(i, p_i) | i ∈active fingers at t}).
This represents that the robot makes contacts by specified active fingers at contact locations {p_i ∈ℝ^3 } on the object surface at timestep t.
For example, grasping an object at timestep 0 with the first and third finger at locations (1,0,0) and (-1,0,0) can be written as (0, (1, (1,0,0)), (3, (-1,0,0))).
Each action a = (t_c, {(j, p_j) | j ∈relocating fingers at t_c}) represents robot contact relocations, specified by relocating timestep t_c, relocating fingers, and the contact points they are relocating to {p_j}. Continuing the last example, if we choose to maintain the grasp until timestep 4, and then move the third finger to (0,0,1) and add the fifth finger at (-1,0,0.5), the action is written as (4, (3, (0,0,1)), (5, (-1,0,0.5))). The resulting new state is (4, (1, (1,0,0)), (3, (0,0,1)), (5, (-1,0,0.5))).
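A minimal Python sketch of these state and action records, and of applying a relocation, reproducing the example above; the class and function names are our own illustrative choices, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContactState:
    t: int                      # timestep
    contacts: frozenset         # {(finger_id, (x, y, z)), ...} active contacts

@dataclass(frozen=True)
class RelocateAction:
    t_c: int                    # timestep at which the relocation happens
    relocations: frozenset      # {(finger_id, (x, y, z)), ...} fingers to move or add

def apply(state: ContactState, action: RelocateAction) -> ContactState:
    """Fingers named in the action take their new contact points (or are added);
    all other fingers keep their previous contacts."""
    moved = {f for f, _ in action.relocations}
    kept = {(f, p) for f, p in state.contacts if f not in moved}
    return ContactState(action.t_c, frozenset(kept | set(action.relocations)))

# The grasp example above:
grasp = ContactState(0, frozenset({(1, (1, 0, 0)), (3, (-1, 0, 0))}))
act = RelocateAction(4, frozenset({(3, (0, 0, 1)), (5, (-1, 0, 0.5))}))
# apply(grasp, act) -> contacts {(1,(1,0,0)), (3,(0,0,1)), (5,(-1,0,0.5))} at t = 4
```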
Compared to the common practice of planning contacts for every timestep <cit.>, we plan for contact relocations. While the complexity of the search space does not change, empirically in most tasks this modification significantly reduces the depth of the search tree and speeds up the discovery of a solution (experiments in Section <ref>, Figure <ref>).
§.§.§ Sampling and Pruning for Action Selection
If we consider 100 object surface contact points, 4 fingers, and a 10-step trajectory, the action space has size 100^4·10 = 1,000,000,000. Thus it is not practical to evaluate and compare all actions. To mitigate this issue, we adopt action sampling techniques that are efficient in non-enumerable action spaces <cit.>.
In Equation <ref>,
we consider a subset of all available actions 𝒜_sp(s2) ⊂𝒜(s2). 𝒜_sp(s2) includes all previously explored actions and newly sampled actions. The newly sampled actions are generated by first sampling the relocating timestep t_c, and then which robot contact(s) to relocate to what location(s) on the object surface.
The relocating timestep t_c is sampled in (t, t_max], where t_max is the maximum timestep to which the current set of contacts can proceed under the feasibility check in Section <ref>. After t_c is sampled, we sample a robot contact relocation through rejection sampling. We first find relocatable robot contacts by checking whether the remaining fingers satisfy the relocation conditions. Then we sample a feasible new contact location for each relocating finger on the object surface.
If the selected action is an explored action, we are performing the selection phase. If the selected action is a new action, we are performing the expansion and rollout phase. We mix the selection, expansion, and rollout using the same heuristic function. However, one can also use different rollout policies or use value estimation to completely replace the rollout.
§.§.§ Feasibility Check
To prune fruitless search directions, we require Level 2 nodes and Level 1 RRT rollout nodes to pass the following feasibility checks:
* Kinematic feasibility: whether there exist inverse kinematics solutions for the robot contact points
* Collision free: whether the robot links are collision-free with the environment and the object
* Relocation feasibility: whether there exists a plan to relocate from previous robot contacts to new contacts
* Force conditions: whether the chosen contact points can fulfill the task dynamics, for example, we use a quasistatic or quasidynamic model for our test tasks with environment interactions, and force balance or force closure conditions for in-hand manipulation.
* Other task-specific requirements may also be added.
§.§ Level 3: Path Evaluation and Control Optimization
Level 2 passes a full path including object motions and robot contact sequence. Level 3, called in line 9, Algorithm <ref>, checks the feasibility and computes the robot controls u(t). Level 3 can take different methods as long as it provides the reward and robot controls.
For example, if the task mechanics are quasi-static or force closure, timing no longer matters on the optimization side, and “timestep” becomes “step”. For each step t, we can individually solve an optimization problem to check whether quasi-static or force-closure solutions exist, and output the robot positions and optimal contact forces as the control u(t), which can potentially be executed using hybrid force-velocity control methods <cit.>.
If full dynamics is required, we could potentially use the path to be evaluated as a warm-start for trajectory optimization methods <cit.> to find the control outputs and locally improve the trajectory.
For the resultant control trajectory u(t), we compute the reward r and the estimations v_est or r_est.
There are two rules for defining a reward function: 1) A feasible path should have a positive reward. A non-feasible path should have a zero or negative reward. It is preferred that the reward is in [0,1]. 2) There should be a term that regularizes the length of the path. Otherwise, the search might never end.
§ EXAMPLES AND EXPERIMENTS
We implemented two task types: manipulation with extrinsic dexterity and in-hand manipulation.
In our code, setting up new scenarios and adjusting search parameters only requires modifying one setup.yaml file (https://github.com/XianyiCheng/HiDex/blob/main/data/template_task/setup.yaml).
§.§ Implementation
We use Dart <cit.> as the visualization tool and Bullet <cit.> for collision detection.
We include detailed setup requirements in Appendix <ref> and more implementation details in Appendix <ref>.
§.§.§ Robot Model
We implemented two robot types.
Ball fingertips: Each fingertip is a sphere with workspace limits as kinematic feasibility check. We check for collision of the sphere and the environment.
We use three vertices of an equilateral triangle on the sphere perpendicular to the contact normal to approximate a patch contact.
Dexterous Direct-drive Hand (DDHand): A DDHand has two fingers. Each fingertip has two degrees of freedom for planar translation and is equipped with a horizontal rod <cit.>.
We provide an analytical inverse kinematics model and use a line contact model as the fingertip contact model.
§.§.§ Task Mechanics
We implemented quasi-static, quasi-dynamic, and force closure models. For each timestep, we solve a convex optimization problem to determine whether contact force solutions exist (details in Appendix <ref>).
§.§.§ Feasibility Checks
include task mechanics check, finger relocation force check (during relocation, it needs to satisfy the task mechanics assuming the object is static), kinematic feasibility check, and collision check.
§.§.§ Features and Rewards
We use features including travel distance ratio (total object travel distance divided by the start to goal distance), path size, robot contact change ratio (number of finger contact changes divided by the path length), and grasp centroid distance <cit.>.
Given some feature values as data points, we manually label the reward values, favoring a smaller object traveling distance, fewer contact changes, and better grasp measures. Given the labeled data, we fit a logistic function as the reward function.
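A minimal sketch of this fitting step in Python with scipy; the exact parameterization (a logistic of an affine combination of the features) and the function names are our assumptions for illustration, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_logistic_reward(features, labels):
    """Fit r(f) = 1 / (1 + exp(-(w . f + b))) to manually labeled rewards.
    `features`: (n, d) array of normalized task features; `labels`: rewards in [0, 1]."""
    n, d = features.shape

    def logistic(f, *params):
        w, b = np.asarray(params[:d]), params[d]
        return 1.0 / (1.0 + np.exp(-(f @ w + b)))

    params, _ = curve_fit(logistic, features, labels, p0=np.zeros(d + 1))
    return lambda f: float(logistic(np.atleast_2d(f), *params))
```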
§.§.§ Action Probability
In Level 1, in choosing the next contact mode, the action probability prioritizes the same contact mode as before:
p(s1 = (x, mode),a ) =
0.5, if a = previous mode
0.5/(# modes - 1), otherwise
In choosing the next configuration, we use a uniform distribution among all the children plus the expansion action.
In Level 2, in choosing a timestep to relocate and the contact points to relocate to,
the action probability is calculated using a weight function w(s2, a)
p(s2, a) = w(s2, a)/∑_a' ∈𝒜_sp(s2) w(s2, a')
w(s2, a) encourages previous robot contacts to stay until t_max:
w(s2, a) =
0.5 + 0.5/(t_max - t_c + 1), if t_c = t_max
0.5/(t_max - t_c + 1), otherwise
§.§.§ Value Estimation
We only use value estimation in Level 1. Each node has v_est = 0.1 if any Level 2 search can find a valid robot contact sequence for it.
§.§.§ Search Parameters
We let η = 0.1 for both levels. We set λ to 1 if a positive reward is found, otherwise 0.
§.§ Manipulation with Extrinsic Dexterity
We evaluate four examples in Figure <ref>. Each scenario is implemented with ball fingertip model without workspace limit and quasi-static mechanics. Additional scenarios are demonstrated in the real robot experiments in Section <ref>.
Table <ref> shows the planning statistics from 100 runs for each scenario, using a desktop with an Intel Core i9-10900K 3.70GHz CPU (also used for all other statistics in this paper). As our algorithm is anytime, we let the planner stop after 10 seconds and collect the results.
§.§.§ Ablation of Hierarchical Structure and MCTS
We compare the results with CMGMP <cit.>, which uses a single RRT in searching for object motions and robot contacts.
For all scenarios, our method finds solutions faster. Since the MCTS takes effect only after a solution is found, the main contribution to the speed comes from the hierarchical structure, which decouples the search spaces of object poses and robot contacts and enables faster solution discovery.
Compared to CMGMP, our method also finds solutions with smaller travel distances, less finger relocation, and smaller grasp centroid distance, as guided by the MCTS rewards.
§.§.§ Efficient Robot Contact Planning
Level 2 improves robot contact planning by planning contact relocations, while the common practice plans contacts for each timestep on the trajectory <cit.> (w/o relocation selection).
In this ablation study,
we consider robot contact planning on a straight line cube sliding trajectory with one allowable robot contact. We run the planning under different numbers of total timesteps and candidate object surface points.
As shown in <ref>, the search space size grows exponentially for both methods. The planning time of the common practice also grows exponentially, while our modification takes drastically less time. Our assumption is that this modification aligns better with the nature of manipulation: contact relocations are sparse compared to the timesteps of the entire trajectory.
[Plot data for <ref>: planning time in seconds for the four ablation settings. Ours (relocation selection): 0.02, 0.07, 0.13, 0.14. Baseline (contacts planned at every timestep): 0.16, 1.2, 9.5, 89.5.]
§.§ In-hand Manipulation
A robot can use its fingers to shift and reorient an object to a desired pose within the hand. As the workspace of the fingertips is often limited to a very small range, complex motions are needed. In-hand manipulation demonstrates the effective use of the robot's intrinsic dexterity.
§.§.§ Different Hand Configurations
We test on three hand setups and object models from the YCB dataset <cit.>.
Here we do not consider the collision of finger links. If needed, robot link models should be provided. For inherently safer motions, we require every motion to have force balance or force closure solutions.
Table <ref> shows the statistics for each task with 100 runs of randomized start and goal object poses.
Without any training or tuning, our framework achieves a high planning success rate within seconds. Point sampling on the object surface ensures consistent performance for complex object shapes, demonstrated by the ability to plan contacts inside concave objects, such as the mug and the power drill, as shown in the video.
§.§.§ Add an Auxiliary Goal
While our framework is designed for object poses as goal specifications, here we demonstrate that it is possible to incorporate auxiliary references, like fingertip locations.
We first define d_c, the average robot contact distance to the goal divided by an empirical characteristic length (for example, the object length).
We fit a new reward function that prefers small d_c. We then bias the action probability to sample contact locations that are closer to the goal through w(s2, a) in Equation <ref>:
w(s2, a) =
(0.5 + 0.5/(t_max - t_c + 1)) p_r(d), if t_c = t_max
(0.5/(t_max - t_c + 1)) p_r(d), otherwise
We compare planners with and without additional goal fingertip location for 100 reorientation trials with a hammer and a mug using a 5-finger hand. Each trial has a randomized start pose, goal pose, and goal fingertip locations. As Table <ref> shows, the planner with the additional goal specification results in smaller “Final finger distance”, but more finger relocations are needed. Other features remain less affected.
Due to potential conflicts from the primary goal and trade-offs from other reward terms,
there is no guarantee to achieve good alignment with the auxiliary goal.
§.§ Robot Experiments
We test 8 new scenarios on a dexterous direct drive hand (DDHand) <cit.> and a configurable array of soft delta robots (delta array) <cit.>. For both systems, we perform open-loop execution (no object pose estimation or contact feedback). Given a planned fingertip trajectory, we compute the robot joint trajectory using inverse kinematics and execute it with joint position control. The object start position errors are calibrated within 1mm for the DDHand and 2cm for the delta array.
Figure <ref> and Figure <ref> show the keyframes. The full recordings are in the supplementary video.
§.§.§ DDHand
We show that the planner enables the DDHand to use intrinsic and extrinsic dexterity.
For example, in occluded grasp, the fixed green block and the table prevent a direct grasp. The DDHand uses three steps: pivot the object on the corner; use one finger to hold the object; move the other finger to the other side to form a grasp.
In upward peg-in-hole, without a grasp, gravity will cause the peg to fall. But the walls of the hole prevent the fingers from getting in while grasping the object.
The DDHand uses one finger to press the peg against the hole, using the wall as an external finger to grasp. The other robot finger then relocates to push the peg from the bottom. The pressing finger also releases to create space for the peg to be pushed in.
§.§.§ Delta Array
We test on 4 scenarios: planar passing of a cuboid with 2 or 6 separated (no workspace overlap) fingers, planar reorientation on a table with 5 or 6 adjacent (have workspace overlap) fingers. Due to the small workspace of a delta robot (a cylinder with a 2 cm radius and 6cm height), many contact changes are required to accomplish the tasks.
§ DISCUSSION
This paper proposes a hierarchical framework for planning dexterous robotic manipulation.
It facilitates efficient searches across complex spaces, generation of diverse manipulation skills, utilization of expert knowledge, and adaptability for various scenarios.
This method has potential to automate wide-ranging manipulation applications, such as functional grasps, caging, forceful manipulation, and mobile and aerial manipulation.
Our framework design allows future extensions to incorporate trajectory optimization and reinforcement learning.
By adding dynamic trajectory optimization <cit.> in Level 3, we could potentially plan dynamic manipulation and smooth object trajectories.
Incorporating learning methods is also a future direction. Our method obtains diverse skills with simple hand-coded action policies and rewards.
Can past planning experience be leveraged to learn universal contact policies for general manipulation and enhance and be enhanced by reinforcement learning methods?
There are two major limitations when evaluating applicability.
First, fast IK methods for the robot fingertips are required, which is not straightforward for tendon-driven or soft robot hands.
Second, this framework uses fingertips only, meaning that other parts of the robot body cannot be used for manipulation. Planning finger rolling is also not supported. We plan to incorporate whole-hand manipulation concepts in future developments to fully leverage robot dexterity.
§ SETTING UP NEW SCENARIOS
In this section, we provide an overview of what is required when setting up new scenarios. Please check our code and Appendix <ref> for the actual implementation.
§.§ Applicability
This framework can be considered for the tasks of manipulating a single rigid body object in a rigid environment. Environment components must be fixed and not movable. It can also be used when there is no environment component (in-hand manipulation). We need known models of the object, the environment, and the robot.
The robot used to manipulate the object needs to have known collision models, and forward and inverse kinematics.
The only parts that can be used to manipulate the object are the defined “fingertips” on the robot.
§.§ Setup a new robot/hand
Setting up a new robot is the most complicated part. Specifically, for the implementation in our C++ code, a new class needs to be written that inherits from a pre-defined abstract class RobotTemplate. The user needs to implement some specific pure virtual functions that cover the following aspects.
§.§.§ Contact force models for fingertips
We use the point contact model for kinematics. However, as the force model for point contact might be too limited, we allow the use of other contact force models.
The contact force models that currently exist in our implementation includes:
* Point contact
* Patch contact: we first approximate the fingertips using spheres centered at the point contact locations. The radius of the spheres should approximate the radius of the contact patch for each fingertip. We approximate the patch contact using three point contacts at vertices of an equilateral triangle that is perpendicular to the contact normal and on the sphere.
* Line contact: we approximate the line contact model by two point contacts at the two ends of the line segment.
§.§.§ Forward and inverse kinematics for fingertips
The users need to provide the forward and inverse kinematics for the fingertips.
Given the FK and IK models, we precompute the workspace of each fingertip. For general robot hands, we first sample joint angles to get fingertip points in the workspace through forward kinematics, and then compute their convex hulls. While hands might differ, we estimate that this process takes on the order of seconds (with a C++ implementation).
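A minimal Python/scipy sketch of this precomputation; the function names, the joint-limit sampling, and the half-space membership test are illustrative assumptions, not our C++ implementation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def precompute_workspace(forward_kinematics, joint_lower, joint_upper,
                         n_samples=10000, rng=None):
    """Sample joint vectors within the given limits, map them to fingertip
    positions via `forward_kinematics` (R^dof -> R^3), and return the convex
    hull of the sampled positions as the approximate workspace."""
    rng = rng or np.random.default_rng()
    q = rng.uniform(joint_lower, joint_upper, size=(n_samples, len(joint_lower)))
    points = np.array([forward_kinematics(qi) for qi in q])
    return ConvexHull(points)

def in_workspace(hull, p, tol=1e-9):
    """Check whether point p satisfies all half-space inequalities of the hull."""
    return bool(np.all(hull.equations[:, :3] @ p + hull.equations[:, 3] <= tol))
```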
§.§.§ Robot collision model
The users need to provide the collision model of the robot or of the fingertips. If it is unlikely for the robot links to collide with the object or the environment, it is okay to only provide the collision shapes for the fingertips, which makes the computation much faster.
Otherwise, the user could simply provide a robot URDF model.
§.§.§ Contact relocation planner (optional)
A contact relocation planner is required for checking whether a collision-free path exists for a finger to relocate to another contact location.
§.§.§ Contact sampling on the object surface (optional)
It is best if each fingertip is relatively independent on the kinematic side. If not, our random sampling of robot contacts on the object surface might have a very high rejection rate (> 90%). In this case, we need the user to provide a method for the specific robot in order to sample robot contacts on the object surface more efficiently.
§.§.§ Trajectory optimizer (optional)
For the robots that are under-actuated (like wheeled robots), the users need to provide Level 3 a trajectory optimizer that finds feasible object states, robot states, and robot controls given Level 2 outputs as warm-start trajectories.
§.§ Setup a new task type
After setting up the new robot, we need to enable the robot to do a certain type of tasks. Two major things to consider are task mechanics and task parameters for planning.
§.§.§ Task Mechanics
Task mechanics include the specific requirements and dynamical model required by the task. Do we have to fully exploit the dynamic property of the system? If yes, we need to have a good trajectory optimization algorithm for the manipulation system in Level 3 to ensure the solutions are feasible. If the task dynamics do not involve the integration of velocity and the robot is fully actuated, it is not necessary to provide a trajectory optimization method in Level 3. In these cases, the user only needs to write a function that solves a one-step optimization problem.
Examples include quasi-static, quasi-dynamic, closure methods, planar pushing, etc.
§.§.§ Design choices
A new task type requires several design choices to be made and some search parameters to be tuned. Once the choices are made, changing environments and objects within the same task type should not require more tuning. In our experience, making these design choices and tuning the parameters is relatively low-effort. We have found that the planner is not sensitive to the specific numerical values of the parameters.
The design choices include task features, action probability design for Level 1 and 2, reward design, and value estimation design for Level 1 and 2.
Task features are used in the action probability and reward. Basic features are path length in MCTS, number of robot contact relocations, and object travel distance. Task-dependent features like grasp measures or environment contact changes can be added to encourage specific behaviors like better grasps and less environment contact switches. For generality, it is important to normalize the features by ensuring similar values for desired behaviors across different environments and objects.
There are three action probability functions we need to define: (1) selecting a contact mode in Level 1, (2) selecting a child (configuration node) for a mode node in Level 1, and (3) selecting (time to relocate, contacts to relocate to) in Level 2. For (1), we often encourage the use of the same contact mode as the previous one. For (2), we currently mostly use a uniform distribution. For (3), we would like to encourage relocating when the contacts are not feasible at the next timestep. However, the definition of these probabilities is entirely up to the user.
To design the reward function, we use a simple approach that requires no tuning.
Given some feature values as data points, we first manually label their reward values between 0 to 1 through human intuition. Next, we fit a logistic function to these data points as the reward function.
Manual value estimation is very flexible. The value estimation in our method is often used to encourage the search to visit a node that has been visited but has not yet found any positive reward. For example, on the way to the goal pose, if an object pose is reachable through a sequence of contacts (checked by Level 2), we can assign 0.1 as its value estimation. Our design principle is to give a small number to any node that is more likely to find a solution than others.
§.§.§ Parameters
The search parameters include the MCTS exploration rates η_1, η_2 in Levels 1 and 2, and the adaptive parameter λ for value estimation.
In all of our experiments, we let η_1 = 0.1, η_2 = 0.1. We let λ = 0 if a positive reward has not been found and λ = 1 otherwise. While tuning the parameters may slightly improve the performance for specific tasks, we suspect that most of the time this is not necessary. However, λ might need some tuning once better value estimations are available, such as learned functions.
§.§ Setup a new environment
If using our preset robots and tasks, the users can easily set up new environments and objects in one file called setup.yaml.
When setting up a model for a new environment, it is usually adequate to use primitive geometries such as cuboids, cylinders, and spheres in the simulation environment.
The users need to specify the shape parameters and the locations of the primitive shapes.
§.§ Setup a new object
For a new object, the users need to provide the object mesh or specify the primitive shape.
Surface points will be automatically uniformly sampled on the mesh.
Each point (p,n) is represented by its location (p ∈ R^3) and its contact normal (n ∈ S^2) in the object frame.
It is usually sufficient to sample about 100 points. The computation is usually in milliseconds.
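A minimal numpy sketch of area-weighted uniform surface sampling; the function name and the plain triangle-mesh representation are illustrative assumptions, and a mesh library could equally be used.

```python
import numpy as np

def sample_surface_points(vertices, faces, n=100, rng=None):
    """Uniformly sample n points on a triangle mesh.  `vertices` is (V, 3),
    `faces` is (F, 3) vertex indices.  Returns point locations and the
    corresponding face normals (p, n) in the object frame."""
    rng = rng or np.random.default_rng()
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(b - a, c - a)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    # pick triangles proportionally to area, then uniform barycentric points
    idx = rng.choice(len(faces), size=n, p=areas / areas.sum())
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    points = a[idx] + u[:, None] * (b[idx] - a[idx]) + v[:, None] * (c[idx] - a[idx])
    return points, normals[idx]
```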
The user also needs to provide the object mass, object inertia, and the friction coefficients for robot-object and environment-object contacts.
For each new object and environment, the RRT parameters might need some changes, including the range of object positions, the goal-biased sampling probability, the unit extend length, and the rotation weight in the distance calculation. The RRT parameters do not require careful tuning, as long as they roughly reflect the task requirements. For example, setting the rotation weight to 1 is reasonable if the object bounding box dimensions are between 0.1 and 10 and rotation and translation are roughly of equal importance. If the object orientation is not important at all, a weight of 0.01 to 0.1 works well. The unit extend length should be larger if the object start and goal are very far from each other, otherwise the planner will be slow; it should be smaller if the user expects many different maneuvers to be required for the task.
Extra note: to avoid numerical issues, we usually scale the whole system such that the average length of the object bounding box is in the range of 1 - 10.
§ EXPERIMENT DETAILS
This section includes the details of the experiments in this paper. The first two are pure planning experiments. The latter two are robot experiments.
§.§ Manipulation with Environment Interactions
§.§.§ Robot model
We consider the robots as free-flying balls, meaning that we do not check for kinematic feasibility but do check for collision of the balls and the environment.
For the contact force model, we use the patch contact model described in Appendix <ref>.
§.§.§ Task mechanics
We use quasi-static or quasi-dynamic models.
For each timestep, we solve a convex programming problem to find if there exists a solution for contact force λ_c to satisfy the force conditions. The problem is formulated as follows:
min_λ ϵλ^T λ
s.t. quasi-static or quasi-dynamic condition
where ϵλ^T λ is a regularization term on the contact forces.
The quasi-static condition requires the object to be under static force balance for a selected contact mode
[ G_1 h_1, G_2 h_2, … ]·[ λ_1, λ_2, … ]^T + F_external = 0
where [ λ_1, λ_2, … ]^T are the magnitudes of forces along active contact force directions [ h_1, h_2, … ]^T determined by contact modes. [ G_1, G_2, … ]^T are the contact grasp maps.
F_external includes other forces on the object, such as gravity and other applied forces.
Quasidynamic assumption relaxes the requirement for objects to be in force balance, allowing short periods of dynamic motions. We assume accelerations do not integrate into significant velocities. In numerical integration, the object velocity from the previous timestep is 0. The equations of motions become:
M_o v̇^o = [ G_1 h_1, G_2 h_2, … ]·[ λ_1, λ_2, … ]^T + F_external
In discrete time, the object acceleration v̇^o can be written as v^o/h, where h is the step size. The object velocity v^o is computed by solving for the constrained velocity from the current pose to the goal pose under a contact mode.
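A minimal sketch of the per-timestep force check in Python with cvxpy. Stacking the columns G_k h_k into one matrix and requiring non-negative magnitudes along the active force directions are our simplifying assumptions; the quasi-dynamic variant only changes the right-hand side of the equality constraint.

```python
import cvxpy as cp

def contact_force_check(G_h, F_external, eps=1e-4, rhs=None):
    """Check whether contact force magnitudes lambda exist for one timestep.
    G_h: (6, n) matrix whose columns are G_k h_k for the active force directions
    of the chosen contact mode; F_external: external wrench (e.g. gravity);
    rhs: 0 for the quasi-static condition, or M_o v^o / h for quasi-dynamic."""
    n = G_h.shape[1]
    lam = cp.Variable(n, nonneg=True)          # force magnitudes along h_k
    target = 0 if rhs is None else rhs
    constraints = [G_h @ lam + F_external == target]
    problem = cp.Problem(cp.Minimize(eps * cp.sum_squares(lam)), constraints)
    problem.solve()
    return problem.status == cp.OPTIMAL, lam.value
```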
§.§.§ Feasibility Checks
* Task mechanics check: is passed if there exist a solution for <ref>.
* Finger relocation check: during relocation, the non-relocating robot contacts and environment contacts must also satisfy the task mechanics, assuming the object has zero velocity.
* Collision check: the spheres must not collide with the environment.
§.§.§ Features
We manually designed the features, as shown in Table <ref>.
§.§.§ Action Probability
In Level 1,
in choosing the next contact mode, we design the action probability to prioritize choosing the contact mode the same as before:
p(s1 = (x, mode),a ) =
0.5, if a = previous mode
0.5/(# modes - 1), otherwise
In Level 1, in choosing the next configuration, we let p(s1 = (x, config), a) be a uniform distribution over all the children and explore-new.
In Level 2, in choosing a timestep to relocate and the contact points to relocate,
the action probability is calculated using a weight function w(s2, a) designed for each action in 𝒜_sp(s2):
p(s2, a) = w(s2, a)/∑_a' ∈𝒜_sp(s2) w(s2, a')
The manually designed weight function w(s2, a) prefers to let the previous robot contacts stay as long as possible:
w(s2, a) =
0.5 + 0.5/(t_max - t_c + 1), if t_c = t_max
0.5/(t_max - t_c + 1), otherwise
§.§.§ Reward Design
We use all the features in Table <ref> and follow the logistic function fitting procedure as described in Appendix <ref>.
§.§.§ Value Estimation
We only use value estimation for Level 1 nodes. Each node has v_est = 0.1 if any subsequent Level 2 search is able to proceed past that node.
For all Level 2 nodes, the value estimation is simply zero.
§.§.§ Search Parameters
In both Level 1 and Level 2, we let the exploration rate η_1, η_2 = 0.1.
Since we only have value estimation for Level 1, there is a single adaptive parameter λ, used in Level 1 only.
When no reward > 0 has been found, λ = 0. After any positive reward is observed, λ = 1.
§.§ In-hand Manipulation
§.§.§ Robot model
The setup is the same as Appendix <ref>. The only difference is that we now have a workspace limit for each finger.
§.§.§ Task mechanics
We use quasi-static models (as described in Appendix <ref>) or force closure <cit.>.
§.§.§ Feasibility Check
includes workspace limit check for fingertips, task mechanics check, and finger relocation check.
Features, action probability, reward, value estimation, and search parameters are the same as the Manipulation with Environment Interactions task.
§.§ Robot Experiment: Dexterous DDHand
§.§.§ Dexterous DDHand Overview
The Dexterous DDHand is a direct-drive hand with 4 DoFs. It has two fingers, and each finger has 2 DoFs for planar translational motions. Each fingertip is a horizontal rod; as a result, we use the two endpoints of the rod to approximate the line contact.
We provide the planner with the forward and inverse kinematics of the hand. We also provide a contact relocation planner, which follows the object surface (5mm above the object surface) and goes to the new contact location.
§.§.§ Feasibility Checks
include inverse kinematics check, collision check, finger relocation force check, finger relocation path check (are there collisions on the relocation path), and task mechanics check.
Task mechanics, features, action probability, reward, value estimation, and search parameters are the same as the Manipulation with Extrinsic Dexterity task.
§.§.§ Execution
Given a planned fingertip trajectory, we compute the robot joint trajectory using inverse kinematics and execute it with robot joint position control.
In order to ensure some contact force, we shift the end-effector trajectory in the environment contact normal direction by
Δposition = Desired contact force/Stiffness
where the stiffness can be tuned due to the direct-drive property.
The execution was conducted in an open-loop manner, meaning that there was no object pose estimation or force control involved.
The system was calibrated to ensure that the initial object pose errors are kept within a tolerance of 1 mm. We chose not to provide a formal success rate in our report since this number lacks significance due to its dependency on the accuracy of our manual calibration process. However, as a point of reference, with an initial pose precision of 1 mm, we estimate a success rate of approximately 4 out of 5 attempts.
§.§ Robot Experiment: Delta Array
§.§.§ Delta Array System Overview
The array of soft delta robots is a research platform for the development of multi-robot cooperative dexterous manipulation skills. The system comprises 64 soft linear delta robots arranged in an 8x8 hexagonal tessellating grid. Each 3D printed soft delta linkage is actuated by 3 linear actuators, giving 3 degrees of translational freedom with a workspace of 3.5 cm radius in the X and Y axes and 10 cm in the Z axis. The links are compliant, with high elasticity and low hysteresis, and a soft 3D printed fingertip-like end-effector is attached to each linkage. We simplify the workspace of each delta robot to a cylinder with a 2.5 cm radius and 6 cm height.
We provide the forward and inverse kinematic models to the planner. While running the planner, the IK check is simplified to a workspace limit check (if the contact point is in the cylinder workspace). We only perform collision checks for the fingertips, not the links. While doing the actual execution of the plans, we use inverse kinematics to calculate the robot joint trajectory from the contact point trajectory. In order to ensure some contact force, we shift the end-effector trajectory in the same way as <ref>, where the stiffness is manually calibrated.
We relocate contacts by letting the delta robot leave the current contact along the contact normal direction, travel around the edge of the workspace, and approach the new contact along its normal direction.
The entire plan is executed in open-loop. Although delta robots may not offer a high level of accuracy and repeatability, their passive compliance allows for minor deviations from the planned trajectory to be accommodated.
§.§.§ Feasibility Check
The feasibility checks include a workspace limit check, a collision check, and a task mechanics check.
Task mechanics, features, action probability, reward, value estimation, and search parameters are the same as for the Manipulation with Extrinsic Dexterity task.
§ RRT FOR ROLLOUT
The RRT process is summarized in Algorithm <ref>.
The inputs are the current object pose x_current, selected contact mode m_selected, and the object goal pose x_goal. If it can find a solution, it outputs a trajectory from x_current to x_goal. Every point on the trajectory is (x, m), where x ∈SE(3) is an object pose, m is an environment contact mode.
At each iteration, sample-random-object-pose samples a new object pose x_extend∈SE(3).
We find the nearest neighbor x_near of x_extend, and attempt to extend it towards x_extend (line 5 - 15, Algorithm <ref>). Each extension is performed under the guidance of a contact mode. If x_near happens to be x_current, we let the contact mode be m_selected chosen by Level 1 MCTS. Otherwise, the function select-contact-mode will select the contact mode(s) to perform the extension under. The procedure extend-with-contact-mode extends x_near towards x_extend under the guidance of a selected contact mode m through projected forward integration.
Next, we explain all the functions in detail.
sample-random-object-pose samples a new object pose x_extend∈SE(3). The probability of x_extend being the goal pose is p_sample, while the probability of it being a random object pose in SE(3) is 1 - p_sample. We can specify the range limit for the random sample of the object pose.
nearest-neighbor finds the closest object pose to x_extend in the tree. The distance between two object poses is computed as w_t * d_t + w_r * d_r. w_t and w_r are the weights for translation and rotation. d_t is the Euclidean distance between their locations, and d_r is the angle difference between the two rotations. A simple way to compute d_r is to first compute the rotation between the two poses, R_diff = R_1R_2^T, convert R_diff to the axis-angle representation, and let d_r be equal to the angle. In general, the users need to adjust the weights according to how important object orientation and position are in the task. Not much tuning is needed.
In our experiment, we scale the object sizes such that the average length of their bounding boxes is about 1 to 10. In this case, one can set the weights using this rule: normal (1), not very important (0.5), not important at all (0.1).
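As a minimal sketch of this distance (assuming rotation matrices as inputs; not the authors' code), one could write:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_distance(p1, R1, p2, R2, w_t=1.0, w_r=0.5):
    """Weighted SE(3) distance used by nearest-neighbor: w_t*d_t + w_r*d_r."""
    d_t = np.linalg.norm(np.asarray(p1) - np.asarray(p2))      # Euclidean position distance
    R_diff = R.from_matrix(np.asarray(R1) @ np.asarray(R2).T)  # relative rotation R1 R2^T
    d_r = np.linalg.norm(R_diff.as_rotvec())                   # its angle, via axis-angle
    return w_t * d_t + w_r * d_r
```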
select-contact-mode first enumerates all contacting-separating contact modes, then filters out infeasible mode through extend-feasibility-check, and finally returns the set of all feasible modes.
extend-feasibility-check has two options in implementation. The first option involves storing the robot contacts during the search process. If the current robot contacts pass the feasibility check in Section <ref>, the check is deemed successful. However, if they fail, the check can still be considered successful if a sampled set of feasible robot contacts can be generated, ensuring a feasible transition from the current contacts. The current contacts are then updated accordingly.
The second option finds whether there exists a set of robot contacts that satisfies the feasibility check. Unlike the first option, this method does not retain information about robot contacts. Instead, it is considered successful if it can sample any set of robot contacts that passes the feasibility check. The second option is more relaxed as it does not take into account previous robot contacts and transitions.
extend-with-contact-mode extends x_near towards x_extend as much as possible under constraints posted by contact mode m.
velocity-under-mode solves for the object velocity that gets x_near as close as possible to x_extend with respect to the velocity constraints introduced by m. We then integrate the object pose for a small step in the direction of the constrained object velocity, and project the new object pose back to the contacts that need to be maintained.
In project-to-contacts-maintained, the contact mode m needs to be maintained. We first perform contact detection on the object and then project the object pose back to where the contacts maintained in m have zero signed distances.
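Putting the pieces together, the main loop of the rollout RRT can be sketched as follows. This is only an illustration of the procedure described above: the helper callables (sample_pose, nearest, select_modes, extend_with_mode, distance) stand for the functions just discussed and are assumed to be provided; this is not the authors' implementation.

```python
def rrt_rollout(x_current, m_selected, x_goal,
                sample_pose, nearest, select_modes, extend_with_mode, distance,
                goal_tol=0.05, max_iters=1000):
    """Sketch of the rollout RRT: grow a tree of object poses guided by contact modes."""
    parents = {x_current: None}                    # node -> (parent pose, contact mode)
    for _ in range(max_iters):
        x_extend = sample_pose(x_goal)             # returns x_goal with probability p_sample
        x_near = nearest(list(parents), x_extend)
        # use the MCTS-selected mode at the root, otherwise enumerate feasible modes
        modes = [m_selected] if x_near == x_current else select_modes(x_near)
        for m in modes:
            x_new = extend_with_mode(x_near, x_extend, m)   # projected forward integration
            if x_new is None:
                continue
            parents[x_new] = (x_near, m)
            if distance(x_new, x_goal) < goal_tol:
                path, node = [], x_new             # backtrace the (pose, mode) trajectory
                while node is not None:
                    entry = parents[node]
                    path.append((node, entry[1] if entry else m_selected))
                    node = entry[0] if entry else None
                return list(reversed(path))
    return None                                    # no solution found within the budget
```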
|
http://arxiv.org/abs/2307.01729v1
|
20230704135741
|
The role of thermal fluctuations in sound propagation in a two-dimensional Fermi gas
|
[
"Krzysztof Gawryluk",
"Mirosław Brewczyk"
] |
cond-mat.quant-gas
|
[
"cond-mat.quant-gas"
] |
We numerically study the transport properties of a two-dimensional Fermi gas in the weakly and strongly interacting regimes, in the range of temperatures close to the transition to a superfluid phase. For that we excite sound waves in a fermionic mixture by using the phase imprinting technique, follow their evolution, and finally determine both their speed and attenuation. Our formalism incorporates thermal fluctuations via the grand canonical ensemble description and with the help of the Metropolis algorithm. From the numerical simulations we extract the temperature dependence of the sound velocity and diffusivity as well as their dependence on the interaction strength. We emphasize the role of virtual vortex-antivortex pair creation in the process of sound dissipation.
Mechanism for sound dissipation in a two-dimensional degenerate Fermi gas
Krzysztof Gawryluk and Mirosław Brewczyk
Received X, 2023; accepted X, 2023
§ INTRODUCTION
Exciting sound waves within a substance and studying their propagation allows for the exploration of its equilibrium and dynamical properties.
The characteristics like the compressibility and viscosity become available via measuring the speed of sound waves and their attenuation.
The propagation of sound in harmonically trapped and homogeneous systems of ultracold bosonic and fermionic atomic gases has been studied experimentally by many groups <cit.>. In a recent work on a uniform two-dimensional weakly interacting Bose gas a collisionless sound, moving with velocity close to the Bogoliubov sound speed, was observed below and above the critical temperature for the Berezinskii-Kosterlitz-Thouless (BKT) transition <cit.>. This phenomenon was studied in and confirmed by several theoretical papers <cit.>. In Ref. <cit.>, the first and the second sound modes were observed for the first time in 2D Bose gas. Based on temperature dependence of sound speeds the superfluid density of bosonic gas was determined. The superfluid density revealed a universal jump at the critical temperature, in accordance with the BKT theory and as already proved numerically for a uniform two-dimensional atomic Bose gas in Ref. <cit.>. A single density wave was observed also in a strongly interacting atomic Fermi gas at temperatures below the superfluid transition temperature <cit.>. The damping of the sound mode was measured and it was found that the damping is minimal in the strongly interacting regime. For the unitary regime the diffusivity approaches the universal quantum limit which is ħ/m, where m is the mass of an atom. The results of Ref. <cit.> demonstrate that the unitary regime distinguishes from the BCS/BEC sides of the BEC-BCS crossover with respect to the dissipation properties. The experimental data presented in <cit.> show that the diffusivity achieves a universal quantum limit also in a three-dimensional homogeneous atomic Fermi gas at the unitarity and below the superfluid transition temperature. When the temperature gets higher the diffusivity increases monotonically.
In a 3D superfluid quantized vortex lines cost a macroscopic energy which is proportional to their length. Due to this, the thermal creation of vortex lines is not possible and vortices can be ignored in all thermodynamic considerations. The situation is significantly different in 2D where vortices can be thermally excited. Even in 2D, however, vortices behave differently from ordinary elementary excitations, since their energy depends logarithmically on the area occupied by the system. As a result, vortices can be thermally created only at temperatures larger than a critical one, T_BKT.
In this paper we focus on the sound propagation in two-dimensional Fermi-Fermi mixtures at low temperatures (but still above the superfluid transition temperature) in a weakly and strongly interacting regimes of the BEC-BCS crossover. We find both the speed and the damping of sound waves. The damping turns out to be minimal in the unitary regime and the diffusivity gets values close to the universal quantum limit. Opposite is true on the BCS/BEC sides of the crossover where the diffusivity becomes very large.
The paper is organized as follows. In Sec. <ref> we introduce the model of a two-component two-dimensional degenerate Fermi gas at temperatures above the transition to a superfluid phase. Then (Sec. <ref>) we describe the numerical experiment in which we excite the sound waves in the system, allow them to propagate, and at the end determine their speed and attenuation. In Sec. <ref> we reveal the mechanism of sound dissipation which turns out to be related to creation of virtual vortex-antivortex pairs in the gas. Finally, we conclude in Sec. <ref>.
§ METHOD
We start with a simple description of a two-component fermionic mixture in terms of semiclassical distribution functions, f_p^±(r), already detailed in Ref. <cit.>. Here, indices "+" and "-" distinguish components and the equilibrium, at a given temperature T and a chemical potential μ, semiclassical distributions are f_p^±(r)=(exp[ε_p^±(r)-μ_±]/k_B T+1)^-1. The particle energy ε_p^±(r) at position r includes the kinetic energy as well as the potential energy related to external trapping and interactions. According to the meaning of semiclassical distribution functions the integrals ∫ f_p^±(r) dp and ∫ (p^2/2m) f_p^±(r) dp represent the atomic densities and atomic local motion (intrinsic) energies in both components. We further assume that we have a two-dimensional mixture of the same species fermionic atoms confined in a box potential. The atomic and the local motion energy densities then are n_±(r)=(1/λ^2) ln(1+z_±(r)) and ε_±(r)=(k_B T/λ^2) f_2 (z_±(r)), where λ=√(2πħ^2/m k_B T) is the thermal wavelength, z_±(r) are the extended fugacities, and f_2(z) is one of the standard functions for fermions. The free energy functional (which, according to Ref. <cit.>, substitutes the energy functional in the case of nonzero temperatures) of a whole system can be written as
F_tot(n_+,n_-,v_+,v_-) = ∫[ f_+(r) + f_-(r) ] dr
+ ∫( n_+ 1/2 m v_+^2 + n_- 1/2 m v_-^2 ) dr
+ V_int(n_+,n_-) .
The energy depends on atomic densities, n_±(r), and macroscopic velocity fields, v_±(r), of both fermionic gases. The densities of a local free energy of a two-dimensional (2D) gas are
f_±(r) = k_B T/λ^2[ (lnz_±) ln(1+z_±) - f_2(z_±) ] ,
where z_±(r)=exp[(μ_± - δ V_int/δ n_±)/k_B T]. The second integral in Eq. (<ref>) represents the energy of a macroscopic flow while the last one, V_int(n_+,n_-), describes the interaction between the components.
It is convenient to represent the density and velocity fields by a single complex field ψ_±(r) introduced via inverse Madelung transformation. The density and velocity fields are related to the pseudo-wave functions ψ_±(r) as n_±=|ψ_±|^2 and v_±=(ħ/m) ∇ϕ_±, where ϕ_±(r) are the phases of functions ψ_±(r). Then the density of a macroscopic flow energy can be expressed in the following way
-ħ^2/2mψ_±^* ∇^2 ψ_± - ħ^2/2m
(∇ |ψ_±|)^2 = n_±1/2 m v_±^2
and the functional Eq. (<ref>) is transformed to
F_tot(ψ_±,∇ψ_±) = ∫[ f_+(r) + f_-(r) ] dr
+ ∫[-ħ^2/2mψ_+^* ∇^2 ψ_+ - ħ^2/2m (∇ |ψ_+|)^2 ] dr
+ ∫[-ħ^2/2mψ_-^* ∇^2 ψ_- - ħ^2/2m (∇ |ψ_-|)^2 ] dr
+ V_int(n_+,n_-) .
The equations of motion corresponding to the functional Eq. (<ref>) are found in a usual way as
i ħ (∂/∂ t) ψ_±( r,t) = (δ/δψ_±^*)
F_tot(ψ_±, ∇ψ_±) and are given by
i ħ ∂ψ_±/∂ t = [ -ħ^2/2m ∇^2 + ħ^2/2m (∇^2 |ψ_±|)/|ψ_±| + k_B T ln z_± + δ V_int/δ n_± ] ψ_± .
While evolving pseudo-wave functions according to Eqs. (<ref>), the extended fugacities z_±(r) are found from the self-consistency condition n_±=(1/λ^2) ln(1+z_±) with n_±=|ψ_±|^2.
The interaction between components is treated within the beyond mean-field approach, the lowest-order constrained variational method <cit.>. The interaction energy of a uniform balanced system of density n (per component) is calculated as E_int/V = (ħ^2/m) n^2 A(η), where A(η) is a function of dimensionless parameter η = ln(k_F a_2D) with k_F=√(4π n) being the Fermi momentum – see Appendix <ref> for a derivation of the interaction energy term within the lowest-order constrained variational method (LOCV). The numerically determined function A(η) for two lowest energy branches (attractive and repulsive ones) is plotted in Fig. <ref> in Appendix <ref> for the parameter η scanning the BCS to BEC crossover. Since the relative population of two lowest branches is exp[-(A_2(η)-A_1(η)) T_F/(2π T)], where T_F is the Fermi temperature and A_1(η) (A_2(η)) determines the pair energy in the lower (higher) branch, it is clear that mostly lower branch is occupied for the low temperatures we consider (i.e., T/T_F<0.2).
Now we introduce thermal fluctuations into our description. The grand canonical ensemble of pseudo-fields ψ_± fulfilling Eqs. (<ref>) can be obtained from the free energy functional Eq. (<ref>) by using the Metropolis algorithm <cit.>. At a given temperature T and a chemical potential μ, the probability of having a system with N particles and the free energy equal to F(N), calculated within the grand canonical ensemble, is e^μ N /k_B T e^-F(N)/k_B T. For our system the free energy F(N) is given by Eq. (<ref>), where N is the total number of atoms. The fields ψ_± are determined by expanding them in a particular set of basis functions. In our case a uniform spatial grid with a given step Δ is used (i.e. both fields are expanded in a set of Dirac delta functions). The maximal energy available on a grid with spatial step Δ equals ħ^2 (π/Δ)^2/2m. This cutoff energy, introduced by the discretization, should allow including all energy-relevant many-body states while generating the grand canonical ensemble. For a noninteracting mixture of Fermi gases it is meaningful to take the cutoff energy, due to thermal broadening of the Fermi-Dirac distribution, as E_cut=E_F + α k_B T, where E_F=2π (ħ^2/m) n is the Fermi energy and α≳ 1 (we take α=5). For interacting gases we include the interaction-related chemical potential and modify the cutoff as E_cut→ E_cut + δ V_int/δ n. Fig. <ref> in Appendix <ref> shows the cutoff energy as a function of η = ln(k_F a_2D) for a mixture consisting of ⟨ N_±⟩ =1500 atoms at the temperature T/T_F=0.2.
Fig. <ref> suggests that our description of fermionic mixture breaks down for negative values of interaction parameter η when the system enters the BEC phase of the crossover and the mixture should be described rather in terms of composite bosons. However, an alternative way to achieve the proper cutoff energy exists. Since the total chemical potential is μ_tot(η) ≈ E_F + μ_int(η) (and μ_int=δ V_int/δ n, shown in the inset of Fig. <ref>), we search for the appropriate grid via generating the grand canonical ensemble of pseudo-fields ψ_± at the condition ⟨ N_±⟩ =1500 for each interaction strength. The cutoff energy found in this way remains close to that obtained as described earlier for any interaction strength provided the absolute value |E_F + α k_B T + δ V_int/δ n| is used as the definition for a cutoff energy.
§ SOUND WAVES PROPAGATION
To excite sound waves we disturb the mixture of fermionic atoms with a protocol similar to that applied in the experimental work of Ref. <cit.>. Namely, we imprint a phase profile on the pseudo-wave functions, ψ_±(x,y) →ψ_±(x,y) exp[i ϕ(x)], along the x direction, where ϕ(x)=ϕ_0 exp[-(x-0.3 L)^2/σ^2] (see Fig. <ref>). Here, ϕ_0 determines the strength of the phase disturbance, L is the size of the two-dimensional atomic system, and σ=L/13 gives the width of the region with imprinted phase. This disturbance results in an increase of the total energy of the system, Δ E=E_imp-E_ini, which depends on the particular value of ϕ_0 and on the temperature T. For example, Δ E=16 ħ^2/m L^2 for the smallest perturbation ϕ_0=0.27 π, Δ E=28 ħ^2/m L^2 for ϕ_0=0.8 π, and Δ E=50 ħ^2/m L^2 for the strongest perturbation considered, ϕ_0=1.3 π, for T=0.15 T_F and η=1 (here, E_ini=2151 ħ^2/m L^2). Note that we use a profile as in Fig. <ref> with a symmetric double phase jump (we work with periodic boundary conditions, so care about phase continuity must be taken). Both phase jumps convert into density perturbations and result in the appearance of dark and white soliton-like structures traveling in opposite directions, see Refs. <cit.>. In our case the density structures are hidden inside the thermal noise, so they are hardly detectable from the density profiles. To overcome this problem we repeat the phase imprinting procedure on many grand canonical microstates (typically 200) and average the density over all realizations. As a result we obtain a clear picture of the density response of the system to the initial perturbation. To see the density time evolution on a single plot we also integrate the two-dimensional density along one spatial coordinate (perpendicular to the direction of the sound wave motion), obtaining time-dependent density pictures like in Fig. <ref>.
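A minimal sketch of the imprinting step (single Gaussian phase bump as in the formula above; the periodic-boundary bookkeeping for the double jump is omitted, and this is not the authors' code) could look like:

```python
import numpy as np

def imprint_phase(psi, x, L, phi0):
    """Multiply the pseudo-wave function psi(x, y) by exp[i*phi(x)],
    with phi(x) = phi0 * exp[-(x - 0.3 L)^2 / sigma^2] and sigma = L/13."""
    sigma = L / 13.0
    phi = phi0 * np.exp(-((x - 0.3 * L) ** 2) / sigma**2)
    return psi * np.exp(1j * phi)[:, None]   # broadcast phi(x) over the y direction
```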
Typical density evolution after phase imprinting is shown in Fig. <ref>. One can see two wave forms propagating in opposite directions with respect to the region where the phase profile has its maximum. In fact, each of these structures is made of a dark and a white quasisoliton formed from the opposite slopes of the phase profile (they are close to each other and almost form a single entity). We could change this behavior by making the phase profile wider, but then we would in fact see four distinguishable structures propagating in the system, which differs from what is seen in the experiment of Ref. <cit.>. Also note that the propagating sound mode does not bounce from the walls since we use periodic boundary conditions – instead the signal reappears from the other side after reaching the "edge". As in the experiment <cit.>, the initial signal travels with a constant velocity (we work in the weak disturbance regime), is damped while propagating, and finally disappears.
To analyze a sound signal we determine its velocity (v) and the diffusion coefficient (D). For that we calculate the density imbalance defined as Δ n(t) = (n_l - n_r)/(n_l + n_r), where n_l (n_r) is the density in the left (right) half of the potential box, as a function of time, just like in Ref. <cit.>. Alternatively, we integrate the density along the direction transverse to the propagation and Fourier decompose it as n(x,t)=⟨ n ⟩ + ∑_j=± 1,± 2,... A_j(t) exp(j 2π x/L), and look at the time dependence of the Fourier coefficient A_1(t) (in fact, A_1(t)/ max[A_1(t)]) as in Ref. <cit.>. Then we fit both the density imbalance and the A_1(t) coefficient to an exponentially damped periodic function of the form a exp(-Γ t/2)sin(ω t + φ) + b t+c (the linear part of the fit is used to remove the influence of a constant drift of the signal), see Fig. <ref>. We calculate the velocity as v=L ω/(2π) and the diffusion coefficient as D=L^2 Γ/(2π)^2 <cit.>. Generally, both methods give close results, and only in a few cases do they differ or fail to fit the data (only matching results are accepted).
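A minimal sketch of this fitting step (using scipy's curve_fit; the initial guesses and the choice of signal are left to the user, and this is not the authors' code) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_oscillation(t, a, Gamma, omega, phi, b, c):
    """a*exp(-Gamma*t/2)*sin(omega*t + phi) + b*t + c, as in the fit above."""
    return a * np.exp(-Gamma * t / 2.0) * np.sin(omega * t + phi) + b * t + c

def sound_speed_and_diffusivity(t, signal, L, p0=None):
    """Fit the density imbalance (or A_1(t)) and return v = L*omega/(2*pi), D = L^2*Gamma/(2*pi)^2."""
    popt, _ = curve_fit(damped_oscillation, t, signal, p0=p0, maxfev=20000)
    _, Gamma, omega, _, _, _ = popt
    v = L * omega / (2.0 * np.pi)
    D = L**2 * Gamma / (2.0 * np.pi) ** 2
    return v, D
```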
Densities presented in Fig. <ref> are smooth enough to obtain values of Γ and ω, but still these values change depending on the realization, especially for the highest temperature analyzed. This is why we analyzed three sets of real-time evolution of the mixture after the phase imprinting, each including 200 realizations (microstates) from the grand canonical ensemble. This allows us to have more data to present and to add the variance as an additional characteristic. Results are summarized in Figs. <ref> and <ref>. Two features of the dissipation mechanism are clearly visible. First, the dissipation increases with temperature, and this holds for the whole BEC-BCS crossover. A similar observation is reported in <cit.>, where a three-dimensional Fermi mixture is studied in the unitary regime. In the two-dimensional system, however, the diffusivity grows with temperature much faster. The reason for that is explained in the next section. Second, the sound dissipation also depends on the interaction strength. It is weakest in the unitary regime, where the diffusivity approaches the quantum limit of ħ/m at the lowest temperature. A similar behavior of the diffusivity was observed in the experiment <cit.> performed at temperatures below the superfluid transition temperature. As shown in the next section, the results presented in Figs. <ref> and <ref> can be understood via a dissipation mechanism based on the virtual creation of vortex-antivortex pairs.
§ MECHANISM FOR SOUND DISSIPATION
Strong damping of sound waves on the BCS/BEC side of the BEC-BCS crossover is strictly related to the dimensionality of the system we study. In two-dimensional case the spectrum of elementary excitations becomes rich, qualitatively new modes appear with respect to what occurs in three-dimensional systems. For example, in two-dimensional superfluids, as opposed to three-dimensional ones, quantized vortices can be thermally excited. Since the vortex energy depends logarithmically on the area occupied by the system, its creation is possible only at high enough temperatures, higher than the critical temperature for the Berezinskii-Kosterlitz-Thouless phase transition. However, for lower temperatures it is still energetically allowed to excite pairs of opposite charge vortices.
Our hypothesis is that the strong damping of sound waves traveling in a fermionic mixture is caused by the appearance of vortex-antivortex pairs in each fermionic component. It is then expected that all fermions in a component, as described generally by individual orbitals, locally share the phase. Then, locally, each of Eqs. (<ref>) is in one-to-one correspondence with the set of Hartree-Fock equations for spin-polarized fermions (see Ref. <cit.> for a derivation). In Fig. <ref> we show a sequence of time snapshots presenting positions of vortex-antivortex pairs in the "+" component while evolving a particular grand canonical microstate for ⟨ N_+⟩=1500, η=3, and at the temperature T/T_F=0.15. From Fig. <ref>, which demonstrates a typical dynamic behavior of fermionic components at nonzero temperatures, it is clear that vortex pairs appear on a time scale of 10^-6 mL^2/ħ. A careful analysis of the lifetimes of the vortex-antivortex pairs leads to the histogram in Fig. <ref>. Since the energy of the vortex pairs is of the order of k_B T, the product of their energy and lifetime is of the order of 10^-3 ħ, which is incompatible with the Heisenberg uncertainty relation for real excitations. The vortex-antivortex pairs must therefore be virtual pairs.
Hence, the attenuation of sound modes observed in the simulations is caused by the scattering on virtual vortex-antivortex pairs. The number of virtual vortex pairs is large on the BCS side of the BEC-BCS crossover and the diffusivity is of the order of ∼ 10 ħ /m, see Fig. <ref> for temperatures T/T_F>0.1. Going to unitary regime the number of vortex pairs is significantly decreased and the attenuation coefficient gets smaller, for lower temperatures it takes values about 1 ħ /m, the universal quantum limit. On the BEC side of the crossover the diffusivity again gets large, Fig. <ref>.
Finally, to uniquely indicate the origin of sound dissipation we generated the grand canonical ensemble of microstates by enforcing constant phase for each pseudo-wave function ψ_±. The resulting fields ψ_± are still fluctuating density fields. After exciting a sound wave in a fermionic mixture we find that the sound propagates through a medium almost without damping, thus leading to diffusivity D ≈ 1 ħ /m close to the quantum limit both on BCS/BEC sides of crossover and at the unitarity, see Fig. <ref>. Hence, the virtual vortex-antivortex pairs must be responsible for strong dissipation of sound waves.
§ SUMMARY
In summary, we have studied the propagation of sound waves in a weakly and strongly interacting two-dimensional Fermi gas, in the range of temperatures close to the transition to a superfluid phase. We find numerically the dependence of the sound velocity and sound diffusivity on both the temperature and interactions. The sound diffusivity monotonically increases while the temperature is getting higher, independently of the interactions. At constant temperature the damping of sound takes the lowest values at the unitarity, where at temperatures close to the superfluid transition the diffusivity approaches the universal quantum limit. We identify that scattering on virtual vortex-antivortex pairs is responsible for strong dissipation of sound waves.
Part of the results were obtained using computers at the Computer Center of University of Bialystok.
§ LOCV APPROXIMATION IN 2D
The Hamiltonian of a uniform two-component two-dimensional degenerate Fermi gas is given by
H = - ħ^2/2 m∑_i=1^N_+∇_i^2
- ħ^2/2 m∑_j=1^N_-∇_j^2
+ ∑_i=1^N_+∑_j=1^N_- V_FF( x_i - y_j) ,
where V_FF is a zero-range pseudopotential that leads to the scattering length a_2D. The many-body ground state of H is approximated by a Jastrow-Slater variational wave function
|Ψ_JS⟩ = ∏_i,j f( x_i - y_j) |Ψ_S^+⟩ |Ψ_S^-⟩ ,
where |Ψ_S^±⟩ is the Slater determinant wave function of a component consisting of N_± fermions and f( r) is the Jastrow function describing the two-body correlations between interacting fermions. The pair correlation function, which is assumed to be spherically symmetric, is determined variationally by minimizing the average value of the energy ⟨Ψ_JS| H | Ψ_JS⟩ / ⟨Ψ_JS|Ψ_JS⟩. Within the LOCV approximation the correlation function fulfills the Schrödinger equation
-ħ^2/m ( d^2f/dr^2 + (1/r) df/dr ) + V_FF(r) f(r) = λ f(r)
in a space region defined by r<d. The healing length d, which is of the order of an average atomic separation, is determined self-consistently from the conditions f(r > d) = 1 and f'(r = d) = 0. For r > d, f(r) tends to unity and the correlations disappear from (<ref>). In the LOCV method the interaction energy E_int/N is approximated by
E_int/N = 2π n λ∫_0^d r |f(r)|^2 dr
and d is chosen such that
2π n λ∫_0^d r |f(r)|^2 dr = 1 ,
where n is the atomic density. Hence the interaction energy is E_int/N = λ (on average there are only correlated pairs). For the scattering state (the second lowest branch) λ = ħ^2 k^2/ m > 0 and in the noninteracting case the solution of Eq. (<ref>) is
f(r) ∝ a(k) J_0(k r) + b(k) Y_0(k r) ,
where a(k) and b(k) are coefficients set by the boundary conditions and J_0(k r) and Y_0(k r) are Bessel functions of the first and second kinds, respectively. The two-dimensional contact interactions can be introduced by imposing the Bethe-Peierls boundary conditions at r=0
( r d/dr - 1/ln(r/a_2D)) f(r) r → 0⟶ 0 ,
which gives <cit.>
f(r) ∝ { J_0(k r) - π/2[γ + ln(k a_2D/2)] Y_0(k r) } ,
where γ≈ 0.577 is the Euler’s constant. Since f(r=d)=1, the constant of proportionality equals
{ J_0(k d) - π/2[γ + ln(k a_2D/2)] Y_0(k d) }^-1 .
To determine the healing length (d) and the interaction energy per particle (∼ k^2), the constraint (<ref>) and the condition f'(r = d) = 0 have to be used. The second condition yields
J_1(k d) = π/2[γ + ln(k a_2D/2)] Y_1(k d) .
Set of nonlinear equations (<ref>) and (<ref>) can be solved numerically giving quantities (k,d) for each value of 2D interaction parameter η = ln(k_F a_2D), where k_F=√(4π n) is the Fermi momentum of a two-dimensional single-component Fermi gas of density n. Similar considerations can be repeated for the bound state (the lowest energy branch), just by making a replacement k → i κ (then λ = -ħ^2 κ^2/ m < 0).
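As a rough numerical sketch of this procedure (in units ħ = m = 1, for the scattering branch; the initial guess for the root finder may need tuning and this is not the authors' code), the two conditions can be solved as follows:

```python
import numpy as np
from scipy.special import j0, y0, j1, y1
from scipy.integrate import quad
from scipy.optimize import fsolve

def locv_scattering_branch(eta, n=1.0):
    """Solve the LOCV conditions for the scattering branch (units hbar = m = 1).

    Unknowns: pair momentum k and healing length d, for interaction eta = ln(k_F a_2D)
    and density n per component. The pair energy is lambda = k**2, and A(eta) = 4*pi*(k/k_F)**2.
    """
    kF = np.sqrt(4.0 * np.pi * n)
    a2d = np.exp(eta) / kF

    def beta(k):
        # (pi/2) * [gamma_E + ln(k a_2D / 2)] from the Bethe-Peierls boundary condition
        return 0.5 * np.pi * (np.euler_gamma + np.log(0.5 * k * a2d))

    def f(r, k, d):
        c = 1.0 / (j0(k * d) - beta(k) * y0(k * d))     # fixes f(d) = 1
        return c * (j0(k * r) - beta(k) * y0(k * r))

    def equations(p):
        k, d = p
        eq1 = j1(k * d) - beta(k) * y1(k * d)           # f'(d) = 0
        integral, _ = quad(lambda r: r * f(r, k, d) ** 2, 0.0, d)
        eq2 = 2.0 * np.pi * n * k**2 * integral - 1.0   # normalization, lambda = k^2
        return [eq1, eq2]

    # the initial guess may need adjusting; k and d should stay positive during iteration
    k, d = fsolve(equations, x0=[0.5 * kF, 2.0 / kF])
    return k, d
```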
Now, the interaction energy density of each uniform component, calculated within the LOCV approximation, is E_int^±/V=λ n_±=4π (ħ^2/m) n_± n_∓ (k/k_F^∓)^2. Hence, the total interaction energy density is E_int/V=(E_int^+/V + E_int^-/V) /2 = (ħ^2/2 m) n_+ n_- [A(η_+) + A(η_-)], where A(η) = 4π (k/k_F)^2. The function A(η) is shown in Fig. <ref> for two lowest energy branches. Applying the local density approximation the interaction energy functional of a two-component two-dimensional Fermi gas can be written as
V_int = ∫ħ^2/2 m n_+( r) n_-( r) [ A(η_+( r)) + A(η_-( r)) ] d r .
Then the contribution to equations of motion, Eqs. (<ref>), due to interactions (the chemical potential μ_int) becomes
δ V_int/δ n_± = ħ^2/2 m n_∓[ A(η_+) + A(η_-) + 1/2d A(η)/d η|_η_±]
and the cutoff energy for a spin-balanced mixture consisted of ⟨ N_±⟩ =1500 atoms at the temperature T/T_F=0.2 as a function of 2D interaction parameter is plotted in Fig. <ref>. The inset depicts the chemical potential μ_int.
99
Ketterle1997 M.R. Andrews, D. M. Kurn, H.-J. Miesner, D.S. Durfee, C. G. Townsend, S. Inouye, and W. Ketterle, Phys. Rev. Lett. 79, 553 (1997).
Hoefer06 M.A. Hoefer, M.J. Ablowitz, I. Coddington, E.A. Cornell, P. Engels, and V. Schweikhard, Phys. Rev. A 74, 023623 (2006).
Joseph07 J. Joseph, B. Clancy, L. Luo, J. Kinast, A. Turlapov, and J.E. Thomas, Phys. Rev. Lett. 98, 170401 (2007).
Chang08 J.J. Chang, P. Engels, and M.A. Hoefer, Phys. Rev. Lett. 101, 170404 (2008).
Meppelink09 R. Meppelink, S.B. Koller, and P. van der Straten, Phys. Rev. A 80, 043605 (2009).
Sidorenkov13 L.A. Sidorenkov, M.K. Tey, R. Grimm, Y.H. Hou, L. Pitaevskii, and S. Stringari, Nature (London) 498, 78 (2013).
Ville18 J.L. Ville, R. Saint-Jalm, É. Le Cerf, M. Aidelsburer, S. Nascimbène, J. Dalibard, and J. Beugnon, Phys. Rev. Lett. 121, 145301 (2018).
Patel20 P.B. Patel, Z. Yan, B. Mukherjee, R.J. Fletcher, J. Struck, and M.W. Zwierlein, Science 370, 1222 (2020).
Bohlen20 M. Bohlen, L. Sobirey, N. Luick, H. Biss, T. Enss, T. Lompe, and H. Moritz, Phys. Rev. Lett. 124, 240403 (2020).
Hadzibabic21 P. Christodoulou, M. Gałka, N. Dogra, R. Lopes, J. Schmitt, and Z. Hadzibabic, Nature (London) 594, 191 (2021).
Ota18 M. Ota, F. Larcher, F. Dalfovo, L. Pitaevskii, N.P. Proukakis, and S. Stringari, Phys. Rev. Lett. 121, 145302 (2018).
Cappellaro18 A. Cappellaro, F. Toigo, and L. Salasnich, Phys. Rev. A 98, 043605 (2018).
Singh20 V.P. Singh and L. Mathey, Phys. Rev. Res. 2, 023336 (2020).
Gawryluk21 K. Gawryluk and M. Brewczyk, Sci. Rep. 11, 10773 (2021).
Gawryluk19 K. Gawryluk and M. Brewczyk, Phys. Rev. A 99, 033615 (2019).
Ryszkiewicz22 J. Ryszkiewicz, M. Brewczyk, T. Karpiuk, Phys. Rev. A 105, 023315 (2022).
Kohn65 W. Kohn and L.J. Sham, Phys. Rev. 140, A1133 (1965).
Pandharipande73 V.R. Pandharipande and H.A. Bethe, Phys. Rev. C 7, 1312 (1973).
Pandharipande77 V.R. Pandharipande and K.E. Schmidt, Phys. Rev. A 15, 2486 (1977).
Cowell02 S. Cowell, H. Heiselberg, I.E. Mazets, J. Morales, V.R. Pandharipande, and C.J. Pethick, Phys. Rev. Lett. 88, 210403 (2002).
Taylor11 E. Taylor, S. Zhang, W. Schneider, and M. Randeria, Phys. Rev. A 84, 063622 (2011).
Yu11 Z.-Q. Yu, S. Zhang, and H. Zhai, Phys. Rev. A 83, 041603(R) (2011).
Grochowski20 P.T. Grochowski, T. Karpiuk, M. Brewczyk, and K. Rza̧żewski, Phys. Rev. Lett. 125, 103401 (2020).
Witkowska10 E. Witkowska, M. Gajda, and K. Rza̧żewski, Opt. Commun. 283, 671 (2010).
Gawryluk17 K. Gawryluk, M. Brewczyk, and K. Rza̧żewski, Phys. Rev. A 95, 043612 (2017).
Pietraszewicz17 J. Pietraszewicz, E. Witkowska, and P. Deuar, Phys. Rev. A 96, 033612 (2017).
Karpiuk02a T. Karpiuk, M. Brewczyk, and K. Rza̧żewski, J. Phys. B 35, L315 (2002).
Karpiuk02b T. Karpiuk, M. Brewczyk, Ł. Dobrek, M.A. Baranov, M. Lewenstein, and K. Rza̧żewski, Phys. Rev. A 66, 023612 (2002).
Karpiuk20 T. Karpiuk, M. Gajda, and M. Brewczyk, New J. Phys. 22, 103025 (2020).
Whitehead16 T.M. Whitehead, L.M. Schonenberg, N. Kongsuwan, R.J. Needs, and G.J. Conduit, Phys. Rev. A 93, 042702 (2016).
Bertaina11 G. Bertaina and S. Giorgini, Phys. Rev. Lett. 106, 110403 (2011).
PitaevskiiStringari L. Pitaevskii and S. Stringari, Bose-Einstein Condensation (Oxford University Press, Oxford, 2003).
JBeugnon J.L. Ville, R. Saint-Jalm, É. Le Cerf, M. Aidelsburer, S. Nascimbène, J. Dalibard, and J. Beugnon, Phys. Rev. Lett. 121, 145301 (2018).
Christodoulou20 P. Christodoulou, M. Gałka, N. Dogra, R. Lopes, J. Schmitt, and Z. Hadzibabic, arXiv:2008.06044v1.
Ozawa14 T. Ozawa and S. Stringari, Phys. Rev. Lett. 112, 025302 (2014).
Ota18a M. Ota and S. Stringari, Phys. Rev. A 97, 033604 (2018).
Grisins14 P. Gris̆ins and I.E. Mazets, Comput. Phys. Comm. 185, 1926 (2014).
Karpiuk15 T. Karpiuk, T. Sowiński, M. Gajda, K. Rza̧żewski, and M. Brewczyk, Phys. Rev. A 91, 013621 (2015).
Pietraszewicz15 J. Pietraszewicz and P. Deuar, Phys. Rev. A 92, 063620 (2015).
jumpKGMB K. Gawryluk and M. Brewczyk, Phys. Rev. A 99, 033615 (2019).
review M. Brewczyk, M. Gajda, and K. Rza̧żewski, J. Phys. B 40, R1 (2007).
Prokofev01 N. Prokof'ev, O. Ruebenacker, and B. Svistunov, Phys. Rev. Lett. 87, 270402 (2001).
Huang K. Huang, Statistical Mechanics (Wiley, Delhi, 2014).
Zakharov V.E. Zakharov and A.B. Shabat, Zh. Eksp. Teor. Fiz. 64, 1627 (1973) [Sov. Phys. JETP 37, 823 (1973)].
Anderson01 B.P. Anderson, P.C. Haljan, C.A. Regal, D.L. Feder, L.A. Collins, C.W. Clark, and E.A. Cornell, Phys. Rev. Lett. 86, 2926 (2001).
Dutton01 Z. Dutton, M. Budde, C. Slowe, and L.V. Hau, Science 293, 663 (2001).
Feder00 D.L. Feder, M.S. Pindzola, L.A. Collins, B.I. Schneider, and C.W. Clark, Phys. Rev. A 62, 053606 (2000).
Tsuchiya08 S. Tsuchiya, F. Dalfovo, and L. Pitaevskii, Phys. Rev. A 77, 045601 (2008).
Ohya19 H. Ohya, S. Watanabe, and T. Nikuni, J. Low Temp. Phys. 196, 140 (2019).
Burger99 S. Burger, K. Bongs, S. Dettmer, W. Ertmer, K. Sengstock, A. Sanpera, G.V. Shlyapnikov, and M. Lewenstein, Phys. Rev. Lett. 83, 5198 (1999).
Fedichev99 P.O. Fedichev, E. Muryshev, and G.V. Shlyapnikov, Phys. Rev. A 60, 3220 (1999).
Verma17 G. Verma, U.D. Rapol, and R. Nath, Phys. Rev. A 95, 043618 (2017).
Singh17 V.P. Singh, C. Weitenberg, J. Dalibard, and L. Mathey, Phys. Rev. A 95, 043631 (2017).
Desbuquois12 R. Desbuquois, L. Chomaz, T. Yefsah, J. Léonard, J. Beugnon, C. Weitenberg, and J. Dalibard, Nat. Phys. 8, 645 (2012).
Kosterlitz73 J.M. Kosterlitz and D.J. Thouless, J. Phys. C 6, 1181 (1973).
Kosterlitz74 J.M. Kosterlitz, J. Phys. C 7, 1046 (1974).
Choi13 J.-y. Choi, S.W. Seo, and Y.-i. Shin, Phys. Rev. Lett. 110, 175302 (2013).
Hadzibabic06 Z. Hadzibabic, P. Krüger, M. Cheneau, B. Battelier, and J. Dalibard, Nature 441, 1118 (2006).
Chomaz15 L. Chomaz, L. Corman, T. Bienaimé, R. Desbuquois, C. Weitenberg, S. Nascimbène, J. Beugnon, and J. Dalibard, Nat. Commun. 6, 6162 (2015).
Seo17 S.W. Seo, B. Ko, J.H. Kim, and Y. Shin, Sci. Rep. 7, 4587 (2017).
Kwon14 W.J. Kwon, G. Moon, J.-y. Choi, S.W. Seo, and Y.-i. Shin, Phys. Rev. A 90, 063627 (2014).
Gauthier19 G. Gauthier, M.T. Reeves, X. Yu, A.S. Bradley, M.A. Baker, T.A. Bell, H. Rubinsztein-Dunlop, M.J. Davis, T.W. Neely, Science 364, 1264 (2019).
Johnstone19 S.P. Johnstone, A.J. Groszek, P.T. Starkey, C.J. Billington, T.P. Simula, K. Helmerson, Science 364, 1267 (2019).
Karl17 M. Karl and T. Gasenzer, New J. Phys. 19, 093014 (2017).
|
http://arxiv.org/abs/2307.01028v1
|
20230703140009
|
Extremal black string with Kalb-Ramond field via $α^{\prime}$ corrections
|
[
"Shuxuan Ying"
] |
hep-th
|
[
"hep-th",
"gr-qc"
] |
=1
|
http://arxiv.org/abs/2307.01558v1
|
20230704082205
|
Scalable variable selection for two-view learning tasks with projection operators
|
[
"Sandor Szedmak",
"Riikka Huusari",
"Tat Hong Duong Le",
"Juho Rousu"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Scalable variable selection for two-view learning tasks with
projection operators
Sandor Szedmak
Department of Computer Science
Aalto University
Espoo, Finland
Riikka Huusari
Department of Computer Science
Aalto University
Espoo, Finland
Tat Hong Duong Le
Department of Computer Science
Aalto University
Espoo, Finland
Juho Rousu
Department of Computer Science
Aalto University
Espoo, Finland
In this paper we propose a novel variable selection method for two-view settings, or for vector-valued supervised learning problems.
Our framework is able to handle extremely large scale selection tasks, where number of data samples could be even millions.
In a nutshell, our method performs variable selection by iteratively selecting variables that are highly correlated with the output variables, but which are not correlated with the previously chosen variables.
To measure the correlation, our method uses the concept of projection operators and their algebra.
With the projection operators the relationship, correlation, between sets of input and output variables can also be expressed by kernel functions, thus nonlinear correlation models can be exploited as well.
We experimentally validate our approach, showing on both synthetic and real data its scalability and the relevance of the selected features.
Keywords: Supervised variable selection, vector-valued learning, projection-valued measure, reproducing kernel Hilbert space
§ INTRODUCTION
Vector-valued, or more generally structured output learning tasks arising from various domains have attracted much research attention in recent years <cit.>.
For both supervised but also unsupervised learning approaches, multi-view data has been of interest <cit.>.
Despite many successful approaches for various multi-view and vector-valued learning settings, including interpretability to these models has received less attention.
While there are various feature selection and dimensionality reduction methods either for scalar-valued learning tasks, or unsupervised methods for data represented in a single view <cit.>, there is scarcity of methods suitable for when data is represented in two views, or arises from a vector-valued learning task.
From the point of view of interpretability, especially feature selection methods are advantageous over dimensionality reduction since the relevant features are directly obtained as a result and not given only in (linear) combinations.
Recently, some feature selection methods have been proposed for structured output learning tasks.
<cit.> proposed kernel-based non-linear feature selection model relying on sparsity regularization.
Another supervised feature selection approach based on kernel methods is introduced in <cit.>, this one relying
instead on forward- and backward-selection ideology.
In addition, <cit.> discusses
feature selection in conjunction with kernel-based models, obtaining
sparsity implicitly via loss function without explicit
regularization term. An alternative, spline based, approach to the non-linear feature
selection is proposed by <cit.>.
These methods, relying on the kernel evaluations between data samples for both inputs and outputs, tend not to scale very well to large sample sizes.
In this paper, we introduce a novel variable selection approach for vector-valued, or two-view learning tasks, including CCA.
Our method is based on efficient iterative computation of projections of input variables onto the intersection of the space spanned by the output variables and the orthogonal complement of the space spanned by the previously selected input variables. In this space, the next input variables are then selected by a correlation-based criterion.
Going one step further, we also exploit a kernel-based representation of the variables, allowing us to capture complex, nonlinear relationships. Here, we consider the kernelised representation of the variables instead of data samples – in essence, we model the co-variance on the features in the Hilbert space induced by the kernel. Notably, both input and output features are captured with the same kernelisation.
This is in stark contrast to other proposed kernel-based feature selection approaches in literature, where separate kernels are used for data samples in input and output spaces <cit.>.
We can more readily draw comparisons to canonical correlation analysis (CCA) and its kernelized version, where the correlations are computed between
two sets of variables instead
of pairs of individual ones <cit.>.
For many approaches, scalability in feature selection can be a challenge for when the data dimensionality is extremely large.
Some supervised linear feature selection models adapted to this setting are proposed in <cit.>. We note, that all these methods are for the supervised setting, but with scalar-valued output variables. While scalability w.r.t the feature dimensionality is often considered due to motivations arising from fat data, the scalability to large sample sizes is less focused on.
Traditionally, kernelized algorithms, while powerful, are very poorly scalable due to the dependence to the kernel matrix, especially if its inverse is required.
Contrary to the usual, by leveraging the recursive formulation of our algorithm and a trick with singular value decomposition on the variable representation, our approach is extremely scalable to large sample sizes - which we also demonstrate experimentally in Section <ref>.
To summarize, our main contributions in this paper are as follows:
* we propose
projective selection () algorithm, a novel approach for variable selection for vector-valued or two-view learning problems that is based on projection operators. The result of the feature selection depends only on the subspace spanned by the outputs, not on their specific values (invariance).
* our proposed iterative method offers high scalability even for the kernelised formulation capturing non-linearities in the data, due to a trick with singular value decomposition applied to the feature representation.
* we experimentally validate the proposed approach, showing both relevance of the selected features and the efficiency of the algorithm.
The paper is organised as follows. In the next section we give an overview of the method, before moving to a more rigorous treatment in Section <ref>. There we give a brief introduction to projection operators and their matrix representation, and discuss the key detail of our approach: expressing the projector onto an intersection of subspaces. We then move on to describing our large-scale kernelized adaptation of the algorithm in Section <ref>. We validate our approach experimentally in Section <ref> before concluding.
§ METHOD OVERVIEW
Our algorithm is designed to perform variable selection when there are multiple dependent variables of interest.
We denote the matrix containing the data from which the variables are selected as X∈ℝ^m× n_x, and the reference data as
Y∈ℝ^m× n_y – the sample size is m, and the number of features/variables are n_x and n_y (see other frequently used notation in Table <ref>). Here X and Y could also correspond to vector-valued inputs and outputs of some supervised learning task.
Our method is based on defining correlation via projection operators:
we define the correlation between a variable vector 𝐱∈ℝ^m (a column vector from 𝐗 containing the values of a single input variable for all data points) and a
set of variables in columns of matrix 𝐘, as
corr(x,Y) = ||P_ℒ_Y x|| / ||x|| =
⟨P_ℒ_Y x/||x||, P_ℒ_Y x/||x||⟩^1/2 =
⟨x/||x||, P_ℒ_Y x/||x||⟩^1/2
where P_ℒ_Y (or P_Y in shorthand) is the orthogonal projection operator into a subspace
ℒ_Y spanned by the columns of 𝐘.
This definition is motivated by the concept of Projection-Valued Measure which plays a significant role in quantum mechanics theory
(see for example <cit.>).
Our approach selects variables from input data 𝐗 iteratively, such that correlation between the selected variable and the outputs is high, while correlation to the previously selected variables is low.
For the sake of simplicity, we assume that every variable vector x ∈ ℝ^m is normalized, ||x|| = 1.
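A minimal numpy sketch of this correlation (computing the projection via least squares; an illustration rather than the authors' implementation):

```python
import numpy as np

def corr(x, Y):
    """corr(x, Y) = ||P_{L_Y} x|| / ||x||; the projection P_{L_Y} x is obtained here
    by least squares, i.e. as Y @ w with w = argmin ||x - Y w||."""
    x = x / np.linalg.norm(x)
    w, *_ = np.linalg.lstsq(Y, x, rcond=None)
    return np.linalg.norm(Y @ w)

# toy example: x lies mostly in the span of Y's columns, so corr(x, Y) is close to 1
rng = np.random.default_rng(0)
Y = rng.standard_normal((100, 3))
x = Y @ rng.standard_normal(3) + 0.1 * rng.standard_normal(100)
print(corr(x, Y))
```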
Our variable selection algorithm, , is illustrated in Figure <ref>.
The first set of variables is chosen simply to maximize the projection onto the subspace spanned by the columns of Y, ℒ_Y. This is illustrated with x_1, which is projected with P_Y as P_Yx_1.
The second set of features chosen, x_2 in the figure, is projected into the intersection of ℒ_Y, and the orthogonal complement of the chosen feature x_1, ℒ_x_1^⊥. At this step, the correlation is measured with the projection operator P_ℒ_Y∩ℒ_x_1^⊥.
Interestingly, it turns out that this projected feature, P_Y∩x_1^⊥x_2, lies also in the intersection of ℒ_Y and ℒ_(P_Y x_1)^⊥.
This observation paves the way for building our efficient, recursive algorithm for the feature selection with projection operators.
The pseudo-code of the basic form of our proposed variable selection by projection, , algorithm is displayed in Figure <ref>. The approach is fully deterministic without randomness, and thus practical to apply.
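A simplified sketch of this greedy loop (a plain numpy illustration under the assumption of normalized columns, using the recursive projector update derived in Section <ref>; this is not the reference implementation):

```python
import numpy as np

def projse_select(X, Y, n_select):
    """Greedy sketch: pick the variable most correlated with the current subspace,
    then shrink the subspace by the recursive intersection update."""
    U, s, _ = np.linalg.svd(Y, full_matrices=False)
    U = U[:, s > 1e-12 * s.max()]
    P = U @ U.T                                   # P_{L_Y}
    selected = []
    for _ in range(n_select):
        scores = np.linalg.norm(P @ X, axis=0)    # corr of each (normalized) column with the subspace
        scores[selected] = -np.inf                # never pick the same variable twice
        j = int(np.argmax(scores))
        selected.append(j)
        Px = P @ X[:, j]
        alpha = float(X[:, j] @ Px)               # ||P x_j||^2
        if alpha > 1e-12:
            P = P - np.outer(Px, Px) / alpha      # projector onto L ∩ x_j^⊥ (derived later)
    return selected

# toy usage: 200 samples, 30 candidate variables, 2 outputs built from the first 3 columns
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30)); X /= np.linalg.norm(X, axis=0)
Y = X[:, :3] @ rng.standard_normal((3, 2)) + 0.05 * rng.standard_normal((200, 2))
print(projse_select(X, Y, 3))   # likely dominated by the first three columns
```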
Similarly to CCA, our variable selection algorithm in a sense joins the variable spaces of the inputs and outputs – both of them are considered in the same space. At the same time, in order for our selection approach to work, ℒ_X should not be fully orthogonal to ℒ_Y.
Additionally, due to the properties of the projection operators, our approach promotes invariance: the selected explanatory variables (input features) depend only on the subspace spanned by the response variables (output features), and are independent on any transformation on the response variables that would span the same subspace. These transformations can be singular or even nonlinear, as long as they are automorphisms of the output space.
In this basic form the algorithm is scalable to medium-scale data, as it is limited by the memory required to store the projection matrix.
In the following sections we present techniques that allow scaling to very large datasets, e.g. m>1000000 and m ≫ n_x,n_y.
A recursive representation of the projection operators (see Section
<ref>), and especially the singular vector
based form, (eq. (<ref>)), significantly reduces
the demand for resources, both for memory and for computation time.
§ PROJECTION OPERATORS
This section first introduces relevant background on projection operators and their algebra. Then, two key points for our algorithm are discussed: the matrix representation of the projectors, and how the projection into the intersection can be expressed.
§.§ Projection operators, projectors
We now briefly introduce the mathematical framework describing the
projection operators of a Hilbert space. The proofs of the statements mentioned, as well as further details, are presented for example by <cit.>.
Let T be a linear operator T: ℋ→ℋ. Its adjoint T^*: ℋ→ℋ is defined by ⟨y, T^*x⟩ = ⟨Ty, x⟩ for all x, y ∈ ℋ. A linear operator T is self-adjoint, or Hermitian, if T=T^*, unitary if T^*=T^-1, and normal if TT^* = T^*T.
On the set of self-adjoint operators of ℋ one can define a partial order ≼ by
T_1 ≼ T_2 ⇔ ⟨T_1x, x⟩ ≤ ⟨T_2x, x⟩,
for all x ∈ ℋ.
An operator T is positive if
0 ≼ T ⇔ 0 ≤ ⟨Tx, x⟩,
for all x ∈ ℋ. As a consequence we have
T_1 ≼ T_2 ⇔ 0 ≼ T_2 - T_1.
Let ℒ be a subspace of ℋ; the orthogonal complement of ℒ is given by ℒ^⊥ = { x ∈ ℋ | x ⊥ z, ∀ z ∈ ℒ }.
For any subspace ℒ⊆ℋ, ℋ=
ℒ⊕ℒ^⊥.
A linear operator P is a projection operator if P: ℋ → ℒ for a subspace ℒ of ℋ. To highlight the connection between the subspace and the projection, they can also be denoted as ℒ_P and P_ℒ.
An operator P is idempotent if
PPx = Px, or PP = P,
holds for any x ∈ ℋ.
The projection operators can be characterized by the following statements.
A linear operator P: ℋ→ℋ is a
projection if it is self adjoint, P=P^*, and
idempotent PP = P.
The map connecting the set of closed subspaces[In a finite dimensional Hilbert space all subspaces are closed.] of ℋ and the set of
the corresponding orthogonal projections is bijective.
As a consequence of the idempotent and self-adjoint properties we have that the range ℛ(P) and the null space 𝒩(P) of P are orthogonal, namely for any x, y ∈ ℋ,
⟨Px, y - Py⟩ = ⟨P^2x, y - Py⟩ = ⟨Px, (P - P^2)y⟩ = 0.
The following theorems describe some algebraic properties of projection operators we are going to
exploit.
(Product of projections)
Let P_1 and P_2 be projections on
ℋ. P=P_1P_2 is projection if and only
if P_1P_2= P_2P_1. Then P:ℋ→ℒ_P_1∩ℒ_P_2.
(Sum of projections)
Let P_1 and P_2 be projections on
ℋ. P=P_1+P_2 is projection if and only
if ℒ_P_1⊥ℒ_P_2. Then P:ℋ→ℒ_P_1⊕ℒ_P_2.
(Partial order)
Let P_1 and P_2 be projections on
ℋ, and 𝒩(P_1) and
𝒩(P_2) the corresponding null spaces. Then the
following statements are equivalent.
[ P_1P_2 = P_2P_1 =P_1,; ℒ_P_1 ⊆ℒ_P_2,; 𝒩(P_1) ⊇𝒩(P_2),; ||P_1x|| ≤ ||P_2x||,; P_1 ≼P_2.; ]
(Difference of projections)
Let P_1 and P_2 be projections on
ℋ. P=P_2-P_1 is projection if and only
ℒ_P_1⊆ℒ_P_2. Then P:ℋ→ℒ_P, where ℒ_P_2 =
ℒ_P_1⊕ℒ_P, namely ℒ_P is
the orthogonal complement of ℒ_P_1 in ℒ_P_2.
From the theorems above we can derive a simple corollary: if ℒ is a subspace, then the projection onto its complement is equal to P_ℒ^⊥ = I - P_ℒ.
(Monotone increasing sequence)
Let (P_n) be monotone increasing sequence of projections defined
on ℋ. Then:
* (P_n) is strongly operator convergent, and the limit
P, P_n →P, is a projection.
* P: ℋ→∪_n=1^∞ℒ_P_n.
* 𝒩(P) = ∩_n=1^∞𝒩(P_n).
If S is a self-adjoint operator and P is a projection onto the range of S, then SP = PS; see <cit.> for further details.
Let I be the projection onto the entire space, and 0 its complement. Let 0 ≤ S ≤ I and T ≥ 0 be operators. If P is a projection onto the range of S + T, then P commutes with both S and T. See <cit.>.
§.§ Matrix representation of projectors
If a basis of the Hilbert space ℋ is fixed, then every linear operator acting on ℋ can be represented by a matrix. Let the subspace ℒ of ℋ be spanned by the vectors a_1, …, a_k of ℋ. Let us construct a matrix A whose columns are equal to the vectors a_1, …, a_k; the linear independence of those vectors is not assumed. The corresponding subspace is denoted by ℒ_A. The matrix representing the orthogonal projection operator onto the subspace ℒ_A can be expressed via a well-known minimization problem <cit.>,
min_w ||x - Aw||^2, whose solution is w^* = (A^TA)^+A^Tx,
where ^+ denotes the Moore-Penrose pseudo-inverse.
Based on Eq. (<ref>), the vector Aw^* is the orthogonal projection of x onto ℒ. The orthogonal projection of x is equal to
P_Ax = A(A^TA)^+A^Tx.
Since this is true for any x ∈ ℋ, the matrix representation of the orthogonal projection operator P_A is given by
P_A = A(A^TA)^+A^T.
This formula can be simplified by exploiting the properties of the Moore-Penrose pseudo-inverse, see for example <cit.>, via the singular value decomposition U_A S_A V_A^T of the matrix A. Here we assume that the matrix A ∈ ℝ^m × n_A, m > n_A, and V_A is a square matrix, but U_A contains only those left singular vectors whose corresponding singular values are not equal to zero. We have
P_A = A(A^TA)^+A^T = AA^+ = U_A S_A V_A^T V_A S_A^+ U_A^T = U_A U_A^T.
This representation of the projection operator plays a central role in our variable selection algorithm.
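A quick numerical check of this identity (a standalone numpy illustration, not part of the algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4)) @ rng.standard_normal((4, 6))   # rank-deficient on purpose

# P_A = A (A^T A)^+ A^T
P_pinv = A @ np.linalg.pinv(A.T @ A) @ A.T

# P_A = U_A U_A^T, keeping only left singular vectors with nonzero singular values
U, s, _ = np.linalg.svd(A, full_matrices=False)
U = U[:, s > 1e-10 * s.max()]
P_svd = U @ U.T

print(np.allclose(P_pinv, P_svd))         # True: same projector
print(np.allclose(P_svd @ P_svd, P_svd))  # idempotent
print(np.allclose(P_svd, P_svd.T))        # self-adjoint
```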
The following proposition ensures that the projection operator does not depend on its
representation.
Assume that two different matrices A and B span the
same subspace ℒ of dimension k. Then the two representations
P_A=U_AU_A^T and
P_B=U_BU_B^T yield the same projection
operator.
Since the columns of U_B are linear combinations of the columns of B, they lie in ℒ; thus P_AU_B = U_B. Multiplying both sides with U_B^T we obtain that
P_AU_BU_B^T = U_BU_B^T
which is P_AP_B=P_B. Because the right hand
side, P_B,
is a projection, the left hand side P_AP_B is also
one. Following the same line we have
P_BP_A=P_A as well.
From Theorem <ref> we know that if the
product of projections is a projection, then the product of projections
is commutative,
P_BP_A=P_AP_B. Finally we can
conclude that
P_A= P_BP_A=P_AP_B = P_B.
We also exploited that if ℋ is finite dimensional and the corresponding field is ℝ, then the adjoint P^* is represented by the transpose P^T of the matrix P.
§.§.§ Projection onto the intersection of subspaces - general view
Our algorithm hinges on the orthogonal projector onto the intersection of a set of subspaces {ℒ_1, ℒ_2, …, ℒ_n_L}. To introduce this concept, we mainly follow the line presented by <cit.>. We start with some classical results; first we recall <cit.>, who derived a solution in the case of two subspaces as a limit:
P_ℒ_1 ∩ ℒ_2 = lim_n→∞ (P_ℒ_1 P_ℒ_2)^n.
That result has been extended to arbitrary finite sets of subspaces by <cit.>:
P_ℒ_1 ∩ … ∩ ℒ_n_L = lim_n→∞ (P_ℒ_1 … P_ℒ_n_L)^n.
<cit.> gave an explicit formula for the case of two subspaces:
P_ℒ_1 ∩ ℒ_2 = 2 P_ℒ_1 (P_ℒ_1 + P_ℒ_2)^† P_ℒ_2.
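A quick numerical sanity check of this two-subspace formula (a self-contained numpy illustration; the construction of the example subspaces is ours, not from the paper):

```python
import numpy as np

def proj(A):
    """Orthogonal projector onto the column space of A, via U U^T."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    U = U[:, s > 1e-12 * s.max()]
    return U @ U.T

rng = np.random.default_rng(3)
z = rng.standard_normal((20, 2))                      # shared directions
A = np.hstack([z, rng.standard_normal((20, 3))])      # L_1 contains span(z)
B = np.hstack([z, rng.standard_normal((20, 3))])      # L_2 contains span(z)
P1, P2 = proj(A), proj(B)

# explicit two-subspace formula: P_{L1 ∩ L2} = 2 P1 (P1 + P2)^+ P2
P_int = 2.0 * P1 @ np.linalg.pinv(P1 + P2) @ P2
print(np.allclose(P_int, proj(z)))   # True: generically the intersection is span(z)
```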
<cit.> provides an alternative way to compute P_ℒ_1 ∩ … ∩ ℒ_n_L. Here we rely on Lemma 4.1 and Corollary 4.2 of that work:
For i=1,…,n_L, let ℒ_i be subspaces of ℋ,
P_i be the corresponding projectors,
P^⊥_i = I - P_i, and λ_i > 0. Define
Q := ∑_i=1^n_L λ_i P_i^⊥.
Then we have
P_ℒ_1 ∩ … ∩ ℒ_n_L = I - Q^†Q.
With the particular choice ∑_i=1^n_L λ_i = 1, Q may be written as
Q := I - ∑_i=1^n_L λ_i P_i,
eliminating all the complements of the projectors.
By exploiting that for any projector P, P^⊥ = I - P, the Q_t corresponding to P_ℒ_V ∩ ℒ_X̃_t^⊥ can be written as
Q_t = λ_V (I - P_ℒ_V) + ∑_x∈X̃_t λ_x P_ℒ_x.
The critical point is the computation of the Moore-Penrose inverse of Q.
§.§ Expressing the projector into intersection
To implement the proposed variable selection algorithm (Figure <ref>), the projection onto the intersection of an arbitrary subspace ℒ_P and the complement of an arbitrary vector x, P_ℒ ∩ x^⊥, has to be computed. The projector P_ℒ^⊥ onto the complement of a subspace ℒ can be expressed as I - P_ℒ, hence the projector P_x^⊥ is given by I - xx^T/||x||^2. Since ℒ is arbitrary, we use P instead of P_ℒ for the sake of simplicity.
While we have these two projectors, their product, according to Theorem <ref>, is in general not a projection, since the factors do not commute:
P(I - xx^T/||x||^2) = P - Pxx^T/||x||^2,
(I - xx^T/||x||^2)P = P - xx^TP/||x||^2,
because in the general case Pxx^T ≠ xx^TP. To overcome this problem we can recognize that the intersection ℒ_P ∩ ℒ_x^⊥ can be expressed after a simple transformation.
Let P be a projector and x be any vector,
then the intersections ℒ_P∩ℒ_x^⊥ and ℒ_P∩ℒ_(Px)^⊥ are the same subspaces of
ℒ_P.
Any vector u is in ℒ_P if Pu = u, u is in ℒ_x^⊥ if ⟨x,u⟩=0, and u is in ℒ_(Px)^⊥ if ⟨Px,u⟩=0. Since Pu = u, we have ⟨x,u⟩=⟨Px,u⟩, so for u∈ℒ_P the two orthogonality conditions are equivalent.
By projecting x onto ℒ_P first, and then computing the corresponding intersection, we can compute the projector onto ℒ_P ∩ℒ_x^⊥ in a simple way.
Let ||x||=1 and α=||Px||^2=x^TPPx=x^TPx. If α>0 then
P_ℒ_P∩ℒ_x^⊥ = P_ℒ_P∩ℒ_(Px)^⊥ = P(I- (1/α)Pxx^TP) = P-(1/α)Pxx^TP.
When α=0, which means that x is orthogonal to ℒ_P, we have
P_ℒ_P∩ℒ_x^⊥ = P.
We can check that P-(1/α)Pxx^TP is a real projector. It is idempotent, since
(P-(1/α)Pxx^TP)^2 = P - (1/α)Pxx^TP - (1/α)Pxx^TP + (α/α^2)Pxx^TP = P-(1/α)Pxx^TP.
This agrees with Theorem <ref> which
states that the product of projections is a projection, idempotent and
adjoint, and it is map into the intersection of the corresponding
subspaces.
Furthermore, the orthogonality of x and the projection of any u∈ℋ by P-(1/α)Pxx^TP can also be verified, as
⟨x,(P-(1/α)Pxx^TP)u⟩ = x^TPu - (1/α)x^TPxx^TPu = x^TPu-(1/α)α x^TPu = 0.
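Since this rank-one downdate is the update step used repeatedly by the selection algorithm, it is worth confirming the two properties above numerically as well; the following is our own small check with a random projector P and unit vector x:

    import numpy as np

    rng = np.random.default_rng(2)
    d = 6
    U, _, _ = np.linalg.svd(rng.standard_normal((d, 3)), full_matrices=False)
    P = U @ U.T                                  # random orthogonal projector of rank 3
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)

    alpha = x @ P @ x
    P_new = P - np.outer(P @ x, P @ x) / alpha   # P - (1/alpha) P x x^T P

    assert np.allclose(P_new @ P_new, P_new)     # idempotent
    assert np.allclose(P_new, P_new.T)           # self-adjoint
    assert np.allclose(x @ P_new, 0.0)           # the range is orthogonal to x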
(<cit.>)
Two subspaces ℒ_1 and ℒ_2 are orthogonal if for any vectors x_1∈ℒ_1 and x_2∈ℒ_2, ⟨x_1,x_2⟩=0 holds.
Two subspaces ℒ_1 and ℒ_2 are geometrically
orthogonal if ℒ_1 = (ℒ_1 ∩ℒ_2)
⊕𝒞_1 and ℒ_2 = (ℒ_1 ∩ℒ_2)
⊕𝒞_2 and 𝒞_1 and 𝒞_2 are
orthogonal.
(<cit.>)
Two subspaces ℒ_1 and ℒ_2 are geometrically
orthogonal if and only if
P_ℒ_1P_ℒ_2 =
P_ℒ_2P_ℒ_1 and
ℒ_P_ℒ_1P_ℒ_2 =
ℒ_1∩ℒ_2.
The subspaces ℒ_P and
ℒ_(Px)^⊥ are geometrically
orthogonal.
It is a simple Corollary of Proposition <ref>.
§ SELECTING VARIABLES IN RKHS
In order to take into account non-linear correlations in the data, we propose a kernelized adaptation of the problem.
Kernel methods are a group of varied machine learning models that take advantage of a symmetric, positive semi-definite kernel function comparing data samples (sets of features), k:𝒳×𝒳→ℝ. The usage of a kernel function allows including non-linearity in the models implicitly via a feature map φ:𝒳→ℱ_k: a kernel evaluated with two samples corresponds to an inner product in this so-called feature space (more specifically, a reproducing kernel Hilbert space, RKHS): k(x,z) = ⟨φ(x),φ(z)⟩_ℱ_k.
For a more thorough introduction to traditional kernel methods, we refer the reader, e.g., to <cit.>.
Here we propose to kernelize the variable representation.
We consider ϕ: ℝ^m →ℋ, where ℝ^m is the vector space containing all columns of Y∈ℝ^m×n_y and X∈ℝ^m×n_x, and ℋ is an RKHS. In essence, this corresponds to defining a kernel on the variable vectors, κ:ℝ^m×ℝ^m→ℝ – in fact, we assume that ϕ is only given implicitly via κ.
In a mathematical sense, the matrix of inner products between the variables can equally well be considered a kernel matrix, since the distinction between rows and columns is by convention only. Usually, however, this matrix is referred to as the covariance operator. Covariance operators have also been extended to RKHSs, with various applications in machine learning tasks <cit.>. Contrary to our approach, there the feature map and kernel are defined on the data space 𝒳 instead of the variable space ℝ^m.
We also mention Gaussian process regression <cit.>, where kernels are likewise used to model the covariance matrix, thus connecting the variables via an inner product.
We highlight that, as the kernel is defined on variables, we can easily evaluate κ(𝐱_i, 𝐲_j).
We use the following shorthands for feature and kernel evaluations on the available training data: ϕ(Y) = [ϕ(y_1),…,ϕ(y_n_y)] with ϕ(y_i), i∈[n_y], a column vector, and κ(Y, x) = [κ(y_1, x),…,κ(y_n_y,x)]^⊤ a column vector of kernel evaluations (similarly for ϕ(X)).
Note that κ(Y, Y) = ϕ(Y)^⊤ϕ(Y) with this notation. We further denote K_Y = κ(Y, Y).
We assume that ||ϕ(x)||=1.
§.§ Expressing the Projection operator in RKHS
Based on Section <ref>, Equation (<ref>), the projection P_Y is represented with the left singular vectors U_Y of ϕ(Y). This representation is also needed for the kernelized algorithm.
However, calculating the singular value decomposition of ϕ(Y), ϕ(Y)= U_Y S_Y V_Y^T, directly might not be feasible if the dimensionality of the feature space is large.
Assuming that ℋ is finite dimensional[For clarity, we restrict the discussion to finite dimensions and ℋ = ℝ^d with d<∞. We note that the approach is equally valid also with infinite dimensions.] with dimension d,
we have ϕ(Y), U_Y∈ℝ^d×n_y, and S_Y, V_Y∈ℝ^n_y × n_y.
Therefore we can write
S_Y =
[
[ D_Y; ∅ ]],
D_Y∈ℝ^n_y × n_y, ∅∈ [0]^m-n_y,n_y,
andD_Ydiagonal with nonnegative elements of singular
values, thusϕ(Y)= U_YD_YV_Y^T.
Again, this decomposition cannot be computed directly; however, we can proceed along the following line of computation.
To express U_Y we can apply an approach similar to the one exploited in the computation of kernel principal component analysis <cit.>.
Recall that the kernel matrix on the columns of ϕ(Y) is K_Y = ϕ(Y)^Tϕ(Y). From the singular value decomposition we can derive that K_Y = V_Y S_Y^2 V_Y^T. This kernel matrix has a reasonably small size, n_y × n_y, thus its eigenvalue decomposition can be computed, which yields V_Y and, as eigenvalues, the squares of the diagonal elements of S_Y. By combining these expressions we have
ϕ(Y)V_Y S_Y = U_Y S_Y^2 ⇒ U_Y = ϕ(Y)V_Y S_Y^+
with the help of the Moore-Penrose generalized inverse.
Our algorithm hinges on evaluating products between projectors and the variable vectors.
We can now write the product of U_Y^T with an arbitrary vector represented in ℋ as
U_Y^Tϕ(x) = S_Y^-1V_Y^Tϕ(Y)^Tϕ(x) = S_Y^-1V_Y^Tκ(Y,x).
Thus the product can be expressed with the help of the kernel on the variables, with complexity O(n_y^2) if K_Y, V_Y and S_Y^-1 are precomputed.
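In code, this amounts to one eigendecomposition of K_Y followed by a matrix-vector product per query. The sketch below is our own illustration, using an assumed Gaussian kernel κ on the variable vectors (the columns of the data matrices):

    import numpy as np

    def kappa(a, b, sigma=1.0):
        # kernel between two variable vectors (columns); an assumed RBF kernel for illustration
        return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

    def kappa_mat(A, B):
        return np.array([[kappa(a, b) for b in B.T] for a in A.T])

    rng = np.random.default_rng(3)
    m, n_y = 200, 6
    Y = rng.standard_normal((m, n_y))
    x = rng.standard_normal(m)

    K_Y = kappa_mat(Y, Y)                        # n_y x n_y
    evals, V_Y = np.linalg.eigh(K_Y)             # K_Y = V_Y S_Y^2 V_Y^T
    keep = evals > 1e-12
    S_inv, V_Y = np.diag(evals[keep] ** -0.5), V_Y[:, keep]

    # U_Y^T phi(x) = S_Y^{-1} V_Y^T kappa(Y, x): no explicit feature map is needed
    u_phi_x = S_inv @ V_Y.T @ kappa_mat(Y, x[:, None]).ravel()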
§.§ The recursive selection procedure
To calculate the projection operator efficiently in each iteration we can exploit the structure of P_ℒ_Y ∩ℒ_X̃_t^⊥ϕ(x) introduced in Proposition <ref>. To this end, we define an intermediate operator, the projection onto the complement subspace of a vector q∈ℝ^n, as:
Q(q) = I - qq^T/||q||^2.
Since Q(q) is a projection, we have Q(q)=Q(q)Q(q) and Q(q) = Q(q)^T.
It can also be seen that multiplying Q with a matrix A∈ℝ^n×n,
Q(q)A = ( I - qq^T/||q||^2) A = A - q(q^TA)/||q||^2,
has a complexity of only O(n^2), since only a matrix-vector product and an outer product are needed.
We are also going to use the following recurrent matrix products for a fixed t:
Ũ_t = U_Y∏_s=1^t Q(q_s) = Ũ_t-1Q(q_t).
Now we can write up the sequence of projections corresponding to the
Algorithm (<ref>):
Let U_0 = U_Y, ℐ_0=∅, P_0 = P_ϕ(Y) = U_0U_0^T,
q_1 = U_0^Tϕ(x_k_1^*), k_1^* = argmax_k∈[n_x]∖ℐ_0 ||P_0ϕ(x_k)||^2, ℐ_1 =ℐ_0 ∪{k_1^*},
P_1 = U_0U_0^T - U_0q_1q_1^TU_0^T/||q_1||^2 = U_0Q(q_1)U_0^T = U_0Q(q_1)Q(q_1)U_0^T = U_1U_1^T,
q_2 = U_1^Tϕ(x_k_2^*), k_2^* = argmax_k∈[n_x]∖ℐ_1 ||P_1ϕ(x_k)||^2, ℐ_2 =ℐ_1 ∪{k_2^*},
⋮
P_t = U_t-1U_t-1^T - U_t-1q_t q_t^TU_t-1^T/||q_t||^2 = U_t-1Q(q_t)U_t-1^T = U_t-1Q(q_t)Q(q_t)U_t-1^T = U_tU_t^T,
q_t+1 = U_t^Tϕ(x_k_(t+1)^*), k_(t+1)^* = argmax_k∈[n_x]∖ℐ_t ||P_tϕ(x_k)||^2, ℐ_t+1 =ℐ_t∪{k_(t+1)^*},
⋮
The sequence of projections above correctly
computes the projection operators of Algorithm
in Figure <ref>.
We apply induction on t to prove the statement. In case of t=1 we
have by Proposition <ref>, that
[ P_1 = P_0 -
P_0ϕ(x_k_1*)ϕ(x_k_1*)^TP_0||P_0ϕ(x_k_1*)||
= U_0U_0^T
- U_0U_0^Tϕ(x_k_1*)ϕ(x_k_1*)^TU_0U_0^T||U_0U_0^Tϕ(x_k_1*))||^2; = U_0( I -
q_1q_1^T||q_1||^2)U_0^T
= U_0Q(q_1)U_0^T =
U_0Q(q_1)Q(q_1)U_0^T
=U_1U_1^T.; ]
In transforming
||U_0U_0^Tϕ(x)_t_1*||^2
into ||q_1||^2 we exploited that
U_0U_0^T is a projection, hence it
is idempotent.
Let t>1 be arbitrary. Suppose that
[ P_t = U_t-1U_t-1^T
- U_t-1q_tq_t^TU_t-1^T||q_t||^2
= U_tU_t^T,; ]
holds true.
Now, computing the projector t+1 we obtain
[ P_t+1 = P_t
- P_tϕ(x_k_(t+1)*)ϕ(x_k_(t+1)*)^TP_t||P_tϕ(x_k_(t+1)*)||^2; = U_tU_t^T
- U_tU_t^Tϕ(x_k_(t+1)*)
ϕ(x_k_(t+1)*)^TU_tU_t^T||U_tU_t^Tϕ(x_k_(t+1)*)||^2 = U_t(I -
q_t+1q_t+1||q_t+1||^2) U_t^T; = U_t Q(q_t+1) U_t^T = U_t Q(q_t+1) Q(q_t+1)U_t^T
= U_t+1U_t+1^T. ]
In the norm we again applied that
U_tU_t^T is idempotent.
We can express the main computation step, Step 2.b in
Algorithm <ref>, by exploiting the kernelized
recursive iteration. From the sequential procedure we can see that a
key step of the computation is the calculation of the vectorsq_ivia Equation (<ref>),U_Y^Tϕ(x)=S_Y^-1V_Y^Tκ(Y,x)for an arbitraryϕ(x)∈ℋ. In iterationt, we have
q_t+1 = U_t^Tϕ(x) = ( U_Y∏_s=1^t Q(q_s) )^Tϕ(x) = Q(q_t)·…·Q(q_1) U_Y^Tϕ(x), where U_Y^Tϕ(x) = S_Y^-1V_Y^Tκ(Y,x).
Taking advantage of the recursive definition of U_t^Tϕ(x) we also have that
U_t+1^Tϕ(x) = Q(q_t+1)U_t^Tϕ(x) = (I - q_t+1q_t+1^T/||q_t+1||^2) U_t^Tϕ(x),
where q_t+1 = U_t^Tϕ(x_k_(t+1)^*); thus all terms relate to those computed in the previous iteration.
The computation of the norm ||q_t+1||^2 can also exploit the recursive nature of the algorithm.
Finally, all the feature representations ϕ(x) and ϕ(Y) are implicit and are only expressed via kernel evaluations, since they only appear in inner products.
Based on these statements and Proposition
<ref> we can present a concrete
implementation of our algorithm in Figure
<ref>.
In the first step the kernels are computed, where K_YX requires O(mn_yn_x) and K_Y requires O(mn_y^2) operations in the case of, for example, linear and Gaussian kernels. For the eigenvalue decomposition of K_Y we need O(n_y^3) operations. Let D ≤ min(n_y,n_x) denote the number of selected variables. In the algorithm, the critical step is Step 4.a; its complexity in step t is O(n_y(n_x-t)), thus in general for selecting D variables we need O(n_yn_xD) operations.
Assuming that m≫ n_y,n_x, the dominating part is the computation of the kernels, thus the entire complexity is equal to O(mn_y max(n_x,n_y)).
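For concreteness, the complete selection loop can be sketched as follows. This is our own compact re-implementation of the recursion above (not the authors' released ProjSe code); it assumes the kernel matrices K_Y and K_YX = κ(Y,X) have been precomputed and returns the selected variable indices in selection order.

    import numpy as np

    def select_variables(K_Y, K_YX, D, tol=1e-10):
        # K_Y: (n_y, n_y) kernel between label-view variables,
        # K_YX: (n_y, n_x) kernel evaluations kappa(y_i, x_k), D: number of variables to select
        evals, V = np.linalg.eigh(K_Y)
        keep = evals > tol
        # columns of C hold U_Y^T phi(x_k) = S_Y^{-1} V_Y^T kappa(Y, x_k)
        C = (evals[keep] ** -0.5)[:, None] * V[:, keep].T @ K_YX

        selected = []
        for _ in range(D):
            scores = np.sum(C ** 2, axis=0)      # ||P_t phi(x_k)||^2 for every candidate k
            scores[selected] = -np.inf           # never re-select a variable
            k_star = int(np.argmax(scores))
            selected.append(k_star)
            q = C[:, k_star].copy()
            nq = q @ q
            if nq <= tol:                        # remaining candidates are already explained
                break
            C -= np.outer(q, q @ C) / nq         # C <- Q(q) C with Q(q) = I - q q^T / ||q||^2
        return selected

The per-iteration cost is dominated by the rank-one update of C, matching the O(n_y(n_x-t)) complexity of Step 4.a discussed above.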
§ EXPERIMENTS
In this section we experimentally validate our approach.[The code for the algorithm is available at <https://github.com/aalto-ics-kepaco/ProjSe>.] [The experiments are run on a machine with the following specification: 12th Gen Intel Core^TM i5-12600K, 10 cores.]
We first show demonstrations on our algorithm's scalability on synthetic data, before moving on to experimenting with real data and analysing the stability of the feature selection.
§.§ Scalability demonstration with synthetic data
This test is implemented via the scheme presented in (<ref>), illustrated in Figure <ref>, and in (<ref>). The components of the input matrix X and of a transformation matrix W are independently sampled from a normal distribution. Then the output matrix is constructed, and finally random noise is added to the output.
Input: X ∼ [𝒩(0,σ)]^m× n_x;  linear transformation: W ∼ [𝒩(0,σ)]^n_x× n_y, Y = XW;  noise: E ∼ [𝒩(0,σ)]^m × n_y, Ỹ = Y+E.
We apply ProjSe to this data with various sample sizes. Figure <ref> presents the dependence of the selection time on the sample size, where the maximum sample size is 10 million and the number of variables is 10 – the variable selection is performed in less than four seconds.
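A minimal generator for this synthetic benchmark (our sketch; the paper's largest setting uses 10^7 samples and ten variables per view) could look as follows:

    import numpy as np

    def make_synthetic(m, n_x, n_y, sigma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        X = sigma * rng.standard_normal((m, n_x))       # input view
        W = sigma * rng.standard_normal((n_x, n_y))     # hidden linear transformation
        E = sigma * rng.standard_normal((m, n_y))       # additive noise
        return X, X @ W + E                             # (X, Y-tilde)

    X, Y = make_synthetic(m=100_000, n_x=10, n_y=10)    # increase m towards 10**7 for the scaling test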
§.§ Feature selection from biological data
In this set of experiments, we compare our approach to <cit.>– a kernel-based feature selection method, where kernels are considered traditionally on data samples instead of features.
We experiment with the two gene expression datasets, "Carcinoma", "Glioma", considered there for unsupervised feature selection.
While this kind of setting with one-view fat data is not the one our method was developed for, as the scalability we propose is first and foremost for large sample sizes, these experiments still serve for illustrative comparison of feature selection performance.
As the data is only available in one view in unsupervised setting, we apply our algorithm by using the data in both views: as the reference/label view and as the view the feature selection is performed on. Intuitively, this would filter out the noise and redundant features in the view.
In our method we consider linear kernel,k(x,z) = x^Tz, polynomial kernel of degree 3,k(x,z) = (x^Tz)^3, and RBF kernel,k(x,z) = exp(x-z^2/(2σ^2))with the kernel parameterσset as mean of pairwise distances.
We assess the performance of the feature selection by measuring the normalised mutual information (NMI) of k-means clustering results. Here the clusterer has been given the amount of classes in the data as the number of clusters.
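The evaluation protocol can be summarised in a few lines; the snippet below is our illustration using scikit-learn, with the RBF kernel width set to the mean pairwise distance between the variable vectors as described above:

    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.cluster import KMeans
    from sklearn.metrics import normalized_mutual_info_score

    def rbf_sigma(X):
        # kernel width: mean pairwise distance between variable vectors (columns of X)
        return np.mean(pdist(X.T))

    def clustering_nmi(X_selected, labels, n_classes, seed=0):
        # cluster the samples using only the selected features, score against the true classes
        pred = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(X_selected)
        return normalized_mutual_info_score(labels, pred)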
The results are displayed in Table <ref>, with comparison to selected methods from <cit.>: UKFS proposed there, as well as a scoring-based method "lapl" <cit.> that performed well on Glioma dataset, and NDFS <cit.>, a clustering-based approach that performed well with Carcinoma dataset. Our method is very competitive with these, sometimes achieving better performance. As in our method the kernel is calculated on features, the running time is slower than for UKFS where the kernel on samples is considered. However notably we are still competitive when compared to the NDFS.
Additionally, Figure <ref> displays more detailed clustering results with respect to the number of variables chosen by ProjSe.
These results also highlight the differences that can be obtained by applying different kernels on the features: with Carcinoma dataset the non-linear kernels, RBF and polynomial kernel of degree 3, are clearly superior, while with Glioma linearity works the best.
§.§ Feature selection from vector-valued output setting
We next consider a setting more aligned with our method: supervised feature selection with vector-valued output as the reference view.
Here we consider three datasets from the UEA & UCR Time Series Classification Repository[<http://www.timeseriesclassification.com/dataset.php>], Crop, NonInvasiveFetalECGThorax1 ("Thorax"), and ShapesAll, as detailed in Table <ref>. These datasets are associated with multi-class classification tasks, and we use the one-hot encoding of the labels as the vector-valued output to perform the feature selection with ProjSe.
As before, we consider linear, polynomial and RBF kernels. We assess the success of the feature selection task by performing classification with SVM with the selected features - here we consider both linear and RBF kernels on the data samples.
The results are displayed in Figure <ref>, where both kernel alignment (KA(K,K') = ⟨K_c, K'_c⟩_F /(K_c_FK'_c_F)wherecdenotes centering) to the linear kernel on the one-hot-encoded outputs, and accuracy of SVM classification are shown. The different kernels used in feature selection give slightly different features; however the performance on the subsequent classification task is mostly dependent on which kernel is used on the samples. Especially for Thorax and ShapesAll datasets with higher dimensionality, it can be seen that all the results with linear SVM outperform using the full set of features.
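For reference, the centred kernel alignment used in Figure <ref> can be computed as follows (our sketch; here K' is the linear kernel ZZ^T on the one-hot label matrix Z):

    import numpy as np

    def kernel_alignment(K, K_prime):
        # KA(K, K') = <K_c, K'_c>_F / (||K_c||_F ||K'_c||_F), with centring K_c = H K H
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        Kc, Kpc = H @ K @ H, H @ K_prime @ H
        return np.sum(Kc * Kpc) / (np.linalg.norm(Kc) * np.linalg.norm(Kpc))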
§.§ Experiments with two-view data
In our last set of experiments, we consider the following datasets:
* MNIST handwritten digits<cit.>: This dataset contains 60000 training and
10000 test examples of handwritten digits in greyscale.
The number of pixels in each image is 28 × 28 = 784, resulting in total to 784 variables. To construct the two
sets of variables, the image columns are split into half similarly
as in <cit.>. Thus both views comprise of
392 variables.
* MediaMill dataset<cit.>: This
dataset contains 43907 examples which are extracted from keyframes
of video shots.
There are two views in this data: text annotations (101 variables) and visual features (120 variables).
* Cifar100 dataset<cit.>: This
dataset, chosen to demonstrate the scalability of ProjSe, contains 50000 training and
10000 test examples of color images.
The number of pixels in each image is 32 × 32 = 1024, where
to each pixel 3 colors are assigned. The examples belong to 100
classes, where each class contains 500 training and 100 test
examples. The classes are represented by indicator vectors.
We perform variable selection independently on both views in the datasets. After the variable selection is performed on both sides, we compute canonical correlations between all subset pairs of the extracted variables, starting from the first ones and incrementally growing them to the entire sets.
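The canonical correlations between two selected subsets can be computed directly from orthonormal bases of their centred column spaces; the following helper is our own sketch of this evaluation step, assuming the subsets have full column rank:

    import numpy as np

    def canonical_correlations(A, B):
        # Bjorck-Golub: canonical correlations are the singular values of Q_A^T Q_B
        A = A - A.mean(axis=0)
        B = B - B.mean(axis=0)
        Q_A, _ = np.linalg.qr(A)
        Q_B, _ = np.linalg.qr(B)
        return np.linalg.svd(Q_A.T @ Q_B, compute_uv=False)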
To demonstrate the performance of the proposed variable selection algorithm ProjSe, we compare it to the following methods: large-scale sparse kernel canonical correlation analysis (GradKCCA) <cit.>,
deep canonical correlation analysis (DCCA), <cit.>,
randomized non-linear CCA (RCCA), <cit.>,
kernel non-linear orthogonal iterations (KNOI), <cit.>, and
CCA through Hilbert-Schmidt independence criterion (SCCA-HSIC), <cit.>.
These CCA variants explicitly or implicitly rely on
singular value decomposition, and their performance highly depends on the
distribution of the singular values of the data matrices.
Since the data matrices have small number of dominating singular values,
we expect from a variable selection method that it can capture a
relatively small set of variables to reproduce similar accuracy,
measured in canonical correlation.
We need to bear in mind that the CCA-based methods add up all the information represented by the full collection of variables, whereas the projector-based selection relies on only a relatively small subset of those variables.
Figure <ref> shows the performance of ProjSe on the MNIST and MediaMill datasets with both linear and Gaussian kernels, as functions of the number of selected variables. The results are measured by the canonical correlation between the subsets of variables selected from the two views. Comparing the results to the other CCA methods in Table <ref> (taken from <cit.>), we observe comparable performance after 20 or 50 selected variables, while being orders of magnitude faster than the other methods.
Since ProjSe is fully deterministic, no variance is reported for it. The heaviest computation for ProjSe occurs at the beginning, when the eigenvalue decomposition of K_Y is calculated (see Table <ref>). Thus the running time varies only minimally when a different number of variables is selected.
This is also demonstrated in <ref> where the running times for MNIST and Cifar100 datasets are detailed.
As a variable selection method, we are interested in evaluating the stability of the selection process.
In order to measure this, we here consider the stability index <cit.>
and the average Pearson correlation of relevance <cit.>. In both measures, higher values indicate higher stability; the maximum value in both is 1.
First, the number of extracted variables is chosen from (1,2,…,5) in MNIST, and from (10,20,…,50) in MediaMill. For each number of selected variables, subsamples are taken with the following percentages of the entire training sets: (10%,20%,…,50%).
Then random subsets of the sizes given above are extracted 10 times.
The scores are computed for each number of selected variables and for each subsample size, and finally averaged over all random samples.
They are shown in
Figure <ref>, where the averages
for all pairs of subsets of variables and for all subsample sizes are presented.
§ CONCLUSION
In this paper we introduced a novel variable selection method for two-view settings. Our method is deterministic, selecting variables based on correlation defined with projection operators. The kernelised formulation of our approach paves way for efficient and highly scalable implementation, allowing the application of our method to datasets with millions of data samples. We empirically demonstrated this efficiency and the suitability of our approach for feature selection task, with both synthetic and real data.
§ DECLARATIONS
The authors wish to acknowledge the financial support of the Academy of Finland through grants 334790 (MAGITICS), 339421 (MASF) and 345802 (AIB), as well as the Global Programme of the Finnish Ministry of Education and Culture.
apalike
|
http://arxiv.org/abs/2307.00829v1
|
20230703081628
|
Deconvolutional determination of the nonlinearity in a semilinear wave equation
|
[
"Nicholas Hu",
"Rowan Killip",
"Monica Visan"
] |
math.AP
|
[
"math.AP"
] |
Deconvolutional determination of the nonlinearity in a semilinear wave equation
Nicholas Hu (Department of Mathematics, UCLA, [email protected])
Rowan Killip (Department of Mathematics, UCLA, [email protected])
Monica Vișan (Department of Mathematics, UCLA, [email protected])
=====
We demonstrate that in three space dimensions, the scattering behaviour of semilinear
wave equations with quintic-type nonlinearities uniquely determines the nonlinearity.
The nonlinearity is permitted to depend on both space and time.
§ INTRODUCTION
We consider the semilinear wave equation
(∂_tt - _x) u(t, x) = F(t, x, u(t, x)), (t, x) ∈×^3;
u(0,·) = u_0;
∂_t u(0,·) = u_1.
Under mild assumptions on the nonlinearity F : ×^3 ×→, we show
that this equation admits a small-data scattering theory and that the scattering operator
determines the nonlinearity. The specific class of nonlinearities we consider is
given in Definition <ref> and may be regarded as a generalization of the
energy-critical case. The main inspiration for the problem we study is the paper
<cit.> of Sá Barreto, Uhlmann, and Wang. Our methods, however, are more
strongly influenced by Killip, Murphy, and Vișan <cit.>.
The requirements that we impose on the nonlinearity are as follows:
A measurable function F : ×^3 ×→ will be called
admissible for <ref> if
* F(t, x, 0) = 0 for all t, x;
* |F(t, x, u) - F(t, x, v)| ≲ (|u|^4 + |v|^4) |u - v| for all u, v, uniformly in t, x; and
* F(t, x, -u) = -F(t, x, u) for all t,
x.
If F(t,x,u)=± |u|^4u, the resulting equation is the defocusing/focusing (depending on
the sign of the nonlinearity) energy-critical wave equation. This name reflects the fact
that in this case, the equation enjoys a scaling symmetry
u(t,x)↦ u^λ(t,x) =
λ ^1/2 u(λ t, λ x) for λ>0
that
preserves the energy of solutions
E(u)= ∫_ℝ^3 1/2 |∇ u(t,x)|^2 + 1/2 |∂_t u(t,x)|^2 ± 1/6 |u(t,x)|^6 dx.
Accordingly, we will be studying
equation (<ref>) with initial data (u_0,u_1) in the energy space Ḣ^1(^3)×
L^2(^3).
A function u : ×^3 → is said to be a strong
global solution of <ref> if (u, ∂_t u) ∈ C^0_t Ḣ^1_x (×^3) × C^0_t L^2_x (×^3), u ∈ L^5_t L^10_x (K ×^3) for
all compact sets K ⊆, and u satisfies the Duhamel formula
[ u(t); ∂_t u(t) ] =
(t) [ u_0; u_1 ] +
∫_0^t (t-s) [ 0; F(s) ] ds.
Here denotes the propagator for the linear wave equation, that is,
(t) [ cos(t|∇|) |∇|^-1sin(t|∇|); -|∇|sin(t|∇|) cos(t|∇|) ].
Here and in what follows we abbreviate u(t, ·) as u(t) and F(t, ·,
u(t)) as F(t).
For admissible nonlinearities, equation (<ref>) admits a small-data global
well-posedness and scattering theory.
Let F be an admissible nonlinearity for <ref>.
Then there exists an η > 0 such that <ref> has a unique global solution u
satisfying
(u, ∂_t u)_L^∞_t Ḣ^1_x × L^∞_t L^2_x +
u_L^5_t L^10_x(u_0, u_1)_Ḣ^1 × L^2
whenever (u_0, u_1) ∈ B_η, where
B_η(u_0, u_1) ∈Ḣ^1(^3) × L^2(^3) :
(u_0, u_1)_Ḣ^1 × L^2 < η.
This solution scatters in Ḣ^1(^3) × L^2(^3) as t →±∞,
meaning that there exist
necessarily unique asymptotic states
(u^±_0, u^±_1) ∈Ḣ^1(^3) × L^2(^3) for which
[ u(t); ∂_t u(t) ] -
(t) [ u^±_0; u^±_1 ]_Ḣ^1 ×
L^2→ 0
as t →±∞.
In addition, for all (u^-_0, u^-_1) ∈ B_η, there exists a unique global
solution u to <ref> and a unique asymptotic state (u^+_0, u^+_1) ∈Ḣ^1(^3) × L^2(^3) so that both limits in scattering hold.
The map (u_0, u_1) ↦ (u^+_0, u^+_1) defined implicitly by <Ref> on the open ball B_η⊆Ḣ^1(^3) × L^2(^3) is the inverse of what is often called the forward wave operator; in this paper, we will refer to it simply as the wave operator associated with F. The map (u^-_0, u^-_1) ↦ (u^+_0, u^+_1) is the scattering operator associated with F.
Our principal result is that either operator determines the nonlinearity completely.
Our hypotheses on the nonlinearity F do not demand any continuity in t or x. Avoiding
such a restriction is important for us as we wish to allow nonlinearities of the form
1_Ω (x) u^5, which model a nonlinear medium (whose shape we wish to determine)
surrounded by vacuum.
Without a continuity requirement, complete determination of the nonlinearity means
determination at (Lebesgue) almost every spacetime point. We can be very precise about the
spacetime points at which we determine the nonlinearity:
Suppose that F is an admissible nonlinearity for <ref>. A point (t, x) ∈×^3 will be called determinable for F if it is a Lebesgue point of
F(·, ·, u) for every rational u. The set of all such points will be
denoted F.
For each fixed u, the map (t,x)↦ F(t,x,u) is bounded and measurable and so almost
every point is a Lebesgue point. The countability of the rational numbers then guarantees
that almost every spacetime point is determinable.
Suppose that F and F̃ are admissible nonlinearities for <ref> and that B_η and B_η̃ are corresponding balls given by <Ref>.
If the scattering operators associated with F and F̃, or the wave operators associated with F and F̃, agree on B_η∩ B_η̃ (that is, on the smaller of the two balls), then F(t, x, ·) = F̃(t, x, ·) at every point (t, x) that is determinable for both F and F̃.
The question of whether the nonlinearity in a dispersive PDE is determined by its scattering
behaviour has been extensively studied <cit.>. Usually, rather strong assumptions are imposed on the nonlinearity
in order to obtain a positive answer.
In contrast, Killip, Murphy, and Vișan's deconvolution-based approach <cit.> enabled
them to determine power-type nonlinearities in a semilinear Schrödinger equation with only
moderate growth restrictions on the nonlinearities. Their approach is flexible and
technically simple, as demonstrated by its subsequent application to the determination of
coefficients <cit.> and inhomogeneities <cit.> of nonlinear
Schrödinger equations.
In this paper, we revisit the setting considered by Sá Barreto, Uhlmann, and Wang
<cit.>, who determined nonlinearities of the form F = F(u) in
<ref> under the following assumptions:
* F(u) = h(u) u for some even function h satisfying h(u)≈u^4 for all u;
* F'(u) u ∼ F(u) as u → 0 and as u →±∞;
* u ↦∫_0^u F(v) dv is convex;
* F^(j)(u)u^5-j for each 0 ≤ j ≤ 5; and
* F^(4)(u) = 0 if and only if u = 0.
By adapting the deconvolution technique of <cit.> to the setting of the wave
equation, we will prove that even more general nonlinearities of the form F = F(t, x, u)
can be determined under the weaker conditions of <Ref>.
Let us now turn to an overview of the paper, the method of <cit.>, and the principal
challenges to be overcome in applying it in the wave equation setting.
Our first task is to establish the existence, uniqueness, and long-time behaviour of
solutions to (<ref>) for small initial data and for admissible nonlinearities. This is
Theorem <ref>, which we prove in Section <ref>.
Following <cit.>, our approach to identifying the nonlinearity is through the
small-data asymptotics of the scattering and wave operators. These are presented in
Corollary <ref>, which gives a precise estimate on the difference between the full
operators and what is known as their Born approximation.
Under the Born approximation, the scattering/wave operators capture the spacetime integral
of u(t,x)F(t,x,u(t,x)), where u(t,x) is a solution of the linear wave equation.
This evidently represents a substantial `blurring' of the nonlinearity across different
values of t, x, and u. If the nonlinearity did not depend on t and x, then this
blurring would take the form of a convolution (over the multiplication group). By switching
to exponential variables, this then would become a convolution in the traditional sense. In
this way, the question of identifying the nonlinearity reduces to a deconvolution problem.
As we will discuss in Section <ref>, the uniqueness criterion for such
deconvolution problems is the well-known L^1 Tauberian theorem of Wiener; see
Theorem <ref>.
To overcome the dependence of the nonlinearity on space and time, we will employ a solution
of the linear wave equation that concentrates tightly at a single point in spacetime (while
also remaining small in scaling-critical norms). As noted earlier, we do not assume that
the nonlinearity is continuous in t or x; consequently, there are some subtleties to be
overcome in localizing the nonlinearity to a single spacetime point. This is the role of
Lemma <ref>. With this hurdle overcome, the uniqueness question is reduced to
the deconvolution problem presented in Proposition <ref>.
We now arrive at the crux of the matter: we need to find solutions to the linear wave
equation that lead to a deconvolution problem that can actually be solved. Concretely, we
must find a linear solution whose distribution function we can compute sufficiently
explicitly that we will be able to verify the hypotheses of Wiener's Tauberian theorem. The
distribution function for the solution we choose is computed in Lemma <ref>.
Although we are unable to compute the resulting Fourier transform precisely, we are
nonetheless able to verify that it is nonvanishing (see Proposition <ref>) and
consequently to apply the Tauberian theorem.
§.§ Acknowledgements
R.K. was supported by NSF grant DMS-2154022; M.V. was
supported by NSF grant DMS-2054194.
§.§ Notation
Throughout this paper, we employ the standard notation A B to indicate that A ≤
CB for some constant C > 0; if A B and B A, we write A ≈ B.
Occasionally, we adjoin subscripts to this notation to indicate dependence of the constant
C on other parameters; for instance, we write A _α, β B when A ≤
CB for some constant C > 0 depending on α, β.
§ SMALL-DATA SCATTERING
We begin by establishing the small-data scattering theory described in <Ref>.
This relies on a standard contraction mapping argument using Strichartz estimates.
If u : ×^3 → is a global solution of <ref>, then
(u, ∂_t u)_L^∞_t Ḣ^1_x × L^∞_t L^2_x +
u_L^5_t L^10_x(u_0, u_1)_Ḣ^1 × L^2 + F(t, x, u(t, x))_L^1_t L^2_x.
The contraction mapping argument constructs the solution from the Duhamel formula
[ u(t); ∂_t u(t) ] =
(t) [ u_0; u_1 ] +
∫_0^t (t-s) [ 0; F(s) ] ds.
Similarly, the solution with prescribed asymptotic state (u_0^-, u_1^-) as t→ -∞
is constructed from the formula
[ u(t); ∂_t u(t) ] =
(t) [ u^-_0; u^-_1 ] +
∫_-∞^t (t-s) [ 0; F(s) ] ds.
Let
X {u : ×^3 → :
(u, ∂_t u) ∈ C^0_t Ḣ^1_x(×^3) × C^0_t L^2_x(×^3),
u ∈ L^5_t L^10_x(×^3),
(u, ∂_t u)_L^∞_t Ḣ^1_x × L^∞_t L^2_x +
u_L^5_t L^10_x≤
2C (u_0, u_1)_Ḣ^1 × L^2},
where C is the implicit constant in the Strichartz estimates. Equipping X with the
metric
d(u, v) (u, ∂_t u) - (v, ∂_t v)_L^∞_t Ḣ^1_x ×
L^∞_t L^2_x +u-v_L^5_t L^10_x ,
we obtain a nonempty complete metric space (X, d).
For u ∈ X, we then define
(Φ(u))(t) cos(t|∇|) u_0 + |∇|^-1sin(t|∇|) u_1 + ∫_0^t |∇|^-1sin((t-s)|∇|) F(s) ds
so that
[ (Φ(u))(t); (∂_t Φ(u))(t) ] =
(t) [ u_0; u_1 ] +
∫_0^t (t-s) [ 0; F(s) ] ds.
To construct the solution of <ref>, we will show that Φ is a contraction on
(X, d) whenever (u_0, u_1) ∈ B_η and η is sufficiently small. The
solution sought will then be the fixed point of Φ whose existence and uniqueness
are guaranteed by the Banach fixed point theorem.
We first verify that Φ maps X into itself. Let C_F be a constant such that
F(t, x, u)≤ C_F u^5 for all (t, x) ∈×^3. If u ∈
X, then by the Strichartz estimates, we have
(Φ(u), ∂_t Φ(u))_L^∞_t Ḣ^1_x × L^∞_t
L^2_x + Φ(u)_L^5_t L^10_x
≤ C((u_0, u_1)_Ḣ^1 × L^2 +
F(t, x, u(t, x))_L^1_t L^2_x)
≤ C((u_0, u_1)_Ḣ^1 × L^2 + C_F u_L^5_t
L^10_x^5)
≤ C[1 + C_F (2C η)^4 (2C)]
(u_0, u_1)_Ḣ^1 × L^2
≤ 2C (u_0, u_1)_Ḣ^1 × L^2,
provided that η is sufficiently small.
To show that (Φ(u))(t) and (∂_t Φ(u))(t) are also continuous in
t, fix a t_0 ∈ and consider, without loss of generality, the case when t ≥
t_0. The first term on the right-hand side of formula duhamel-phi
converges to (t_0)(u_0, u_1) in Ḣ^1 × L^2 as t → t_0 since
(t) is strongly continuous in t. As for the second term, we observe that
∫_0^t(-s)
[ 0; F(s) ] ds -
∫_0^t_0(-s)
[ 0; F(s) ] ds_Ḣ^1 × L^2
≤∫_t_0^tsin(-s)/ F(s) ds_Ḣ^1 +
∫_t_0^tcos(-s) F(s) ds_L^2
∫_t_0^tF(s)_L^2 ds
u_L^5_t L^10_x ([t_0, t] ×^3)^5 → 0 as
t→ t_0,
by the dominated convergence theorem. Consequently, the second term converges to (t_0) ∫_0^t_0(-s) (0, F(s)) ds in Ḣ^1 × L^2 as t →
t_0 since (t) is strongly continuous and uniformly bounded in t. Altogether,
this shows that Φ(u) ∈ X as required.
Now if u, v ∈ X, the Strichartz estimates also yield
d(Φ(u), Φ(v))
F(t, x, u(t, x)) - F(t, x, v(t, x))_L^1_t L^2_x
(u^4 + v^4) u-v_L^1_t L^2_x
(u_L^5_t L^10_x^4 + v_L^5_t L^10_x^4)
u-v_L^5_t L^10_x
[(2Cη)^4 + (2Cη)^4]
d(u, v),
which shows that Φ is a contraction for sufficiently small η.
Next, we prove that the solution u scatters in Ḣ^1 × L^2. As (t)
is unitary on Ḣ^1 × L^2, this amounts to showing that the
functions ^-1(t) (u(t), ∂_t u(t)) converge in Ḣ^1 × L^2 as
t →±∞. By time reversal symmetry, it suffices to consider t →
+∞. For t_2 ≥ t_1 ≥ T,
^-1(t_2)
[ u(t); ∂_t u(t) ] -
^-1(t_1)
[ u(t); ∂_t u(t) ]_Ḣ^1 × L^2
=
∫_0^t_2(-s)
[ 0; F(s) ] ds -
∫_0^t_1(-s)
[ 0; F(s) ] ds
_Ḣ^1 × L^2
u_L^5_t L^10_x ([t_1, t_2] ×^3)^5→ 0 as T→∞,
by the dominated convergence theorem. We conclude that ^-1(t)(u(t),
∂_t u(t)) is Cauchy in Ḣ^1 × L^2 as t →∞ and therefore
convergent.
This completes the construction of the wave operator. The construction of the scattering
operator, using duhamel-scattering in place of duhamel, is
entirely analogous.
We note that the foregoing argument shows that the wave operator is given by
F([ u_0; u_1 ]) =
[ u_0; u_1 ] +
∫_0^∞(-t) [ 0; F(t) ] dt,
where u is the solution of <ref> with initial data (u_0, u_1).
Similarly, the scattering operator is given byF([ u^-_0; u^-_1 ]) =
[ u^-_0; u^-_1 ] +
∫_-∞^∞(-t) [ 0; F(t) ] dt,
where u is the solution of <ref> that scatters to (u^-_0, u^-_1) as t →
-∞.
Suppose that F is an admissible nonlinearity for <ref> and that B_η is a
corresponding ball given by <Ref>.
If u_lin denotes the solution of the linear wave equation with initial data
(u_0, u_1) ∈ B_η, then in Ḣ^1 × L^2 we have
F([ u_0; u_1 ]) =
[ u_0; u_1 ] +
∫_0^∞(-t) [ 0; F_lin(t) ] dt +
[ u_0; u_1 ]_Ḣ^1 × L^2^9.
Similarly, given (u^-_0, u^-_1) ∈ B_η, let u be the solution of
<ref> that scatters to (u^-_0, u^-_1) as t → -∞.
If u_lin denotes the solution of the linear wave equation with
initial data (u_0, u_1) (u(0), ∂_t u(0)) ∈ B_η, thenF([ u_0^-; u_1^- ]) =
[ u_0^-; u_1^- ] +
∫_-∞^∞(-t)
[ 0; F_lin(t) ] dt +
[ u_0; u_1 ]_Ḣ^1 × L^2^9.
Here F_lin(t) is an abbreviation for F(t, ·, u_lin(t)).
We will derive the asymptotic expansion sop-asymp
from formula duhamel-sop for the scattering
operator; the derivation of wop-asymp
from formula duhamel-wop
for the wave operator is similar.
Comparing duhamel-sop with sop-asymp,
we see that the latter follows from
∫_-∞^∞(-t) [ 0; F(t) -
F_lin(t) ] dt_Ḣ^1 × L^2[ u_0; u_1 ]_Ḣ^1 × L^2^9,
which we will prove by duality.
To this end, fix some (v_0, v_1) ∈Ḣ^1 × L^2 and
let v_lin denote
the solution of the linear wave equation with initial data (v_0, v_1). Then
∫_-∞^∞(-t) [ 0; F(t) -
F_lin(t) ] dt[ v_0; v_1 ]_Ḣ^1 × L^2
=
∫_-∞^∞[ 0; F(t) -
F_lin(t) ] (t) [ v_0; v_1 ]_Ḣ^1 × L^2 dt
=
∫_-∞^∞[ 0; F(t) -
F_lin(t) ][ v_lin(t); ∂_t v_lin(t) ]_Ḣ^1 × L^2 dt
=
∫_-∞^∞F(t) -
F_lin(t)∂_t v_lin(t)_L^2 dt.
As a result, it will suffice to show that
∫_-∞^∞F(t) -
F_lin(t)∂_t v_lin(t)_L^2 dt[ u_0; u_1 ]_Ḣ^1 × L^2^9
[ v_0; v_1 ]_Ḣ^1 × L^2.
To estimate this integral, we first employ Hölder's inequality to deduce that
∫_-∞^∞F(t) - F_lin(t)∂_t v_lin(t)_L^2 dt
≤F(t, x, u(t, x)) - F(t, x, u_lin(t, x))_L^1_t L^2_x·∂_t v_lin_L^∞_t L^2_x
(u^4_L^5_t L^10_x + u_lin^4_L^5_t
L^10_x)
u - u_lin_L^5_t L^10_x·∂_t v_lin_L^∞_t L^2_x .
equation
By soln-estimate and the Strichartz estimates, we have
u_L^5_t L^10_x^4
(u_0, u_1)_Ḣ^1 × L^2^4 ,
u_lin_L^5_t L^10_x^4
(u_0, u_1)_Ḣ^1 × L^2^4 ,
u - u_lin_L^5_t L^10_x F(t, x, u(t, x))_L^1_t L^2_xu^5_L^5_t L^10_x(u_0, u_1)_Ḣ^1 × L^2^5 ,
∂_t v_lin_L^∞_t L^2_x (v_0, v_1)_Ḣ^1 × L^2 .
Inserting these estimates into sop-asymp-holder
yields sop-asymp-duality, completing the proof of the corollary.
§ REDUCTION TO A CONVOLUTION EQUATION
The next step is the reduction of the proof of <Ref> to the consideration of a
convolution equation. As in <cit.>, the central idea is to exploit the Born
approximation for well-chosen solutions of the linear wave equation. Indeed, the principal
obstacle to be overcome in implementing that strategy is to find solutions of the linear
wave equation with the key properties we need. Most fundamentally, we need solutions for
which we are not only able to compute the distribution function (i.e., the measure of
spacetime superlevel sets), but can also prove that the Fourier transform of a certain
function w connected with it does not vanish.
Our solutions will be built from the radially symmetric solution
u_lin(t, r) (f(r-t) - f(r+t))/r
of the linear wave equation (∂_tt - Δ_x) u(t, x) = 0 on ℝ×ℝ^3,
where r |x| and f(s) max{1-|s|, 0}.
This solution arises from the initial data
u_0(x)
u_lin(0, x)
= 0 ∈Ḣ^1(^3),
u_1(x) ∂_t u_lin(0, x)
=
2/x if 0 < x≤ 1,
0 if x > 1∈ L^2(^3).
In addition, u_lin(t, x) ≥ 0 for t > 0 and u_lin(t, x) is odd in
t.
The next lemma gives a formula for the distribution function of u_lin. The
function w connected with this solution is presented in (<ref>). The nonvanishing
of the Fourier transform of w will be demonstrated in Proposition <ref>.
For λ > 0, let
m(λ) |{(t, x) ∈ (0, ∞) ×ℝ^3 : u_lin(t, x) > λ}|.
Then
m(λ) = 4π/3(1/(2λ^3) - 2/(λ+2)^3) 1_(0, 2)(λ).
For t, λ > 0, let
m(t; λ) |{x ∈ℝ^3 : u_lin(t, x) > λ}|,
so that
m(λ) = ∫_0^∞ m(t; λ) dt.
We will evaluate this integral by analyzing u_lin on the spacetime regions
0 < t < 1/2, 1/2 < t < 1, and t > 1.
On the region 0 < t < 1/2, we have
u_lin(t, r) =
2 if 0 < r < t,
2t/r if t ≤ r < 1-t,
1-r+t/r if 1-t ≤ r < 1+t,
0 otherwise.
Hence, for 0 < t < 1/2,
m(t; λ) =
4π/3·
(1+t/1+λ)^3 if 0 < λ < 2t/1-t,
(2t/λ)^3 if 2t/1-t≤λ < 2,
0 otherwise.
Therefore, the contribution of this region to the right-hand side of (<ref>) is
∫_0^1/2 m(t; λ) dt = ∫_0^1/24π/3(1+t/1+λ)^3
1_0 < λ < 2t/1-t(t) dt
+ ∫_0^1/24π/3(2t/λ)^3
1_2t/1-t≤λ < 2(t) dt
=
[
∫_λ/λ+2^1/24π/3(1+t/1+λ)^3 dt
]
1_(0, 2)(λ)
+
[
∫_0^λ/λ+24π/3(2t/λ)^3 dt
]
1_(0, 2)(λ)
=
4π/3(81/64(λ+1)^3 - 4(λ+1)/(λ+2)^4)
1_(0, 2)(λ) +
4π/3·2λ/(λ+2)^4
1_(0, 2)(λ)
= 4π/3(81/64(λ+1)^3 - 2/(λ+2)^3) 1_(0,
2)(λ).
On the region 1/2 < t < 1, we have
u_lin(t, r) =
2 if 0 < r < 1-t,
1+r-t/r if 1-t ≤ r < t,
1-r+t/r if t ≤ r < 1+t,
0 otherwise.
Hence, for 1/2 < t < 1,
m(t; λ) =
4π/3·
(1+t/1+λ)^3 if 0 < λ < 1/t,
(1-t/λ-1)^3 if 1/t≤λ < 2,
0 otherwise.
Therefore, the contribution of this region to the right-hand side of (<ref>) is
∫_1/2^1 m(t; λ) dt = ∫_1/2^1 4π/3(1+t/1+λ)^3 1_0 < λ < 1/t(t)
dt
+ ∫_1/2^1 4π/3(1-t/λ-1)^3
1_1/t≤λ < 2(t) dt
= [ ∫_1/2^1 4π/3(1+t/1+λ)^3 dt ] 1_(0, 1](λ)
+ [ ∫_1/2^1/λ4π/3(1+t/1+λ)^3 dt ] 1_(1, 2)(λ)
+ [ ∫_1/λ^1 4π/3(1-t/λ-1)^3 dt ] 1_(1, 2)(λ)
= 4π/3·175/64(λ+1)^3 1_(0, 1](λ)
+ 4π/3(λ+1/4λ^4 - 81/64(λ+1)^3) 1_(1,
2)(λ)
+ 4π/3·λ-1/4λ^4 1_(1, 2)(λ)
= 4π/3·175/64(λ+1)^3 1_(0, 1](λ)
+ 4π/3(1/2λ^3 - 81/64(λ+1)^3) 1_(1,
2)(λ).
equation1
On the region t > 1, we have
u_lin(t, r) =
1+r-t/r if t-1 ≤ r < t,
1-r+t/r if t ≤ r < t+1,
0 otherwise.
Hence, for t > 1,
m(t; λ) =
4π/3·
(t+1/λ+1)^3 - (t-1/1-λ)^3 if 0 < λ <
1/t,
0 otherwise.
Therefore, the contribution of this region to the right-hand side of (<ref>) is
∫_1^∞ m(t; λ) dt
= ∫_1^∞4π/3[(t+1/λ+1)^3 -
(t-1/1-λ)^3]
1_0 < λ < 1/t(t) dt
= {∫_1^1/λ4π/3[(t+1/λ+1)^3 -
(t-1/1-λ)^3] dt }
1_(0, 1)(λ)
= 4π/3(1/2λ^3 - 4/(λ+1)^3) 1_(0, 1)(λ).
equation1
Finally, combining (<ref>) with int1, int2, and
int3 completes the proof of the lemma.
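The closed form for m(λ) can be cross-checked numerically. The following sketch (ours, purely illustrative) integrates the indicator of the superlevel set over a box that contains it and compares the result with the formula of the lemma:

    import numpy as np

    def u_lin(t, r):
        f = lambda s: np.maximum(1.0 - np.abs(s), 0.0)
        return (f(r - t) - f(r + t)) / r

    def m_numeric(lam, n=2000):
        # the superlevel set is contained in {0 < t < 1/lam + 1, 0 < r < 1/lam + 2}
        t = np.linspace(1e-6, 1.0 / lam + 1.0, n)
        r = np.linspace(1e-6, 1.0 / lam + 2.0, n)
        T, R = np.meshgrid(t, r, indexing="ij")
        ind = u_lin(T, R) > lam
        return 4 * np.pi * np.sum(ind * R ** 2) * (t[1] - t[0]) * (r[1] - r[0])

    def m_closed(lam):
        return 4 * np.pi / 3 * (1 / (2 * lam ** 3) - 2 / (lam + 2) ** 3)

    for lam in (0.5, 1.0, 1.5):
        print(lam, m_numeric(lam), m_closed(lam))   # the two values agree to a few digits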
To continue, we generate further solutions of the linear wave equation using the scaling
symmetry. Specifically, for positive parameters α and ϵ, the rescaled
function u_lin^α, ϵ defined as
u_lin^α, ϵ(t, x)
α u_lin((α/ϵ)^2 t, (α/ϵ)^2 x)
solves the linear wave equation with initial data
u_0^α, ϵ(x)
u_lin^α, ϵ(0, x)
=
α u_0((α/ϵ)^2 x)
= 0,
u_1^α, ϵ(x)
∂_t u_lin^α, ϵ(0, x)
= (α/ϵ)^2 α u_1((α/ϵ)^2 x).
Under this rescaling, we have u_0^α, ϵ_Ḣ^1 = ϵu_0_Ḣ^1 and u_1^α, ϵ_L^2 = ϵu_1_L^2, so from initial-data we compute that
(u_0^α, ϵ, u_1^α, ϵ)_Ḣ^1 × L^2^2
= ϵ^2 (u_0, u_1)_Ḣ^1 × L^2^2
= 16 πϵ^2.
In particular, if F is an admissible nonlinearity for <ref> and B_η is a
corresponding ball given by <Ref>, then
(u_0^α, ϵ, u_1^α, ϵ) ∈ B_η
for all sufficiently small ϵ.
We will also rely on the observation that u_lin(t, x) = ∂_t
v_lin(t, x), where v_lin is itself a radially symmetric solution of the
linear wave equation on ×^3 with initial data
v_0(x) v_lin(0, x) =
x-2 if 0 < x≤ 1,
-1/x if x > 1∈Ḣ^1(^3),
v_1(x) ∂_t v_lin(0, x) = 0 ∈ L^2(^3).
Thus, u_lin^α, ϵ(t, x) = ∂_t v^α,
ϵ_lin(t, x), where the rescaled function v_lin^α,
ϵ defined as
v_lin^α, ϵ(t, x)
(α/ϵ)^-2α v_lin((α/ϵ)^2 t, (α/ϵ)^2 x)
solves the linear wave equation with initial data
v_0^α, ϵ(x)
v_lin^α, ϵ(0, x)
=
(α/ϵ)^-2α v_0((α/ϵ)^2 x),
v_1^α, ϵ(x)
∂_t v_lin^α, ϵ(0, x)
=
α v_1((α/ϵ)^2 x)
= 0.
Under this rescaling, we have
(v_0^α, ϵ, v_1^α, ϵ)_Ḣ^1 × L^2^2
= (α/ϵ)^-6α^2 (v_0, v_1)_Ḣ^1 × L^2^2
= 16 πϵ^6 / 3α^4.
Suppose that F and F̃ are admissible nonlinearities for <ref>.
For (t_0, x_0) ∈F∩F̃ and τ∈, define
H(τ; t_0, x_0)
e^-4τ∂ F/∂ u(t_0, x_0, e^τ) +
e^-5τ F(t_0, x_0, e^τ) ,
H̃(τ; t_0, x_0)
e^-4τ∂F̃/∂ u(t_0, x_0, e^τ) +
e^-5τF̃(t_0, x_0, e^τ) .
Then H and H̃ are bounded
and, under the hypotheses of <Ref>, we have
H * w = H̃ * w,
where
w(τ) (e^-3τ - 4e^-6τ/(e^-τ +
1)^3) 1_(0, ∞)(τ).
The proof of this proposition relies on the following result, which shows that in the Born
approximation described in Corollary <ref>, we may replace F(t,x,u) by F(t_0,x_0,
u) up to acceptable errors.
Suppose that F is an admissible nonlinearity for <ref>.
Then for all (t_0, x_0) ∈F, we have
∫_-∞^∞F(t, x, u_lin^α, ϵ(t-t_0, x-x_0))
u_lin^α, ϵ(t-t_0, x-x_0)_L^2_x dt
=
∫_-∞^∞F(t_0, x_0, u_lin^α, ϵ(t-t_0, x-x_0))
u_lin^α, ϵ(t-t_0, x-x_0)_L^2_x dt
+ [α]ϵ^8
as ϵ→ 0.
We postpone the proof of Lemma <ref> until after we have completed that of
Proposition <ref>.
We only consider the case where the scattering operators agree, as the wave operators
can be treated similarly. By time and space translation symmetry, it suffices to treat
the case (t_0, x_0) = (0,0).
Let G(t, x, u) F(t, x, u) u so that
∫_-∞^∞F(0, 0, u_lin(t, x))u_lin(t, x)_L^2_x dt
=
∫_-∞^∞∫_^3
G(0, 0, u_lin(t, x)) dx dt.
By the fundamental theorem of calculus, Fubini's theorem, and <Ref>,
∫_-∞^∞∫_^3
G(0, 0, u_lin(t, x)) dx dt
=
2
∫_0^∞∂ G/∂ u(0, 0, λ)
m(λ) dλ.
Hence,∫_-∞^∞F(0, 0, u_lin^α, ϵ(t, x))
u_lin^α, ϵ(t, x)_L^2_x dt
=
2
(α/ϵ)^-8∫_0^∞∂ G/∂ u(0, 0, λ)
m(λ/α) dλ.
Performing the change of variables λ e^τ, we obtain
∫_-∞^∞F(0, 0, u_lin^α, ϵ(t, x))
u_lin^α, ϵ(t, x)_L^2_x dt
=
2ϵ^8/α^8∫_-∞^∞∂ G/∂ u(0, 0, e^τ) e^τ
m(e^τ - logα) dτ
=
2ϵ^8/α^8∫_-∞^∞
H(τ) e^6τ
m(e^τ - logα)
dτ
=
2ϵ^8/α^8·(2α)^6 π/12∫_-∞^∞
H(τ)
·12/π
e^-6(log 2α - τ) m(e^τ - logα) dτ
=
32 πϵ^8/3 α^2∫_-∞^∞
H(τ) w(log 2α - τ) dτ
=
32 πϵ^8/3 α^2
(H * w)(log 2α),
equation1
where w(τ) = 12/π e^-6τ m(e^-(τ-log 2)) is as given by
weight.
On the other hand, if F_lin^α, ϵ(t) F(t, ·,
u_lin^α, ϵ(t)), then
∫_-∞^∞F_lin^α, ϵ(t)
u_lin^α, ϵ(t)_L^2 dt
=
∫_-∞^∞F_lin^α, ϵ(t)∂_t v_lin^α,
ϵ(t)_L^2 dt
=
∫_-∞^∞[ 0; F_lin^α, ϵ(t) ] (t) [ v_0^α, ϵ; v_1^α, ϵ ]_Ḣ^1 × L^2 dt
=
∫_-∞^∞(-t)
[ 0; F_lin^α, ϵ(t) ] dt[ v_0^α, ϵ; v_1^α, ϵ ]_Ḣ^1 × L^2.
It follows from <Ref> that agreement of the scattering operators
implies that∫_-∞^∞F_lin^α, ϵ(t)
u_lin^α, ϵ(t)_L^2 dt
=
∫_-∞^∞F̃_lin^α, ϵ(t)
u_lin^α, ϵ(t)_L^2 dt
+
(u_0^α, ϵ, u_1^α, ϵ)_Ḣ^1 ×
L^2^9·(v_0^α, ϵ, v_1^α, ϵ)_Ḣ^1 × L^2
=
∫_-∞^∞F̃_lin^α, ϵ(t)
u_lin^α, ϵ(t)_L^2 dt
+ [α]ϵ^12.
equation1
Now given a τ_0 ∈, let α1/2 e^τ_0 so that τ_0
= log 2α. Combining <Ref>, F-to-H, and
F-to-Ftilde, we deduce that
(H * w)(τ_0) = (H̃ * w)(τ_0) + 1 + ϵ^4 as ϵ→ 0.
Taking ϵ→ 0, we arrive at the conclusion.
Fix a point (t_0, x_0) ∈F and
let
G^α, ϵ(t, x, u)
F(t_0 + (α/ϵ)^-2 t, x_0 + (α/ϵ)^-2 x, u) u
so that
∫_-∞^∞F(t, x, u_lin^α, ϵ(t-t_0, x-x_0))
u_lin^α, ϵ(t-t_0, x-x_0)_L^2_x dt
=
(α/ϵ)^-8∫_-∞^∞∫_^3
G^α, ϵ(t, x, α u_lin(t, x)) dx dt.
Then the conclusion sought can be written as follows: as ϵ→ 0,
∫_-∞^∞∫_^3
G^α, ϵ(t, x, α u_lin(t, x))
- G^α, ϵ(0, 0, α u_lin(t, x)) dx dt
= [α]1.
To prove this, we first recall from the proof of <Ref> that
u_lin(t, x)
≤
2 · 1_0 < x < 2(t, x) if 0 < t < 1,
1/t· 1_t-1 ≤x < t+1(t, x) if t > 1.
Hence
∫_-∞^∞∫_^3u_lin(t, x)^6 dx dt
= 2 ∫_0^∞∫_^3 |u_lin(t, x)|^6 dx dt
1 + ∫_1^∞(1/t)^6 t^2 dt
< ∞.
Thus, given any η > 0, the dominated convergence theorem guarantees that there
exists an R > 0 (depending on η) so that
∬_t + x > R
G^α, ϵ(t, x, α u_lin(t, x)) -
G^α, ϵ(0, 0, α u_lin(t, x)) dx dt
_α∬_t + x > Ru_lin(t, x)^6 dx dt < η.
To estimate the integral in gae-limit over the complementary region t
+ x≤ R, we partition it into the sets
U_n^R (t, x) ∈×^3 : t + x≤ R and ⌈ 2α⌉ n/N
≤α u_lin(t, x)
< ⌈ 2α⌉ (n+1)/N,
where N is some large positive integer and n≤ N.
For (t, x) ∈ U_n^R, we then have
G^α, ϵ(t, x, α u_lin(t, x)) -
G^α, ϵ(t, x, ⌈ 2α⌉ n/N) _α 1/N,
G^α, ϵ(0, 0, ⌈ 2α⌉ n/N) -
G^α, ϵ(0, 0, α u_lin(t, x)) _α 1/N,
with implicit constants depending only on α. As ⌈ 2α⌉
n/N∈ℚ, replacing the true values of α u_lin with these
approximations will allow us to exploit the hypothesis that (t_0,x_0) is a
determinable point. To employ these approximations, we first note that
∬_t + x≤ R
G^α, ϵ(t, x, α u_lin(t, x)) -
G^α, ϵ(0, 0, α u_lin(t, x)) dx dt
_α∑_n≤ N∬_U_n^-1mu R|G^α, ϵ(t, x, ⌈ 2α⌉ n/N) -
G^α, ϵ(0, 0, ⌈ 2α⌉ n/N)| dx dt
+ R^4/N .
For each n, a change of variables gives
∬_U_n^R|G^α, ϵ(t, x, ⌈ 2α⌉ n/N) -
G^α, ϵ(0, 0, ⌈ 2α⌉ n/N)| dx dt
_α,Rϵ^-8∬_t-t_0+x-x_0≤ (α/ϵ)^-2 R|F(t, x, ⌈ 2α⌉ n/N) -
F(t_0, x_0, ⌈ 2α⌉ n/N)| dx dt,
which tends to zero as ϵ→ 0 because (t_0,x_0) is a determinable point.
Therefore, choosing N sufficiently large (depending on η) and then ϵ
sufficiently small (depending on η), we obtain
∬_t + x≤ R
G^α, ϵ(t, x, α u_lin(t, x)) -
G^α, ϵ(0, 0, α u_lin(t, x)) dx dt_αη.
Combining this with (<ref>) and recalling that η was arbitrary, we deduce
gae-limit.
In view of Proposition <ref>, the proof of <Ref> reduces to showing that the
convolution equation conv-eqn implies equality of the nonlinearities F and
F̃. We turn our attention to this task in the next section.
§ DECONVOLUTIONAL DETERMINATION OF THE NONLINEARITY
The final step in the proof of <Ref> consists of formally “deconvolving” both
sides of equation (<ref>) with w to arrive at H = H̃. This in turn
implies that F(t_0, x_0, ·) = F̃(t_0, x_0, ·). The tool that will
enable us to do so is a Tauberian theorem of Wiener <cit.>. For the following
formulation of the Tauberian theorem, as well as a very elegant proof, see Korevaar
<cit.>.
Let f ∈ L^1() and g ∈ L^∞(). If f * g = 0 and f̂ has no
zeroes, then g = 0.
Let w be as defined in <Ref>. Then ŵ has no zeroes.
Assuming that this proposition holds (so that Wiener's Tauberian theorem is
applicable to w), <Ref> follows immediately, as we demonstrate next.
Fix a point (t_0, x_0) ∈F∩F̃ and define H and H̃
as in <Ref> so that (H - H̃) * w = 0. It follows from
Theorem <ref> and <Ref> that H = H̃. In particular,
ddτ[ e^τ F(t_0,x_0,e^τ) - e^τF̃(t_0,x_0,e^τ)] =0,
from which it follows that F(t_0, x_0, ·) = F̃(t_0, x_0, ·).
We decompose w as w = w_0 + w_1, where
w_0(τ)
( e^-3τ/2)
1_(0, ∞)(τ),
w_1(τ)
( e^-3τ/2
-4e^-6τ/(e^-τ + 1)^3) 1_(0, ∞)(τ).
First, we compute that
ŵ_̂0̂(ξ)
= ∫_0^∞e^-3τ/2· e^-iξτ dτ
= 1/6 + 2iξ .
As 8e^-3τ≤ (e^-τ + 1)^3 ≤ 8 for all τ∈ (0, ∞),
we also have w_1(τ) ≥ 0 and so
ŵ_̂1̂(ξ)≤∫_0^∞e^-3τ - e^-6τ/2 dτ
= 1/12 .
Using the expression w0-expr for ŵ_̂0̂(ξ), we find that
ŵ_̂0̂(ξ) > 1/12 whenever ξ^2 < 27,
which implies that ŵ(ξ) > 0 for all such ξ.
To handle the remaining ξ, we integrate by parts to obtain
ŵ_̂1̂(ξ)
= ∫_0^∞d/dτ( e^-3τ/2
-4e^-6τ/(e^-τ + 1)^3)
e^-iξτ/iξ dτ.
It is straightforward to verify that
A(τ)
- d/dτ( e^-3τ/2)
and
B(τ)
d/dτ( - 4e^-6τ/(e^-τ + 1)^3)
satisfy 0 ≤2/3 B(τ) ≤ A(τ) for all τ∈ (0, ∞). Hence
ŵ_̂1̂(ξ)≤1/ξ∫_0^∞B(τ) - A(τ) dτ≤1/ξ∫_0^∞ A(τ) - 1/3 B(τ) dτ
= 1/3ξ .
Using the expression w0-expr for ŵ_̂0̂(ξ) again, we find that
ŵ_̂0̂(ξ) > 1/3ξ whenever ξ^2 > 36/5,
which implies that ŵ(ξ) > 0 for all such ξ.
As 36/5<27, we conclude that ŵ(ξ) ≠ 0 for all ξ∈, as was
to be shown.
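The non-vanishing of ŵ can also be observed numerically; the following short check (ours) evaluates ŵ(ξ) = ∫_0^∞ w(τ)e^{-iξτ} dτ by quadrature and confirms that its modulus stays bounded away from zero on a large window of frequencies:

    import numpy as np
    from scipy.integrate import quad

    def w(tau):
        return np.exp(-3 * tau) - 4 * np.exp(-6 * tau) / (np.exp(-tau) + 1) ** 3

    def w_hat(xi):
        # the tail beyond tau = 50 is of size exp(-150) and can be neglected
        re, _ = quad(w, 0, 50, weight="cos", wvar=xi, limit=400)
        im, _ = quad(w, 0, 50, weight="sin", wvar=xi, limit=400)
        return complex(re, -im)

    xis = np.linspace(-40, 40, 801)
    print(min(abs(w_hat(xi)) for xi in xis))   # strictly positive; ~ 1/(2|xi|) for large |xi|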
|
http://arxiv.org/abs/2307.01922v2
|
20230704211021
|
A topological gap theorem for the $π_2$-systole of PSC 3-manifolds
|
[
"Kai Xu"
] |
math.DG
|
[
"math.DG",
"53C20, 53E10"
] |
A topological gap theorem for the π_2 – systole of PSC 3-manifolds
Kai Xu
==================================================================
Given a closed orientable 3-manifold M with scalar curvature greater than or equal to 1, it is known that if π_2(M)≠0 then the π_2 – systole of M is at most 8π. We prove the following gap theorem: if M is further not a quotient of S^2× S^1, then the π_2 – systole of M is no greater than an improved constant c≈ 5.44π. This statement follows as a new application of Huisken and Ilmanen's weak inverse mean curvature flow.
§ INTRODUCTION
The weak formulation of the inverse mean curvature flow, developed by Huisken and Ilmanen in <cit.>, has seen remarkable success in the study of positive scalar curvature (PSC) in dimension three. One of its best-known applications is the proof of the Riemannian Penrose inequality for connected horizons <cit.> (a different proof of the general case was obtained by Bray <cit.>). In this paper, we exploit a new use of the weak inverse mean curvature flow to obtain geometric inequalities for PSC 3-manifolds. All the manifolds that we consider are connected and oriented.
Given a closed Riemannian 3-manifold (M,g) with π_2(M)≠0, the π_2-systole is defined as
π_2(M,g)=inf{|S^2|_f^*g: f:S^2→ M is an immersion with [f] ≠ 0∈π_2(M)}.
It was shown by Bray, Brendle and Neves <cit.> that when π_2(M)≠0 one always has
π_2(M,g)·min_M R_g≤ 8π,
where R_g denotes the scalar curvature of g. (<ref>) is a consequence of the stability inequality for a smooth area minimizer in π_2(M), which always exists by a theorem of Meeks and Yau <cit.>. For the case of equality, it is shown in <cit.> that M is isometrically covered by a standard cylinder S^2×ℝ. A similar inequality for embedded ℝP^2 is obtained by Bray, Brendle, Eichmair and Neves <cit.>. The non-compact analogue of (<ref>) is shown by Zhu <cit.>, whose proof utilizes Gromov's μ-bubbles <cit.>. See also <cit.> for another application of relevant techniques.
The new observation of this paper is the presence of a universal gap for the π_2 – systolic inequality when M is topologically different from the rigidity case of (<ref>). The following is our main theorem:
Suppose M is a closed 3-manifold such that π_2(M)≠0 and M is not covered by S^2× S^1. Then for any metric g on M we have
π_2(M,g)·min_M R_g≤ c,
where
c=24π·(2-√(2))/(4-√(2))≈ 5.44π.
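The numerical value of the constant is easy to confirm; the following short symbolic check (ours) anticipates the computation at the end of the proof of Theorem <ref>, where the bound arises from the Geroch monotonicity at time t = log 2 with H(0) = 0 and χ(Σ_s) ≤ 2:

    import sympy as sp

    lam_A0, t = sp.symbols("lamA0 t", positive=True)
    rhs = 16 * sp.pi * (1 - sp.exp(-t / 2)) - sp.Rational(2, 3) * lam_A0 * (sp.exp(t) - sp.exp(-t / 2))
    bound = sp.solve(sp.Eq(rhs.subs(t, sp.log(2)), 0), lam_A0)[0]

    c = 24 * sp.pi * (2 - sp.sqrt(2)) / (4 - sp.sqrt(2))
    assert abs(float(bound - c)) < 1e-12
    print(float(c / sp.pi))   # approximately 5.437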
It is known that a closed 3-manifold M with uniformly positive scalar curvature has macroscopic dimension at most one <cit.>, which means that at large scale M resembles a graph with thickened edges; see the figure in <cit.>. Heuristically speaking, the improved inequality (<ref>) is captured at any triple junction in the graph. Consider a decomposition of M by cutting along a maximal collection of disjoint stable minimal surfaces. This is possible for generic metrics; see also <cit.> for similar ideas of decomposition. The heuristic picture suggests that (<ref>) can be obtained from the pieces that have at least three boundary components. This observation leads to a localized version of Theorem <ref>, which we refer the reader to Theorem <ref> for details.
The proof is sketched as follows. Denote A_0=π_2(M), and let Σ⊂ M be an area minimizer with |Σ|=A_0. Let Σ̃ be a lift of Σ onto the universal cover of M, which we denote by M̃. Consider a proper weak solution {Σ_t} of the inverse mean curvature flow in M̃ that starts with Σ̃. The exponential growth of π_1(M) implies the existence of such a solution <cit.>. Let T be the infimum of the times t such that Σ_t has more than one spherical component. One shows that the area of every spherical component of Σ_t is at least A_0, hence T≥log2 by the exponential growth of |Σ_t|. Therefore χ(Σ_t)≤ 2 for t<log 2, which in turn implies an effective bound on ∫_Σ_tH^2 by the well-known Geroch monotonicity. Finally, (<ref>) follows from the monotonicity formula at t=log 2.
For a 3-manifold M with π_2(M)≠0 that admits PSC metrics, one can define the topological invariant
w(M)=sup{π_2(M,g)·min_M R_g: g is a PSC metric on M}.
The theorem of Bray-Brendle-Neves is then interpreted as w(S^2× S^1)=8π, and Theorem <ref> implies w(M)≤ c if M is not covered by S^2× S^1. Our proof of Theorem <ref> does not give the sharp value of w(M), and the existence of extremal metrics is not known. From the definition, it is clear that w(M) is non-decreasing when passing to finite covering spaces. Beyond these facts, little is known about this new invariant. An interesting question in this direction is how w(M) behaves when the prime decomposition of M contains a lens space factor L(p,q).
Acknowledgements. The author appreciates Hubert Bray, Demetre Kazaras and Marcus Khuri for many insightful discussions and helpful comments on the previous versions of this paper.
§ PRELIMINARIES
The main technique that we use is Huisken and Ilmanen's weak formulation of the inverse mean curvature flow (IMCF) <cit.>. Here we only include lemmas that are necessary for proving Theorem <ref>, and we refer the reader to <cit.> for rather complete introductions of this rich subject.
The classical IMCF is a parabolic flow that evolves a hypersurface by the inverse of its mean curvature:
Σ_t/ t=1/Hν.
The IMCF can be equivalently described using a parametrizing function. Let u be a function defined by requiring Σ_t={u=t}. Then we can show that u is a solution to the degenerate-elliptic equation
÷( u/| u|)=| u|.
For a star-shaped strictly mean convex hypersurface in ^n, the classical IMCF exists for all time <cit.>. However, for general hypersurfaces and on general Riemannian manifolds the IMCF may form singularity in finite time. In <cit.> Huisken and Ilmanen developed a weak formulation of IMCF by finding suitable variational principles for (<ref>). Heuristically speaking, the weak IMCF includes (<ref>) and additionally a jumping behavior, in which Σ_t immediately “jumps” to its largest area-minimizing envelope. The initial value problem for the weak IMCF is defined as follows.
Let M be a connected, complete, non-compact manifold and E_0⊂ M be a bounded smooth domain. A locally Lipschitz function u on M is called a weak solution of the IMCF with initial condition E_0, if
(1) E_0={u<0},
(2) For any compact set K⊂⊂ M∖E̅_̅0̅ and any locally Lipschitz function v such that {u v}⊂ K, we have
∫_K| u|+u| u|≤∫_K| v|+v| u|.
The weak solution u is called proper if E_t:={u<t} is bounded for all t≥0.
We record the following existence theorem and properties of weak solutions.
Let M be a connected, complete, non-compact 3-manifold that supports a uniform Euclidean type isoperimetric inequality |^*E|≥ c|E|^2/3. Then starting with any smooth bounded domain E_0⊂ M, there exists a unique proper weak solution of IMCF in the sense of Definition <ref>.
Let M be a connected, complete, non-compact 3-manifold, and u is a proper weak solution of IMCF on M with initial condition E_0 (which is a bounded smooth domain). Assume that E_0 is connected and is outward minimizing, in the sense that any bounded domain E⊃ E_0 satisfies |^*E|≥| E_0|. Then u satisfies the following properties:
(1) For all t≥0, Σ_t:= E_t is a C^1,α hypersurface. The weak mean curvature of Σ_t (denoted by H) exists for all t and is equal to | u| almost everywhere for almost every t.
(2) The area of Σ_t is exponentially growing: |Σ_t|=e^t|Σ_0|.
(3) For all t>0, E_t is connected and M∖ E_t has no compact connected component.
(4) Let R be the scalar curvature of M. The function H(t)=∫_Σ_tH^2 satisfies
H(t_2) ≤ H(t_1)-∫_t_1^t_2∫_Σ_s[2|_Σ_sH|^2/H^2+1/2|A|^2] ds
+∫_t_1^t_2[4πχ(Σ_s)-∫_Σ_sR-1/2H(s)] ds, ∀ 0≤ t_1<t_2,
where A denotes the traceless weak second fundamental form of Σ_s.
The four properties in the theorem follow from Theorem 1.3, equation (1.12), Lemma 1.6, the proof of Lemma 4.2, and Theorem 5.7 in <cit.>. Properties (2) and (4) are not hard to verify for smooth flow (<ref>). Property (4) is usually interpreted as the monotonicity of Hawking mass, which is a crucial step in Huisken and Ilmanen's proof of the Riemannian Penrose inequality. Note that H(t) is not necessarily continuous along the weak flow, thus property (4) must be expressed in integral form.
The following topological statement is a direct consequence of (3).
For every t>0, the map H_2(Σ_t,)→ H_2(M∖ E_0,) induced by embedding is injective.
Denote M_t=E_t∖ E_0 and M'_t=M∖ E_t. By property (3) we have H_3(M_t,Σ_t,)=H_3(M'_t,Σ_t,)=0. Hence
H_3(M∖ E_0,Σ_t,)≅ H_3(M_t,Σ_t,)⊕ H_3(M'_t,Σ_t,)=0
by excision theorem. The corollary now follows from the long exact sequence of relative homology.
Applying a (backward) Gronwall argument to (<ref>), we obtain the following:
Let t>0. If R≥λ on E_t∖ E_0, then
H(t)≤ H(0)e^-t/2+4π e^-t/2∫_0^t e^s/2χ(Σ_s) ds-2/3λ|Σ_0|(e^t-e^-t/2).
§ PROOF OF THE MAIN THEOREM
We first prove Theorem <ref>. Since the statement is vacuous when the scalar curvature of M is non-positive somewhere, we may assume that M has positive scalar curvature. It is known that closed manifolds with positive scalar curvature are diffeomorphic to connected sums of spherical space forms and S^2× S^1, see <cit.>. Thus we write
M=(S^2× S^1)#⋯#(S^2× S^1)#(S^3/Γ_1)#⋯#(S^3/Γ_k).
Based on this classification, there are three possible cases of the growth of π_1(M):
* M is a spherical space form, for which π_1(M) is finite.
* M is either S^2× S^1 or ℝP^3#ℝP^3, for which π_1(M) is virtually cyclic.
* All the remaining cases, where π_1(M) has exponential growth.
Therefore, the topological condition in Theorem <ref> is equivalent to case (3).
Assume that min_M R=λ>0, and denote A_0=𝒜(π_2(M)), the infimum of areas of spheres representing nontrivial elements of π_2(M). Taking a double cover if necessary, we assume that M has no ℝP^3 factors. Then by a theorem of Meeks and Yau <cit.>, there exists an embedded sphere Σ⊂ M with |Σ|=A_0. Let M̃ be the universal cover of M, and let Σ̃ be an isometric lift of Σ onto M̃. Note that Σ̃ must be separating. Otherwise, there is a loop in M̃ that intersects Σ̃ exactly once, violating π_1(M̃)=0. Denote the two connected components of M̃∖Σ̃ by M̃_1 and M̃_2.
Claim 1. Both M̃_1 and M̃_2 are non-compact.
It follows from van Kampen's theorem that π_1(M̃_1)=0. If M̃_1 is compact, then by relative Poincaré duality and the long exact sequence of relative cohomology, we have H_2(M̃_1)=H^1(M̃_1,∂M̃_1)=H^1(M̃_1)=0. Hence π_2(M̃_1)=0 by the Hurewicz isomorphism. This contradicts [Σ̃]≠0∈π_2(M̃).
Since π_1(M) has exponential growth, M̃ satisfies a uniform Euclidean-type Sobolev inequality, by a theorem of Coulhon and Saloff-Coste <cit.>. Let E'_0⊂M̃_1 be a small collar neighborhood of Σ̃ in M̃_1. Consider a weak solution u' of IMCF on M̃ with initial condition E'_0, whose existence is given by Theorem <ref>. Consider a new Riemannian manifold N=M̃_2∪_Σ̃ D (D denotes a 3-disk), where we smoothly extend the metric on M̃_2 into D. Let u be a function on N that coincides with u' on M̃_2 and is negative in D. It follows from Definition <ref> that u is a proper weak solution of IMCF on N with initial condition E_0=D.
Claim 2. E_0 is outward minimizing in N.
By a result of the author <cit.> or Fogagnolo-Mazzieri <cit.>, E'_0 admits a bounded least area envelope F'_0 in M̃, which means that F'_0 minimizes the perimeter among all bounded sets F'⊂M̃ with F'⊃ E'_0. It follows directly that F_0:=(F'_0∩M̃_2)∪ E_0 is a least area envelope of E_0 in N.
By the strong maximum principle, F_0 either coincides with E_0 or has stable minimal boundary in N∖E̅_0≅M̃_2. The former case directly implies the claim. Suppose that the latter holds. Then [∂F_0]=[Σ̃]≠0 viewed as elements in H_2(M̃), by the Hurewicz theorem. Since M̃ has positive scalar curvature, all connected components of ∂F_0 are spherical. Hence at least one component of ∂F_0 is nonzero in π_2(M̃), and we have |∂F_0|≥𝒜(π_2(M̃))=𝒜(π_2(M))=A_0=|∂E_0|. This shows that E_0 is outward minimizing.
Claim 2 and Theorem <ref> (2) then imply that |Σ_t|=e^tA_0 for all t>0. Define
T=inf{t>0: Σ_t has at least two spherical connected components}.
Claim 3. Each spherical component of Σ_t (t>0) has area at least A_0.
Let Σ' be a spherical component of Σ_t, which we may also view as a surface in M̃_2 or in M̃. By Corollary <ref>, [Σ'] is nonzero in H_2(N∖ E_0)=H_2(M̃_2). By the fact that M̃_1 is noncompact (see Claim 1) and the long exact sequence of relative homology
0=H_3(M̃,M̃_2)→ H_2(M̃_2)→ H_2(M̃),
[Σ'] is nonzero in H_2(M̃). Since Σ' is spherical, it is a nonzero element in π_2(M̃). Hence |Σ'|≥𝒜(π_2(M̃))=𝒜(π_2(M))=A_0.
By the definition of T, there is a sequence t_i→0 such that Σ_T+t_i has more than one spherical component. Therefore e^T+t_iA_0=|Σ_T+t_i|≥ 2A_0. Letting i→∞ it follows that T≥log2. Therefore, χ(Σ_t)≤ 2 for all 0≤ t<log 2. Finally, we utilize the monotonicity formula (<ref>), noting that H(0)=0 since Σ_0=Σ̃ is a minimal surface, to obtain
0 ≤∫_Σ_tH^2
≤ 16π(1-e^-t/2)-2/3λ A_0(e^t-e^-t/2), ∀ t≤log2.
Taking t=log 2 we have
0 ≤ 16π(1-1/√(2))-2/3λ A_0(2-1/√(2)),
which implies (<ref>).
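Rearranging the last display gives one explicit admissible value; we do not restate the constant c of (<ref>) here, so the number below is only the bound produced by this particular computation at t=log 2:
λ A_0 ≤ 16π(1-1/√(2)) / (2/3(2-1/√(2))) = 24π(√(2)-1)/(2√(2)-1) = 24π(3-√(2))/7 ≈ 17.1.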
The local version of Theorem <ref>, motivated by the decomposition argument in the introduction, is stated as follows.
Let M be homeomorphic to the 3-sphere with k disks removed (k≥3). Suppose g is a metric on M such that all components of ∂M are stable minimal surfaces, and there is no embedded stable minimal surface in the interior of M. Let A_0 be the minimum of the areas of the components of ∂M. Then we have A_0·min_M R≤ c, where c is the constant in (<ref>).
Suppose min_M R=λ>0. Let Σ_i (1≤ i≤ k) be the connected components of ∂M. Denote by h_i=g|_Σ_i the restricted metrics on Σ_i, and choose φ_i>0 as the first eigenfunctions of the stability operator of Σ_i. It is well known that the φ_i satisfy
Δ_h_iφ_i≤(K_h_i-1/2λ)φ_i,
where K denotes Gauss curvature. For T a sufficiently large constant to be chosen, let P_i be diffeomorphic to the cylinders Σ_i×[-T,T] and equipped with the warped product metrics g_i=h_i+φ_i^2dt^2 (-T≤ t≤ T). The scalar curvature of P_i is calculated as
R_g_i=2(K_h_i-Δ_h_iφ_i/φ_i)≥λ.
Let M^± be identical copies of M, whose metrics are still denoted by g. Let
N=M^-∪_∂M^-(⊔_i=1^k P_i)∪_∂M^+M^+,
equipped with the C^0 metric g_N that agrees with g on M^± and with g_i on P_i. Topologically, N is a connected sum of k-1 (≥2) copies of S^2× S^1.
Given any 0<ϵ<1/100, we have the following claims:
Claim 1. There is a sufficiently large T for which the following holds. Suppose g'_N is any smooth metric on N that agrees with g_i in each P_i and approximates g_N in the sense that ||g'_N-g_N||_C^0(g_N)≤ϵ^2. Then 𝒜(π_2(N,g'_N))≥(1-ϵ)A_0.
Claim 2. There exists a smooth metric g'_N that satisfies the hypotheses of Claim 1, and moreover has R_g'_N≥λ-ϵ.
The smoothness of g'_N on N makes sense, as a set of coordinate charts across ∂M^± will be specified in the proof below. With the two claims, it follows from Theorem <ref> that (λ-ϵ)·(1-ϵ)A_0≤ c, which proves our theorem after letting ϵ→0.
The reason to glue P_i between M^+ and M^- is that for sufficiently large T a g'_N-area minimizer in π_2(N) does not go across any P_i. The justification of this fact relies on the classical monotonicity formula <cit.>. We claim there exists a small constant c_0 such that: if a closed g'_N-minimal surface S⊂ N intersects Σ_i×(t-1,t+1)⊂ P_i for some t∈(-T+2,T-2) and some i, then |S∩(Σ_i×(t-2,t+2))|_g_i≥ c_0. To see this, we first choose a small r_0 such that B_g_i(x,r_0)⊂Σ_i×(t-2,t+2) for any x∈Σ_i×(t-1,t+1). By the monotonicity formula, there exist r_1≤ r_0 and c_1>0 such that |S∩ B(x,r_1)|_g_i≥ c_1r_1^2 whenever x∈ S∩(Σ_i×(t-1,t+1)). The constants c_1,r_0,r_1 depend only on {h_i} and {φ_i}. Now c_0=c_1r_1^2 satisfies the desired property.
Choose T=16A_0/c_0. Let S be a g'_N-area minimizer in π_2(N), which is an embedded sphere. Suppose that S has nonempty intersection with the interior of some P_i. By the strong maximum principle, S either intersects Σ_i×{t}⊂ P_i for every -T<t<T, or coincides with some slice Σ_i×{t}. The latter case implies |S|_g'_N=|Σ_i|_h_i≥ A_0, and thus implies Claim 1. In the former case, we apply the above consequence of the monotonicity formula for each slice Σ_i×[4j,4j+4], and obtain
|S|_g'_N≥|S∩(Σ_i×(-T,T))|_g_i≥ c_0·2⌊T/4⌋>2A_0,
which contradicts the minimality of S, since at least one of the Σ_i has area A_0.
By the connectedness of S, we can now assume without loss of generality that S⊂ M^-. Since M^- is topologically a sphere with disks removed, and S does not bound a 3-ball in M^-, it follows that S represents a nonzero element in H_2(M^-). Now let S' be a g-area minimizer in the homology class of S in H_2(M^-). Since there is no embedded stable minimal surface in the interior of M^-, S' must be a union of components of ∂M^-. Hence
A_0 ≤ |S'|_g
≤ |S|_g
≤ (1+ϵ^2)|S|_g'_N
= (1+ϵ^2)𝒜(π_2(N,g'_N)),
which proves the claim.
The proof involves smoothing g_N near ∂M^± while preserving scalar curvature lower bounds. To perform the smoothing, we invoke the following theorem due to Brendle, Marques and Neves <cit.>. Note that we can take ϵ arbitrarily small in the construction.
Let M be a compact manifold with boundary ∂M, and let g,g̃ be two metrics such that g-g̃=0 at each point on ∂M. Moreover, assume that H_g-H_g̃>0 at each point on ∂M. Then given any number ϵ>0 and any neighborhood U of ∂M, there is a smooth metric ĝ on M with the following properties:
(1) the scalar curvature of ĝ satisfies R_ĝ(x)≥min{R_g(x),R_g̃(x)}-ϵ for every x∈ M.
(2) ĝ agrees with g outside U.
(3) ĝ agrees with g̃ in some neighborhood of ∂M.
(4) ||ĝ-g||_C^0(g)≤ϵ^3.
Item (4) is not included in the original statement but follows directly from the proof. To achieve this, we choose the tensor T to vanish outside a sufficiently small neighborhood of ∂M in <cit.>, then choose the parameter λ to be sufficiently large in <cit.>.
We first modify g so that ∂M^± is strictly mean convex. Let ϵ>0 be sufficiently small. We find a function u∈ C^∞(M) such that ||u||_C^2(g)≤ϵ^4, u|_∂M=0 and ∂u/∂ν>0 on ∂M. Set g'=e^2ug. The last condition ensures that ∂M is mean convex with respect to g'. We have ||g'-g||_C^0(g)≤ϵ^3 and R_g'≥λ-ϵ, for ϵ sufficiently small.
To apply the theorem, we construct a metric g̃ that extends g_i smoothly into the interior of M^±. We slightly enlarge P_i to the cylinders Q_i=Σ_i×(-T-ϵ,T+ϵ), with metrics still of warped product form g_i=h_i+φ_i^2dt^2. Below we perform the construction for M^-; the one for M^+ is the same by symmetry. Let Φ_i:Σ_i×(-T-ϵ,-T]→ M^- be a regular smooth embedding, such that Φ_i maps Σ_i×{-T} identically to Σ_i⊂∂M^-, and Φ_i^*g'=g_i on Σ_i×{-T}. Such a map can be constructed using normal exponential maps. Let V_i⊂ M^- be the image of Φ_i. Thus, the metrics g̃_i=(Φ_i^-1)^*g_i are defined in V_i and coincide with g' on Σ_i⊂∂M^-. Moreover, Σ_i is totally geodesic with respect to g̃_i. Let g̃ be an arbitrary metric on M^- that is equal to g̃_i in a smaller neighborhood U_i⊂ V_i of Σ_i. Note that R_g̃=R_g̃_i=(Φ_i^-1)^*R_g_i≥λ in U_i. We can then apply Theorem <ref> with the neighborhood ⋃_i U_i⊃∂M^- and with g' in place of g. We obtain a new metric ĝ on M^-, such that ||ĝ-g'||_C^0(g')≤ϵ^3 and R_ĝ≥λ-2ϵ. Finally, set g'_N to be equal to ĝ on M^- (and equal to the similarly constructed metric on M^+), and equal to g_i on P_i. Note that N can be expressed as
N=(M^-⊔(⋃ Q_i)⊔ M^+)/∼,
where ∼ is the equivalence relation x∼Φ_i^-1(x), ∀ x∈ V_i, with a similar relation imposed on the side of M^+. This expression naturally gives coordinate charts across ∂M^±, under which g'_N is smooth. This completes the proof.
99
Bray_2001
H. Bray,
Proof of the Riemannian Penrose inequality using the positive mass theorem,
J. Differential Geom. 59 (2001), no. 2, 177–267.
Bray-Brendle-Neves_2010
H. Bray, S. Brendle, A. Neves,
Rigidity of area-minimizing two-spheres in three-manifolds,
Comm. Anal. Geom. 18 (2010), no. 4, 821–830.
Bray-Brendle-Eichmair-Neves_2010
H. Bray, S. Brendle, M. Eichmair, A. Neves,
Area-minimizing projective planes in 3-manifolds,
Comm. Pure Appl. Math. 63 (2010), no. 9, 1237–1247.
Brendle-Marques-Neves_2010
S. Brendle, F. C. Marques, A. Neves,
Deformations of the hemisphere that increase scalar curvature,
Invent. Math. 185 (2011), no. 1, 175–197.
Coulhon-Saloff-Coste_1993
T. Coulhon, L. Saloff-Coste,
Isopérimétrie pour les groupes et les variétés,
Rev. Mat. Iberoamericana 9 (1993), no. 2, 293–314.
Fogagnolo-Mazzieri_2022
M. Fogagnolo, L. Mazzieri,
Minimising hulls, p-capacity and isoperimetric inequality on complete Riemannian manifolds,
J. Funct. Anal. 283 (2022), no. 9, Paper No. 109638, 49 pp.
Gerhardt_1991
C. Gerhardt,
Flow of nonconvex hypersurfaces into spheres,
J. Differential Geom. 32 (1990), no. 1, 299–314.
Gromov_2021_four_lectures
M. Gromov,
Four lectures on scalar curvature,
2020, arXiv:1908.10612, https://arxiv.org/abs/1908.10612.
Gromov-Lawson_1983
M. Gromov, H. B. Lawson, Jr.,
Positive scalar curvature and the Dirac operator on complete Riemannian manifolds,
Inst. Hautes Études Sci. Publ. Math. No. 58 (1983), 83-196.
Huisken-Ilmanen_2001
G. Huisken, T. Ilmanen,
The inverse mean curvature flow and the Riemannian Penrose inequality,
J. Differential Geom. 59 (2001), no. 3, 353–437.
Lee_rel
D. A. Lee,
Geometric relativity,
Graduate Studies in Mathematics, 201. American Mathematical Society, Providence, RI, 2019. xii+361 pp.
Liokumovich-Maximo_2023
Y. Liokumovich, D. Maximo,
Waist Inequality for 3-Manifolds with Positive Scalar Curvature,
Perspectives in Scalar Curvature II (2023), 799-831.
Mari-Rigoli-Setti_2022
L. Mari, M. Rigoli, A. Setti,
On the 1/H-flow by p-Laplace approximation: new estimates via fake distances under Ricci lower bounds,
Amer. J. Math. 144 (2022), no. 3, 779–849.
Meeks-Yau_1980
W. H. Meeks III, S.-T. Yau,
Topology of three-dimensional manifolds and the embedding problems in minimal surface theory,
Ann. of Math. 112 (1980), no. 3, 441–484.
Sormani_2023
C. Sormani,
Conjectures on convergence and scalar curvature,
Perspectives in scalar curvature. Vol. 2, 645–722, World Sci. Publ., Hackensack, NJ, 2023.
Urbas_1990
J. Urbas,
On the expansion of starshaped hypersurfaces by symmetric functions of their principal curvatures,
Math. Z. 205 (1990), no. 3, 355–372.
Xu_2023
K. Xu,
Isoperimetry and the properness of weak inverse mean curvature flow,
2023, arXiv:2307.00725, https://arxiv.org/abs/2307.00725.
Zhu_2020_arxiv
J. Zhu,
Rigidity results for complete manifolds with nonnegative scalar curvature,
2020, arXiv:2008.07028, https://arxiv.org/abs/2008.07028.
Zhu_2023
J. Zhu,
Riemannian Penrose inequality without horizon in dimension three,
2023, arXiv:2304.01769, https://arxiv.org/abs/2304.01769.
Kai Xu,
Department of Mathematics, Duke University, Durham, NC 27708, USA,
Email address: [email protected].
|
http://arxiv.org/abs/2307.02238v1 | 20230705122758
Source Identification: A Self-Supervision Task for Dense Prediction
["Shuai Chen", "Subhradeep Kayal", "Marleen de Bruijne"]
cs.CV | ["cs.CV"]
Shuai Chen^1,*, Subhradeep Kayal^1,*, Marleen de Bruijne^1,2
*: S. Chen and S. Kayal contributed equally.
1: Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands.
2: Machine Learning Section, Department of Computer Science, University of Copenhagen, DK-2110 Copenhagen, Denmark.
August 1, 2023
The paradigm of self-supervision focuses on representation learning from raw data without the need of labor-consuming annotations, which is the main bottleneck of current data-driven methods. Self-supervision tasks are often used to pre-train a neural network with a large amount of unlabeled data and extract generic features of the dataset. The learned model is likely to contain useful information which can be transferred to the downstream main task and improve performance compared to random parameter initialization. In this paper, we propose a new self-supervision task called source identification (SI), which is inspired by the classic blind source separation problem. Synthetic images are generated by fusing multiple source images and the network's task is to reconstruct the original images, given the fused images. A proper understanding of the image content is required to successfully solve the task. We validate our method on two medical image segmentation tasks: brain tumor segmentation and white matter hyperintensities segmentation. The results show that the proposed SI task outperforms traditional self-supervision tasks for dense predictions including inpainting, pixel shuffling, intensity shift, and super-resolution. Among variations of the SI task fusing images of different types, fusing images from different patients performs best.
92C55, 68U10
Self-supervised learning, Dense Prediction, Image Segmentation, Blind Source Separation, Medical Imaging
§ INTRODUCTION
The success of deep learning, and in particular convolutional neural networks (CNNs), may be partially attributed to the exponential increase in the amount of available annotated data. However, in highly specialized domains such as medical image segmentation, it is much harder to acquire precise and dense annotations. Self-supervision is one research direction that enables the network to learn from images themselves without requiring labor-consuming annotations, where the learned features might be useful for the downstream tasks, such as classification and segmentation.
In general, self-supervised learning refers to a collection of approaches that deliberately withhold information in the original data and task a neural network to predict the missing information from the existing incomplete information. In doing so, the network is encouraged to learn general-purpose features which have been found to transfer well to downstream tasks <cit.>. The self-supervision pipeline often employs a pre-train and fine-tune strategy. The first step is to pre-train a CNN on a large volume of unannotated samples using a manually designed proxy task, in which the CNN explores and learns generic features of the data itself. The learned features may contain meaningful information of the image data, e.g., intensity distribution, spatial coherence, and anatomical knowledge in medical imaging, etc., depending on how the proxy task is designed. The second step is to fine-tune this pre-trained network on the target (main) downstream task that we are more interested in, which usually has a small set of annotated data in practice. We expect that by exploiting unannotated data and restarting the training from a set of rich pre-trained features, a more robust model on the main task can be trained.
In this paper, we propose a novel self-supervision task called Source Identification (SI), which is inspired by the classic Blind Source Separation (BSS) problem. The proposed task is able to train a dense prediction network in a self-supervised manner using unlabeled data.
Contributions:
1. We propose a novel self-supervision task, SI, wherein a neural network is (pre-)trained to identify one image (source) from mixtures of images. This way, both encoder and decoder are trained and the network is encouraged to learn not only local features but also global semantic features to identify and separate the target source signal. To the best of our knowledge, this is the first BSS-like self-supervised method for deep neural networks.
2. We investigate the ill-posed source identification problem and show in which settings it can be solved by a neural network. The proposed SI method provides a straightforward way to avoid task ambiguity.
3. We conduct extensive experiments on public datasets for two medical image segmentation applications: brain tumor segmentation and white matter hyperintensities segmentation, both from brain MRI. We compare with various existing self-supervision tasks. The results show that the proposed SI method outperforms self-supervision baselines including inpainting, pixel shuffling, intensity shift, and super-resolution in segmentation accuracy in both applications.
§ RELATED WORK
§.§ Self-Supervision Tasks
Self-supervision is an active research direction in machine learning, permeating from computer vision to natural language processing <cit.>. In imaging, early self-supervision tasks could be grouped into two main categories: reconstruction based and context prediction based. For example, inpainting is a popular reconstruction based self-supervision task <cit.> where areas in an image are hidden and then reconstructed using a CNN. In a similar fashion, recolorization can be done by removing the color of an image and training a CNN to recover it <cit.>, and super-resolution by recovering the original resolution of an image from a downsampled image <cit.>. On the other hand, context prediction based tasks make the network learn relationships between parts of an image, such as choosing arbitrary tiles in an image and predicting their relative spatial locations <cit.>. An improved version of this method can be seen in <cit.>, where tiles were chosen, shuffled and the network was taught to identify the shuffle pattern, thereby forcing it to learn how the tiles make up the original image. Self-supervision has also been applied in medical imaging <cit.>, including inpainting <cit.> and puzzle solving by treating a 3D image as a shuffled Rubik's cube <cit.>.
All the above self-supervision tasks are designed to learn useful features from a single input image by recovering information withheld from the image itself. However, the rich information that discriminates one image from another is not explicitly considered. The source identification task in this paper aims to learn not only features that can identify each image but also features that can distinguish one image from other images within the dataset.
Our proposed task shares some similarities with the contemporary contrastive learning method <cit.>, which is also gaining popularity in medical imaging <cit.>. In contrastive learning, the neural network is tasked with recognizing the similarity or dissimilarity of a pair of images input to it, which can be categorized as a context prediction-based rather than reconstruction-based method. As an example, the state-of-the-art method known as SimCLR <cit.> works by drawing random samples from the original dataset, applying two augmentations (both sampled from the same family of augmentations) on the samples to create two sets of views. Then these views are passed through a CNN and a fully connected neural network layer to generate latent representations. Finally, these representations are used to train the network, such that the augmented views from the same image are pushed together and the augmented views from different images are repelled using a contrastive loss. This may encourage the latent features to be more compact and separated, which may provide additional regularization for optimizing the network. However, most contrastive learning approaches are aimed at the downstream task of classification, pretraining only the encoder portion of the network. Thus, in this paper, we focus on the comparison with reconstruction-based methods that are more relevant to our proposed source identification task, as they pre-train the entire network and focus on dense prediction downstream tasks.
§.§ Blind Source Separation
Blind source separation (BSS), also known as signal separation, is the classic problem of identifying a set of source signals from an observed mixed signal. One example of BSS is the cocktail party problem, where a number of people are talking simultaneously in a noisy environment (a cocktail party) and a listener is trying to identify and separate a certain individual source of voice from the discussion. The human brain can handle this sort of auditory source separation problem very well, but it is a non-trivial problem in digital signal processing. Traditional methods such as independent component analysis (ICA) variants are proposed to tackle the BSS problem <cit.>. In the deep learning era, convolutional neural networks have been used to solve BSS problems in signal processing applications such as speech recognition <cit.> and target instrument separation <cit.>. These works typically employ an encoder network to learn the embeddings of the observed signals and then use traditional techniques like k-means or spectral clustering to cluster the embeddings according to the number of sources. The clustering can also be done by a deep neural network <cit.>. This paper introduces a BSS-like self-supervised task on image data, in which a neural network is trained that aims to identify and restore the source image content in mixtures with multiple images.
§.§ Relation to Denoising
A related task to the proposed source identification is denoising <cit.> which is used to identify and remove undesired imaging artifacts. In denoising, the image and the noise are regarded as two different sources and a model is trained to separate them. The statistical properties of the signal and the noise are very different, unlike in our case, where a mixed image is constructed from images belonging to the same dataset. A denoising network is likely to learn more local features to distinguish noise from clean images rather than high-level semantic features of the image content. Different from the denoising task, the proposed source identification approach tries to separate one image from a fused image with other images rather than with noise. This is a more difficult task that is more likely to capture useful semantic features from the dataset.
§.§ Relation to Mixup
Mixup was first proposed as a data augmentation strategy while training CNNs in a general setting <cit.>, and has been validated to work well in medical image segmentation as well <cit.>. Mixup, in a segmentation setting, works by randomly selecting an image pair from the training data and generating a weighted combination of the input images as well as the target segmentation maps. These generated images are then fed to a CNN during training, in addition to any other data augmentation strategies that may be suitable.
The similarity of our work with Mixup is in the way our mixed images are made, which, in our case, the network learns to identify sources from. However, our approach is a self-supervision strategy, with the aim of teaching the network useful features during pre-training, whereas Mixup is a data augmentation method. Nevertheless, in order to compare the two, we also include a set of experiments with Mixup as an additional data augmentation strategy.
§ METHODS
In Section <ref>, we provide a general definition of source identification. In Section <ref>, we discuss whether and when the source identification task is solvable for a neural network. In Section <ref>, we describe how source identification can be used as a proxy task for a self-supervised network. Lastly, we describe four popular competing baseline self-supervision tasks that we compare to in this paper in Section <ref>.
§.§ Definition of The Source Identification Problem
Consider domain D, in which each source signal can be distinguished from others, e.g., each signal is an image from a different patient in a medical imaging dataset.
Multiple (N) source signals, 𝐒_𝐍 = (s_1...s_N)^T, sampled from D are linearly `mixed' to produce M mixtures, 𝐗_𝐌 = (x_1...x_M)^T, using an M × N matrix 𝐖:
𝐗_𝐌 = 𝐖𝐒_𝐍
The blind source separation (BSS) problem is to reconstruct the individual signals that constitute the mixtures without knowing the mixing matrix 𝐖 or the original signals 𝐒_𝐍.
In the context of employing neural networks for this task, in every training batch we can create M̃ mixtures from Ñ samples, as allowed by the chosen batch size. Typically, M̃ ≤ M and Ñ ≤ N. For example, two randomly sampled signals, s_1 and s_2, can result in a signal mixture, x, created by a linear combination:
x=ws_1+(1-w)s_2, w∈ [0,1]
where the weight, w, is a scalar sampled uniformly between 0 and 1. Many mixed signals created in the way above would make up a batch to train the neural network on.
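As an illustration, this mixing step takes only a few lines; the following is a minimal NumPy sketch in which the array names, shapes, and random pairing are our own assumptions rather than details specified in the text.

import numpy as np

def mix_pair(s1, s2, rng):
    """Linearly mix two source images: x = w*s1 + (1-w)*s2 with w ~ U(0, 1)."""
    w = rng.uniform(0.0, 1.0)
    return w * s1 + (1.0 - w) * s2, w

rng = np.random.default_rng(0)
sources = rng.normal(size=(10, 200, 200))          # stand-in for 2D slices of the dataset
pairs = rng.choice(len(sources), size=(8, 2))       # one random pair per batch element
batch = np.stack([mix_pair(sources[i], sources[j], rng)[0] for i, j in pairs])
print(batch.shape)                                  # (8, 200, 200)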
To learn to separate the signals, we can train a multi-channel neural network model, f^M̃_Ñ (·;θ) parameterized by θ and with M̃ input and Ñ output channels, to learn the optimal parameters by minimizing the loss, ℒ(θ):
ℒ(θ)=1/B∑_ b ∈ 1...Bℓ(𝐒^b_Ñ, f^M̃_Ñ(𝐗^b_M̃; θ))
where B is the batch-size, 𝐒_Ñ = (s_1...s_Ñ) is a collection of Ñ randomly picked sources such that 𝐒^b_Ñ is the b^th collection in the batch, and similarly 𝐗_M̃ = (x_1...x_M̃) is a collection of `mixtures' created using the process described in Equation (<ref>) applied to the sources in 𝐒_Ñ. In essence, the multi-channel network, f^M̃_Ñ(·;θ), consumes a single 𝐗^b_M̃ as input to produce 𝐒^b_Ñ as output, on which the loss is calculated. The loss above is depicted for one batch of one iteration through all of the data available.
The function ℓ(·,·) is composed of the L_1 and L_2 norm of the difference between the original source signal and the corresponding model output:
ℓ(𝐒_Ñ, f^M̃_Ñ(𝐗_M̃)) = 1/Ñ∑_ n ∈ 1...Ñ[ |s_n - f^M̃_n(𝐗_M̃)|_1 + ||s_n - f^M̃_n(𝐗_M̃)||_2 ]
where f^M̃_n(𝐗_M̃) is the output image from the n^th channel of the neural network acting upon the input 𝐗_M̃ described in the previous paragraph.
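In a PyTorch-style implementation this loss is a channel-averaged sum of L1 and L2 terms; the sketch below assumes the network output and the target sources are stored as tensors of shape (B, Ñ, H, W), which is our assumption rather than a detail given in the text.

import torch

def si_recon_loss(pred, target):
    # pred, target: tensors of shape (B, N_out, H, W)
    diff = pred - target
    l1 = diff.abs().flatten(2).sum(dim=-1)            # per-channel L1 norm
    l2 = diff.pow(2).flatten(2).sum(dim=-1).sqrt()    # per-channel L2 norm
    return (l1 + l2).mean()                           # mean over channels and batch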
In the absence of any constraints or assumptions about the properties of the source signals and mixtures, the BSS problem may be ill-posed. For example, if we observe only a few mixtures for the number of source signals to be reconstructed (such that M̃ <= Ñ), and/or the overall statistical properties of the signals are not very different, then the mixed signals may not be separable by a network. We show this in the next section with a simple example, and discuss how it motivates our proposed technique.
§.§ Is Source Identification Solvable?
In this small experiment, for a given medical imaging dataset, D_I, we randomly sample two source signals (images), s_1 and s_2, for every training iteration and linearly `mix' them. The mixed signal is fed as input to a neural network while s_1 is set as the source to be reconstructed by the network. As can be observed, we have M̃ = Ñ (=1) in this case, and the sources are sampled from the same distribution, which renders the task ill-posed.
One way to make the reconstruction task less ambiguous would be to sample signal s_2 from a different domain than s_1, for instance by adding noise to s_2, as follows:
s'_2=(1-λ)s_2+λs_η, λ∈ [0,1]
where (s_η)_k∼𝒩(0,1) for the k^th voxel of s_η, such that when λ=1, s'_2 is pure Gaussian noise, which belongs to an obviously different domain than the imaging domain of dataset D_I. The new mixture, x, can be created by applying Equation (<ref>) on s_1 and s'_2, and the loss to be minimized by the neural network can be calculated using Equations (<ref>) and (<ref>) by setting M̃ = Ñ =1.
The results of a neural network (we use a 2D UNet <cit.> here) optimized to minimize the loss for reconstructing s_1 from the mixture of s_1 and s'_2, for various values of λ, are visualized in Figure <ref>. It can be observed that when λ is small (0.1), the output is an average of the two images s_1 and s_2 and the model fails to separate s_1 from the mixture, x. When λ gradually increases (to 0.9), s_1 becomes clearer and better separated.
As this experiment illustrates, the network cannot separate sources when they are sampled from the same distribution and mixtures are made arbitrarily. To impose extra constraints and increase separability, one simple way is to sample sources from different domains, for instance an MRI scan and Gaussian noise. However, the case λ=1 is similar to a self-supervised denoising task where the model may focus on learning the differences between the image domain and the noise domain. These learned features may contain trivial local patterns and may be less likely to provide useful semantic features for downstream tasks like segmentation. The technique that we propose next relies on creating more mixtures than samples to be extracted, such that M̃ > Ñ, by always having a fixed source in all of the mixtures created, such that the network can get extra information to help it identify this desired (fixed) source. We name the former variant Denoising SI (DSI) and the latter variants Cross-patients SI (CSI) and Within-patient SI (WSI), depending on which sources are mixed. We describe these variants in detail in the forthcoming sections.
§.§ Proposed Source Identification Task
In this paper, we propose a simple variation of the source identification task that resolves the ill-posedness described above. In this task, we sample sources such that one of them is present in every input mixture, and make it the only target output. This assumes the number of input mixtures, M̃, is set to two or larger. In the case of M̃=2 and Ñ=1, the proposed task would be to identify and separate the target signal, e.g., s_1, from two mixtures x_1 and x_2:
x_1=w_1s_1+(1-w_1)s_2, w_1 ∈ [0,1]
x_2=w_2s_1+(1-w_2)s_3, w_2 ∈ [0,1]
where w_1 and w_2 are scalars, sampled uniformly between 0 and 1.
Now, the loss function for this arrangement can be written (using Equation (<ref>)) as:
ℒ(θ)=1/B∑_ b ∈ 1...Bℓ(s^b_1, f^2_1((x^b_1, x^b_2);θ))
where the superscript b denotes samples from a particular batch. The input orderings (x^b_1,x^b_2) and (x^b_2,x^b_1) are equivalent, since the mixtures are statistically exchangeable due to the random sampling, and both share the same ground truth, in this case s^b_1. It should be noted that even though all source signals are sampled from the same domain D_I, this task is solvable for a neural network since the target source signal is specific and invariant, and the number of mixtures is larger than the number of signals to be separated. The workflow of the proposed task is shown in Figure <ref>.
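A minimal training-step sketch of this variant is given below; the UNet-style model, the triplet sampler, and all tensor shapes are placeholders we introduce for illustration, and only the mixing and target logic follows Equations (<ref>) and (<ref>).

import torch

def make_si_sample(s1, s2, s3):
    # s1, s2, s3: tensors of shape (C, H, W); s1 is the shared target source
    w1, w2 = torch.rand(2)
    x1 = w1 * s1 + (1 - w1) * s2
    x2 = w2 * s1 + (1 - w2) * s3
    return torch.cat([x1, x2], dim=0), s1   # input: (2C, H, W), target: (C, H, W)

def training_step(model, optimizer, batch_sources):
    # batch_sources: list of (s1, s2, s3) triplets, e.g. slices from different patients
    inputs, targets = zip(*(make_si_sample(*t) for t in batch_sources))
    x, y = torch.stack(inputs), torch.stack(targets)
    pred = model(x)
    diff = pred - y
    loss = (diff.abs().flatten(2).sum(-1) + diff.pow(2).flatten(2).sum(-1).sqrt()).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()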
It is worth mentioning that although it is trivial to solve the linear equations in Equation (<ref>) and obtain s_1, s_2, and s_3 analytically, it is non-trivial for the network when formulated as a learning problem. For example, with the use of data augmentation as well as uniformly sampling the mixing weights while training, the probability of getting the exact same inputs and outputs is very low, because of which it is unlikely that the network will learn to memorize patterns. This makes the proposed SI variant an efficient way to learn useful features from a dataset, without labor-consuming annotations, while avoiding ambiguity at the same time. Compared to introducing a different domain to solve the ambiguity problem in Section <ref>, the proposed method focuses on the same domain, which is more likely to yield useful features for the downstream tasks.
§ BASELINE SELF-SUPERVISION TASKS
We compare the proposed method to four widely used self-supervision tasks for dense prediction <cit.>. The first three tasks focus on the reconstruction and context-based prediction in an image, while the last task focuses on the intensity correction.
§.§ Inpainting
Image inpainting is the process of reconstructing the missing or damaged contents of an image, historically employed for restoring paintings and photographs <cit.>. Inpainting, as a self-supervision task, proceeds by intentionally masking selected areas within an image and a network must learn to recover the missing content.
In this paper, we implement inpainting self-supervision by overlaying an image I with a regular grid G of a fixed size and randomly masking selected grid cells. Formally, a selected grid cell of pixels, indicated as g(I), where g∈G, is transformed as:
g'(I) =
g(I) if 𝔹(γ) = 1,
0 otherwise.
where 𝔹(γ) follows a Bernoulli distribution with γ probability of being 1. γ is a hyperparameter ranging from 0 to 1. That means in any minibatch, a network only sees approximately γ random contents of the input images and tries to predict the rest of them. By masking grids in such a non-deterministic manner, we avoid cases where the network may focus on easy reconstructions and learn trivial features.
The resultant synthetic image I' is made up of all the transformed cells g'(I), thereby retaining approximately a γ fraction of the original image.
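A grid-masking operation of this kind can be sketched as follows; the grid size and the keep-probability name are our own choices, and we follow the convention of Equation (<ref>) that a cell is kept with probability γ.

import numpy as np

def grid_mask(image, cell=16, gamma=0.5, rng=None):
    """Zero out whole grid cells of a 2D image, keeping each cell with probability gamma."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            if rng.uniform() >= gamma:          # Bernoulli(gamma) == 0 -> mask the cell
                out[i:i + cell, j:j + cell] = 0.0
    return out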
§.§ Local Pixel Shuffling
Local pixel shuffling has been known to aid a network in learning about the local information within an image, without compromising the global structures <cit.>.
This task is similar to inpainting but with additional information on the distribution of intensities to inpaint. In this task, synthetic images are generated by randomly shuffling pixels within the selected grid cell, as shown in the following equation:
g'(I) =
P g(I) Q if 𝔹(γ) = 1,
g(I) otherwise.
where γ is a hyperparameter ranging from 0 to 1, similar to that in inpainting; P and Q are permutation matrices. A permutation matrix is a binary square matrix which permutes the rows of an arbitrary matrix when pre-multiplied with it, and permutes the columns when post-multiplied. Thus, in the first case of Equation (<ref>), a new grid cell of pixels is generated by shuffling both the rows and columns of the original grid cell.
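Expressed in code, shuffling a single cell amounts to applying random row and column permutations; the sketch below uses index permutations rather than explicit permutation matrices, which is equivalent (the helper names are ours).

import numpy as np

def shuffle_cell(cell, rng):
    """Shuffle the rows and columns of one grid cell (equivalent to P @ cell @ Q)."""
    row_perm = rng.permutation(cell.shape[0])
    col_perm = rng.permutation(cell.shape[1])
    return cell[row_perm][:, col_perm]

def local_pixel_shuffle(image, cell=16, gamma=0.5, rng=None):
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = image.shape
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            if rng.uniform() < gamma:           # shuffle this cell with probability gamma
                out[i:i + cell, j:j + cell] = shuffle_cell(out[i:i + cell, j:j + cell], rng)
    return out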
§.§ Super-resolution
Super-resolution can be implemented as a self-supervision task <cit.>, wherein a network is trained to deblur the low-resolution image. To create the low-resolution images from high-resolution ones for training, we blur the high-resolution images by transforming every grid cell by replacing all its values with that in the center of the grid:
g'(I) = g(I)_(w/2,h/2)
where w and h are the width and height of the grid cell g(I). In the training process, given a transformed image as input, the network learns to predict the high resolution version which is the original image before transformation.
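The low-resolution input described by Equation (<ref>) can be produced by the following sketch, which replaces every block by the value at its center; the cell size is an assumed hyperparameter.

import numpy as np

def block_downsample(image, cell=4):
    """Replace each cell x cell block by its center value, as in g'(I) = g(I)_(w/2, h/2)."""
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            ci = min(i + cell // 2, h - 1)
            cj = min(j + cell // 2, w - 1)
            out[i:i + cell, j:j + cell] = image[ci, cj]
    return out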
§.§ Non-linear Intensity Shift
The intensity shift mechanism was proposed by <cit.>, where each pixel value in the image is translated monotonically using a Bezier curve (denoted as the function B) <cit.>. In medical imaging, since the intensity values in an image usually correspond to the underlying anatomical details, this task can be used to encourage a network to learn useful anatomical features.
Given a voxel value v which is normalized between [0,1], end-points p_0, p_3, and two control-points p_1, p_2, the transformed value of the pixel is given by:
v' = B(v) = (1-v)^3p_0 + 3v(1-v)^2p_1
+ 3v^2(1-v)p_2 + v^3p_3
where the points p_0 to p_3 are sampled independently at every epoch from a continuous uniform distribution between 0 and 1.
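A vectorized sketch of this transformation, applied to an intensity image already normalized to [0,1], could look as follows; the control points are re-sampled once per epoch as stated above, and the function name is our own.

import numpy as np

def bezier_intensity_shift(image, rng=None):
    """Intensity remapping v -> B(v) using a cubic Bezier with random scalar control points."""
    rng = rng or np.random.default_rng()
    p0, p1, p2, p3 = rng.uniform(0.0, 1.0, size=4)
    v = np.clip(image, 0.0, 1.0)
    return ((1 - v) ** 3 * p0 + 3 * v * (1 - v) ** 2 * p1
            + 3 * v ** 2 * (1 - v) * p2 + v ** 3 * p3)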
§ EXPERIMENTS
§.§ Datasets
We apply our method on two medical imaging segmentation problems: brain tumor segmentation and white matter hyperintensities segmentation. Both datasets contain brain MR images.
§.§.§ BraTS Dataset
Multimodal Brain Tumor Segmentation Challenge 2018 <cit.> focuses on evaluating methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. There are in total 210 MR images acquired from different patients. Each MR image contains four modalities: pre-contrast T1-weighted, post-contrast T1-weighted, T2-weighted, and FLAIR. Three brain tumor classes are provided as manual annotations: 1) the necrotic and the non-enhancing tumor core (NCR&NET); 2) the peritumoral edema (ED); and 3) the enhancing tumor (ET). Since the evaluation classes of the challenge are the combined classes: whole tumor (NCR&NET+ED+ET), tumor core (NCR&NET+ET), and enhancing tumor (ET), we use these combined classes for the actual training. We randomly split the dataset in 1) 100 subjects for training the self-supervision tasks and the main segmentation task; 2) 10 subjects for validation; and 3) 100 subjects for testing. For each subject, we cropped/padded MR images into a constant size of 200×200×Z (Z is the number of axial slices of the image) where the main brain tissues are preserved. Following the preprocessing of nnUNet <cit.>, Gaussian normalization (subtracting mean and dividing by standard deviation) is applied on the brain foreground for each modality for each image individually.
§.§.§ WMH Dataset
The White Matter Hyperintensities (WMH) Segmentation Challenge <cit.> evaluates methods for the automatic segmentation of WMH in brain MR images. The provided MR images contain T1-weighted and FLAIR MR sequences and are acquired from 60 patients, where each group of 20 patients is from a different hospital. The manual segmentation of WMH lesions is also provided for each image. We randomly split the dataset into 1) 30 subjects for training the self-supervision tasks and the main segmentation task; 2) 10 subjects for validation; and 3) 20 subjects for testing. For each subject, we centre-cropped/padded MR images into a constant size of 200×200×Z, where Z is the number of axial slices in the 3D image. The cropping/padding was necessary as images from the different hospitals have slightly different sizes and it was convenient to have images of a constant size so that all of them are processed in the same way by the network. Additionally, the size of 200×200 covers the main brain tissue, which is what the network needs to consume for learning.
We use Gaussian normalization to normalize the intensities inside the brain foreground similar to the BraTS dataset.
§.§ Settings for The Proposed SI Task
There are two hyperparameters to adjust in the proposed task. The first one is the process of creating the `mixed' images. In Section <ref>, we considered the example where linear combinations of three signals s_1, s_2, and s_3 are used to create two mixed images. This can be generalized to combining Ñ of the N chosen images in each of the M̃ mixtures, using uniformly sampled weights such that for every mixture the sampled weights sum to 1. For example, when N = 5, Ñ = 3, M̃ = 2:
x_1=w_1s_1+w_2s_2+w_3s_3, w_1+w_2+w_3=1
x_2=w_4s_1+w_5s_4+w_6s_5, w_4+w_5+w_6=1
where all weights are scalars randomly sampled between 0 and 1 subject to the above conditions, and the common image s_1 is to be reconstructed. Recall that N is the total number of randomly chosen source images used to create the mixtures of a single training sample, while Ñ is the number of component images combined in each mixture, drawn from these N.
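For completeness, the generalized mixing with a shared target source can be sketched as below; normalizing uniform draws to unit sum is one straightforward way to satisfy the stated weight constraint, and the sampling scheme and function names are our assumptions.

import numpy as np

def make_mixtures(sources, n_per_mix=3, n_mix=2, rng=None):
    """Create n_mix mixtures that all contain sources[0]; each mixture combines n_per_mix sources.

    sources: array of shape (N, H, W); the remaining sources are split among the mixtures.
    """
    rng = rng or np.random.default_rng()
    target, others = sources[0], list(range(1, len(sources)))
    rng.shuffle(others)
    mixtures = []
    for m in range(n_mix):
        extra = [sources[k] for k in others[m * (n_per_mix - 1):(m + 1) * (n_per_mix - 1)]]
        comps = np.stack([target] + extra)        # (n_per_mix, H, W)
        w = rng.uniform(size=len(comps))
        w = w / w.sum()                           # weights of each mixture sum to 1
        mixtures.append(np.tensordot(w, comps, axes=1))
    return np.stack(mixtures), target             # ((n_mix, H, W), (H, W))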
The second hyperparameter is the source assignment strategy. In this paper, we consider three types of source assignment strategies:
§.§.§ Cross-patients SI
To make the network learn to identify the target source image and discriminate it from the other source images, N random patients are used to extract signals (2D slice per patient) respectively in every training sample. We refer to this SI variant as Cross-patients SI (CSI).
§.§.§ Within-patient SI
To make the network focus on each particular source in the dataset, we use the same patient image to extract all N signals (all 2D slices from the same patient). Since information from only 1 patient is used in each mixture, the network is unlikely to learn the cross-sources information among different patients. We refer to this SI variant as Within-patient SI (WSI).
§.§.§ Denoising SI
To investigate the difference between the proposed SI task and the traditional denoising task, we replace sources s_2 and s_3 in CSI with random Gaussian noise with zero mean and unit variance. This task is similar to a traditional denoising task and would encourage the network to learn representative features that distinguish differently distributed sources like image and noise explicitly. We refer to this SI variant as Denoising SI (DSI).
All experiments in Section <ref> use the linear combination of three signals in each of two mixtures, as shown in Equation (<ref>), where in total N=5 signals are used to generate a training sample. This setting is tuned on the validation set for both datasets. To prevent the network from learning trivial features, all combinations without any overlap between the brain regions are excluded from the training samples.
§.§ Settings for The Baseline Tasks
For inpainting, the grid size is tuned over the range [2,2] to [64,64] and the masking percentage over the range 0% to 100%; for local pixel shuffling and super-resolution, the grid size has the same tuning range as in inpainting. There is no hyperparameter to tune for the non-linear intensity shift. All hyperparameters are tuned on the validation set based on the main task performance.
§.§ Network Architecture
We use the same network backbone for both the self-supervision proxy tasks and the segmentation main task. It is based on 2D UNet <cit.> and details of the network are shown in Figure <ref>. The network has two input-output layer settings: 1) for training the proposed SI task, the input layer has T×2 channels where T is the number of imaging modalities for the two input mixtures. The output layer has T channels for reconstructing all modalities of s_1; 2) for segmentation, the input layer is replaced with a new layer with T channels for the input image x and the output layer is replaced with a layer with C channels for the segmentation predictions where C is the number of classes. All the intermediate layers are shared between the pretrained proxy task and the main task. When no pretrained network is used, the weights of all convolutional layers are initialized by Kaiming initialization <cit.>.
The choice of the network parameters is influenced by the state-of-the-art nnUNet <cit.> model, described in Section <ref>.
§.§ Training Strategy and Data Augmentation
We conduct main experiments in a fully-supervised setting and a semi-supervised setting for both datasets.
§.§.§ Fully-supervised Setting
There are two steps to train the network in a self-supervised manner. First, we pre-train the network with the corresponding proxy task, as described in Sections <ref> and <ref>. The proxy task uses the same dataset as the main task; e.g., for the BraTS dataset, we pre-train and fine-tune the network on the same 100 (labeled) images from the training set. A batch size of 1 is used for the proxy task in all experiments in this paper, chosen by tuning from 1 to 4 on the validation set. Next, for the main task, we use batch sizes of 8 and 4 in BraTS and WMH, respectively, obtained by tuning between 1 and 16 on the validation set.
§.§.§ Semi-supervised Setting
In the fully supervised setting, we utilize the entire training dataset to pre-train and fine-tune the network. Since the strength of self-supervision comes from a network needing a much smaller volume of data to be fine-tuned, we also conduct experiments to test this hypothesis, which we call the semi-supervised setting. In this setting, the network is pre-trained on the entire training dataset but fine-tuned on only a fraction of the training data. 25 of 100 labeled images were used from the training set to fine-tune the pre-trained model for BraTS; for WMH we used only 5 images. The same batchsize is used for both proxy task and main task as was used in the fully-supervised setting.
§.§.§ Data Augmentation and Optimization Parameters
Random rotation, scaling, flipping, and elastic deformation are applied to the original 2D images as data augmentation in all experiments. Following the nnUNet paper, we use the SGD optimizer and the 'poly' learning rate policy (1-(epoch/epoch_max)^0.9), where epoch_max=1000 for the BraTS dataset and 10000 for WMH, with initial learning rate 1×10^-2, momentum 0.99, and weight decay 3×10^-5 for both the proxy task and the main task. Early stopping is applied when there is no improvement for 50 epochs to avoid overfitting to the validation set. We also tried restarting the optimization for the main task after optimizing from random initialization, which we call CNN-restart, for both datasets for a fair comparison.
§ RESULTS
§.§ Segmentation Results
Table <ref> shows the segmentation results for the two datasets in the fully-supervised setting. The proposed Cross-patients SI method achieves the best average performance (except TC: the tumor core in BraTS) in both datasets and shows significant improvement over the other baselines and SI variants in four out of five classes (WT: whole tumor, ET: enhancing tumor, All: WT+TC+ET, and WMH). The All class calculates the Dice coefficient of WT+TC+ET together (by concatenating the three classes but not summing up them into one class) and is the most important one in BraTS.
Among the three different settings of source identification task (CSI, WSI, and DSI), CSI achieves the best results with a Dice score of 0.861 (All) and 0.793 in BraTS and WMH datasets separately, which is significantly better than WSI and DSI. WSI and DSI have similar performance in both datasets and are not significantly different from each other. This suggests the importance of the cross-source setting. One reason could be that compared to WSI and DSI, CSI is using the data more efficiently where the network sees more source images per epoch. It should also be noted that the pixel shuffle task shows worse performance than the CNN baseline in four out of five classes (significant in TC and ET classes). In the tumor core (TC) segmentation, four methods (inpainting, intensity shift, super-resolve, and CSI) show comparable improvements to the CNN baseline (not significant to each other), which indicates the efficiency of different self-supervised methods may vary through different classes and the tumor core segmentation is more difficult to improve compared to other classes. Nevertheless, overall, the proposed CSI can provide a better starting point for the segmentation task than most of the self-supervision baseline tasks.
§.§ Semi-supervised Results
We conduct experiments on both datasets in semi-supervised settings in order to investigate how much the proposed self-supervision task would help when only a smaller amount of labeled data is available to train the proxy task. The results are shown in Table <ref>. Similar trends can be observed from these semi-supervised results compared to those in fully-supervised results. Similar to Table <ref>, the proposed CSI method gets the largest improvements in BraTS (except the tumor core) and WMH. The improvements are significant compared to all other methods in whole tumor and All in BraTS. In WMH, both the proposed CSI method and inpainting are significantly better than the other methods. It should also be noted that when only few labeled images are available, more self-supervision methods show significant improvements compared to CNN baseline (12* results in Table <ref> compared to 4* results in Table <ref>). This shows the general advantages of feature learning in self-supervision methods compared to CNN baseline.
The SI variants WSI and DSI still show close performance to each other in most classes and perform significantly worse than CSI. Similar to the fully-supervised setting, the pixel shuffle task does not show improvements compared to the CNN baseline in most classes. It should be noted that the CSI performance in semi-supervised setting (0.837 in BraTS and 0.783 in WMH) is very comparable to the fully-supervised CNN baseline result (0.846 in BraTS and 0.775 in WMH), which required 4 times more training images. Inpainting and super-resolve show better performance than CNN baseline, but still worse than CSI (significant in BraTS). The proposed method shows larger performance improvements in WMH dataset where far fewer labeled data are used compared to BraTS dataset (5 labeled vs. 25 labeled and with 4.4% vs. 3.1% Dice improvements to the CNN baseline). This shows in a practical situation in medical imaging where segmentation labels are scarce, a well-designed self-supervision task can still preserve considerable performance given enough unlabeled data.
§.§ Influence of The Number of Sources
We conduct experiments to investigate the influence of the number of images used in the proposed SI task. N = 3, 5, 7 and Ñ = 2, 3, 4 sources (e.g. in Equation (<ref>), N = 5, Ñ = 3) are tested to generate M̃ = 2 fused images as input to the network. The experiments are independent runs on the BraTS and WMH datasets in the fully-supervised setting. Note that the hyperparameters N and Ñ are tuned on the validation set for all experiments. The results are shown in Figure <ref>. We can see that the setting N = 5, Ñ = 3 achieves the best performance in the main segmentation task for CSI and WSI, while for DSI the effect is much smaller. Too few sources may make it too easy to reconstruct the target signal, which may result in trivial features, while too many sources may make it too difficult to recognize the target, resulting in arbitrary features.
§.§ Comparison to Mixup
Table <ref> shows the results to compare our proposed approach to Mixup in the fully-supervised setting. Here, the Cross-patients Source Identification (CSI) based self-supervised pre-training is compared to the baseline CNN without any pre-training, with and without Mixup as an additional data augmentation strategy. Results show that Mixup improves both the baseline and our proposed approach, with a higher relative improvement for detecting tumours in the BraTS dataset.
§ DISCUSSION
In this paper, we propose a new self-supervision task named source identification (SI) which is inspired by the blind source separation problem, and we investigate the task ambiguity in the SI problem for neural networks. Unlike most previous reconstruction-based self-supervision tasks that focus on restoring image contents from only one source image, the proposed task enables the network to see multiple images from mixtures and learn to separate the source image from the others and reconstruct it. The experiments show that the proposed method outperforms baseline methods in both datasets including the CNN baseline, restart-CNN with the initial learning rate, and commonly used self-supervised methods inpainting, pixel shuffle, intensity shift, super-resolution, and denoising. The proposed method shows the largest improvements in the semi-supervised setting when very few labeled data and many unlabeled data are available, which is a common scenario in medical imaging applications.
§.§ Comparison to Other Self-supervision Methods
One main difference between the proposed SI task and existing reconstruction-based self-supervision tasks is that SI learns features from not only the remaining part of the same distorted image but also from other images of the same domain. By distinguishing each image from others, potentially useful discriminative features can be learned while reconstructing the target image. These features may better capture general domain knowledge, e.g. anatomy and pathology knowledge, by seeing and comparing different patients' images at the same time. A proper understanding of anatomy and pathology across different individuals is required to successfully solve a single image identification and reconstruction. Features learned by SI may therefore provide a better starting point for optimization of the downstream task than the features learned by previous self-supervision tasks such as inpainting, pixel shuffling, intensity shift, super-resolution, and denoising.
In this paper, we focus on the comparison between reconstruction-based self-supervised methods, which all use the synthetic distorted image as input and the original target image as ground truth. We consider the context prediction-based methods such as tiles location prediction <cit.>, puzzle solving <cit.>, contrastive learning <cit.> as another category of self-supervised tasks. These methods optimize a predefined classification/regression task based on the information within a single image <cit.> or across different images <cit.>, and thus they usually do not train a relevant (dense) decoder. On the contrary, the reconstruction-based methods inherently require a dense decoder for learning concrete and high-resolution features and outputting dense pixelwise predictions, which may result in a model that fits better to dense prediction tasks like segmentation.
§.§ Apply SI using Unlabeled Data with Less Overfitting
Self-supervised learning allows using unlabeled data without additional annotations from experts and pretraining with both labeled and unlabeled data before fully-supervised learning. The quality of the learned features from self-supervised tasks is usually evaluated on downstream tasks like segmentation. In our experiments, larger improvements are observed in the semi-supervised setting compared to fully-supervised setting, especially for the WMH dataset. Our results show that given the same amount of unlabeled data, the proposed SI can learn more useful features from unlabeled data compared to other self-supervised tasks. One reason could be that the proposed SI task may suffer less from the overfitting problem compared to traditional methods like inpainting and super-resolution. For example, given the unlabeled data, the model may try to solve the inpainting or super-resolution task by memorizing the input images and restoring the missing content when there is enough model capacity, which may result in learning trivial features. In contrast, the SI task takes inputs from many more different combinations of images given the same amount of unlabeled data (when N = 5 in 100 images, the number of possible image combinations would be the binomial coefficient C(100,5)×5≈3.8×10^8), which makes the model more difficult to memorize and overfit to a particular image but has to find a more general way to solve the SI task, e.g. learning anatomy knowledge, which can be non-trivial and useful for downstream tasks like segmentation.
§.§ Application to Other Dense Prediction Tasks
In this paper, we apply the proposed SI method to segmentation, a dense prediction task. The pretrained SI features can also be transferred to other medical imaging dense prediction tasks such as for instance depth estimation <cit.>, image registration <cit.>, and detection based on distance maps <cit.>. Moreover, these tasks may also benefit from the cross-sources features learned in the SI method. For example, a good image registration model may require not only the alignments between local patterns across different modalities (within one patient) but also the general anatomy knowledge across different patients to constrain possible transformations. With a proper design of the proxy dataset and the SI setting, the potential scenarios to apply the proposed method can be greatly extended.
§.§ Limitations
It has been shown in the literature that the performance of self-supervised approaches differs significantly based on the difficulty of the pretraining task and its relatedness to the main task <cit.>. For example, the performance of inpainting as a self-supervision task suffers when the size of the masked area is too large or too small: if the masked area is too large, the pretraining task becomes too difficult to solve; if it is too small, the task becomes trivial. This affects the quality of the learned features, and hence the effectiveness of the network on the main task. Similarly, for our approach, the performance of the network is determined by how separable the mixed images are and how much information the network needs to learn to separate them.
We indirectly test the former hypothesis in Section <ref>, where it is shown that very similar images would be extremely hard to separate. To explore whether this is a practical problem in our case, we devise a simple statistical experiment. First, pairs of 2D slices are randomly sampled from different images in one dataset (BraTS or WMH), and data augmentation is applied to them as described in Section <ref>. Next, the brain mask is extracted from the resulting images via simple intensity-based thresholding, and the overlap of the corresponding brain masks is measured using the Jaccard similarity. Finally, the distribution of the measured similarities is plotted and shown in Figure <ref>. As we can observe, the similarities range almost uniformly from very low (nearly 0) to moderately high (0.75), indicating that for our datasets the network receives a wide range of mixed images for training. As mentioned in Section <ref>, we exclude all mixed images with zero similarity (no overlap at all) to prevent the network from learning trivial features. Thus, for our experiments, we do not need extra control over the degree of image mixing.
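A minimal sketch of this overlap statistic is given below; the threshold value, the array shapes, and the sampling loop are assumptions for illustration rather than the exact preprocessing used in our pipeline.

import numpy as np

def brain_mask(slice_2d, threshold=0.05):
    """Assumed brain-mask extraction via simple intensity thresholding."""
    return slice_2d > threshold

def jaccard(mask_a, mask_b):
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 0.0
    return np.logical_and(mask_a, mask_b).sum() / union

def overlap_distribution(slices, n_pairs=1000, rng=None):
    """Jaccard similarity of brain masks for randomly sampled slice pairs,
    discarding non-overlapping pairs as described in the text."""
    rng = np.random.default_rng(rng)
    sims = []
    for _ in range(n_pairs):
        i, j = rng.choice(len(slices), size=2, replace=False)
        s = jaccard(brain_mask(slices[i]), brain_mask(slices[j]))
        if s > 0:
            sims.append(s)
    return np.array(sims)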
The second hypothesis revolves around how much information the network needs to learn to identify the source from the mixed images. In Section <ref> we empirically demonstrate the effect of the number of fused sources on the final performance. It is noticed that too few or too many fused sources are detrimental to the efficiency of the network.
Our proposed approach is sensitive to these two degrees of freedom and, although we have enough empirical evidence for the datasets in question, further testing is required to make a general comment about the sensitivity of our method to these two factors.
§ CONCLUSION
We propose a novel self-supervision task called source identification which is inspired by the classic blind source separation problem. The proposed task is to identify and separate a target source image from mixtures with other images in the dataset, which requires features that are also relevant for the downstream task of segmentation. On two brain MRI segmentation tasks, the proposed method provides a significantly better pretrained model for segmentation compared to other self-supervision baselines including inpainting, local pixel shuffling, non-linear intensity shift, and super-resolution in both fully-supervised and semi-supervised settings. The proposed method can be generalized to other dense prediction applications.
§ ACKNOWLEDGMENT
The authors would like to thank Gerda Bortsova and Hoel Kervadec for their constructive suggestions for the paper, and the organizers of the BraTS 2018 and WMH 2017 Challenges for providing the public datasets. This work was partially funded by the Chinese Scholarship Council (File No. 201706170040).
|
http://arxiv.org/abs/2307.02854v1
|
20230706084133
|
Characterization of the photon emission statistics in nitrogen-vacancy centers
|
[
"Iván Panadero",
"Hilario Espinós",
"Lucas Tsunaki",
"Kseniia Volkova",
"Ander Tobalina",
"Jorge Casanova",
"Pablo Acedo",
"Boris Naydenov",
"Ricardo Puebla",
"Erik Torrontegui"
] |
quant-ph
|
[
"quant-ph"
] |
|
http://arxiv.org/abs/2307.01775v1
|
20230704152743
|
Equation for Aeroacoustics in a Quiescent Environment
|
[
"Tapan K. Sengupta",
"Aditi Sengupta",
"Bhavna Joshi"
] |
physics.flu-dyn
|
[
"physics.flu-dyn"
] |
The perturbation equation for aeroacoustics has been derived in a dissipative medium from the linearized compressible Navier-Stokes equation without any assumption, by expressing it in the spectral plane, as in "Continuum perturbation field in quiescent ambience: Common foundation of flows and acoustics", Sengupta et al., Phys. Fluids, 35, 056111 (2023). The governing partial differential equation (PDE) for the free-field propagation of the disturbances in the spectral plane provides the dispersion relation between wavenumber and circular frequency in the dissipative medium, as characterized by a nondimensional diffusion number. Here, the implications of the dispersion relation of the perturbation field in the quiescent medium are probed for different orders of magnitude of the generalized kinematic viscosity, across large ranges of the wavenumber and the circular frequency. The adopted global spectral analysis helps not only to classify the PDE into parabolic and hyperbolic types, but also to explain the existence of a critical wavenumber depending on space-time scales.
§ INTRODUCTION
There is a long precedence of studies on wave propagation in various branches of physics, and yet there is no clear definition of waves, as noted by <cit.>. The canonical wave equation was first described by <cit.> as,
u_tt = c^2 u_xx
for the one-dimensional transverse vibration of string in tension. The analytical solution of equation (<ref>), subject to initial conditions is available in textbooks (see, e.g., <cit.> and <cit.>), which is characterized by the non-dissipative and non-dispersive nature (i.e. frequency-wavenumber independence) of the solution that is used as a benchmark for developing numerical methods in different branches of engineering and applied physics. For electromagnetic field, <cit.> obtained the wave equations for the electric field E, and the magnetic field B, with c as the speed of light in a medium of permeability μ_p and permittivity ϵ_p that fix c = 1/√(μ_p ϵ_p). An electromagnetic wave is transverse in nature, with E and B oscillating perpendicular to the direction of wave propagation.
Other notable wave phenomena governed by equation (<ref>) are given in <cit.>. The canonical wave equation in acoustics for perturbation pressure is given by <cit.>, based on Euler equation. Similarly, elastic wave propagation in solid mechanics in <cit.> relates applied strain and stress, with the longitudinal displacement u, given by equation (<ref>) and c^2 = E_0/ρ, where E_0 is the Young's modulus, and ρ is the density of the medium.
The interest in information propagation as sound and flow perturbations arose in a bid to develop an unified description for the disturbance propagation in a dissipative medium, as initiated in <cit.>. The fundamental governing equation for perturbation pressure in a quiescent medium is given by,
∂^2 p'/∂ t^2 = c^2∇^2 p' + ν_l ∂/∂ t∇^2 p'
where the generalized kinematic viscosity is defined as ν_l = (λ + 2μ)/ρ̅, which includes the effects of losses during the propagation of the signal either as transverse or longitudinal waves and/or diffusive disturbances. The derivation of equation (<ref>) does not require the Stokes' hypothesis <cit.>; the first and second coefficients of viscosity contribute to the bulk viscosity responsible for signal attenuation, specifically for the propagation of the acoustic signal as a longitudinal wave via compression and dilatation. Equation (<ref>) incorporates the first and second coefficients of viscosity via the bulk viscosity defined by μ_b = λ + (2/3)μ. As reported in <cit.>, the effects of bulk viscosity on the propagation of compression and dilatation waves were included for the onset of the Rayleigh–Taylor instability in <cit.>, showing an improved solution of the compressible Navier-Stokes equation without invoking the Stokes' hypothesis. Readers interested in a comprehensive review of research on the effects of μ_b are referred to <cit.> and the many references contained therein. From a classical thermodynamic point of view, the Stokes' hypothesis equates the thermodynamic pressure to the mechanical pressure for fluid flow. Furthermore, for gases, violation of the Stokes' hypothesis is also related to relaxation processes of vibrational modes of the polyatomic medium. Air, containing diatomic nitrogen and oxygen molecules, experiences inelastic transfer of energy due to collisions between these molecules, changing the translational energy and thereby the normal stresses <cit.>. Numerical estimates of μ_b for ideal/noble gases and liquids have been reported in <cit.>. Unlike in the Stokes' hypothesis, λ is independent of μ and can be orders of magnitude higher in value. Note that λ and μ are both dissipative, and hence must have positive sign, while the Stokes' hypothesis violates this observation. Furthermore, <cit.> has shown that many common fluids, and even diatomic gases, display μ_b a thousand times larger than μ.
In deriving equation (<ref>), the following polytropic relation between pressure and density perturbations has been presumed,
∂ρ'/∂ t = 1/c^2∂ p'/∂ t
However, for a more general polytropic process, one relates the perturbation pressure to the density perturbation as p' = K_1 (ρ')^n. Thus, one can rewrite the above equation as,
∂ p'/∂ t = n/γ c^2 ∂ρ'/∂ t
For acoustic signal propagation in a perfect gas, the polytropic index n can be considered to lie between one (for an isothermal process, as was erroneously assumed in early treatments of sound propagation) and γ (the ratio of specific heats at constant pressure and constant volume), the latter corresponding to the propagation of the acoustic wave as a reversible adiabatic process neglecting losses and heat transfer. This is also the isentropic process assumed for the classical wave equation (<ref>). However, the propagation of sound in a dissipative medium entails small heat transfer due to losses, and the polytropic index n will differ from γ by a small amount. The small departure from the isentropic condition stems from the observation that, for a perfect gas, the change of entropy across a normal shock wave varies as the cube of the pressure jump across it; during the propagation of an acoustic signal via compression and dilatation, the associated pressure jump is negligibly small. Thus, for the linearized analysis reported here, one can ignore the difference between n and γ in equation (<ref>). Moreover, the dispersive nature of propagating sound waves raises concerns about assuming a single value for the speed of sound.
The propagation of noise in air requires procedures for calculating the sound absorbed under standard meteorological conditions, such as those given by the standards issued by SAE in 1975 and ANSI in 1978, whose details are available in <cit.>. The improved standards were obtained by additionally using the concepts of molecular absorption of sound by nitrogen at lower frequencies and vibrational-mode energy exchange between water vapor and oxygen molecules, prevalent at higher frequencies.
Studies conducted in the free field are affected by nonstationarity, inhomogeneity, and spreading. A comprehensive set of low-frequency laboratory measurements needed to identify the relaxation frequency of nitrogen in air was provided by <cit.> in an experiment performed in a resonant tube. This also provided independent sound absorption measurements overlapping those in dry air, closing the gap between oxygen relaxation theory and experiment. Thus, one can understand the theoretical properties by studying the planar propagation of the perturbation field.
For the one-dimensional planar propagation of the perturbation field, equation (<ref>) simplifies to (as given in <cit.> and derived in <cit.>),
∂^2 p'/∂ t^2 - c^2∂^2 p'/∂ x^2 - ν_l ∂^3 p' /∂ t∂ x^2 = 0
The hydrodynamic and acoustic components of the pressure field span widely different orders of magnitude. Thus, it is a challenge to solve the flow and acoustic problems simultaneously. The major contribution in <cit.> is the use of the global spectral analysis (GSA) given in <cit.> to classify the wavy and non-wavy nature of the solutions of space-time dependent PDEs, based on the necessary condition that the imaginary part of the amplification factors vanishes for all the physical modes. This classification method for PDEs is different from the classical approach available in textbooks, as in <cit.>. For ease of understanding, a brief description is provided in the following section.
The paper is formatted in the following manner. In the next section, the analysis of space-time dependent PDEs is described. In section 3, ramifications of this classification are provided for the propagation of acoustic signal in quiescent free-field over an extended ranges of wavenumbers and circular frequencies. Also in this section, the dispersive nature of acoustic signal propagation is further discussed to highlight the difference between the classical wave equation with the space-time dependent perturbation field equation developed and presented here. Specifically, the transformation of the wave equation to diffusion equation across a critical wavenumber is presented in both the sections 3 and 4. The paper closes with a summary and conclusion in section 5.
§ GLOBAL SPECTRAL ANALYSIS OF SPACE-TIME DEPENDENT PDES
This is demonstrated with the help of the space-time dependent acoustic signal propagation equation (<ref>). This equation arises from the linearized response for free-field propagation in flows and/ or in acoustics. This is presented in the spectral plane by representing the fluctuating pressure by,
p'(x,t) = ∫∫p̂ (k,ω)e^i(kx-ω t) dk dω
The governing equation (<ref>) is represented in the spectral plane by using the above to get the dispersion relation as,
ω ^2 + iν_l k^2 ω - c^2 k^2 = 0
However, in the framework of GSA, one rewrites the perturbation pressure via the hybrid representation as,
p'(x,t) = ∫_Brp̂ (k,t)e^ikx dk
The integral on the right hand side is performed along the Bromwich contour, Br, defined in its strip of convergence as described in <cit.>. For the GSA of the governing equation, a length-scale (L_s) and a time-scale (τ_s) are introduced, so that one can define the physical amplification factor with respect to τ_s and the wavenumber (k) as,
G_1,2 (k,τ_s) = p̂(k,t+τ_s)/p̂(k,t)
To present the results in non-dimensional form, nondimensional wavenumber is introduced as kL_s and a non-dimensional time is introduced by, N_τ = cτ_s/ L_s. If L_s is the smallest resolved length scale, then kL_s will span from zero to the Nyquist limit of π as explained in <cit.>.
Using GSA, one can define the physical amplification factor as in <cit.> by,
G_1,2 (k,τ_s) = e^-iω_1,2τ_s
where ω_1,2 are obtained as the roots of the dispersion relation in equation (<ref>), given by,
ω_1,2 = -i ν_l k^2/2± kcf
where, f = √(1-(ν_l k/2c)^2) is the factor that defines the deviation of the dispersion relation from its non-dissipative, isentropic counterpart of the classical wave equation. In GSA, the wavenumber k is the independent variable, and the dispersion relation in equation (<ref>), provides the dependence of the circular frequency on k.
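The dispersion relation and the resulting amplification factors are straightforward to evaluate numerically; a minimal sketch is given below, where the air-like values of c and ν_l are taken from the example discussed later in the paper and the k-range and τ_s are arbitrary illustrations.

import numpy as np

def dispersion_roots(k, c, nu_l):
    """Roots omega_{1,2} of omega^2 + i*nu_l*k^2*omega - c^2*k^2 = 0."""
    f = np.sqrt(1.0 - (nu_l * k / (2.0 * c)) ** 2 + 0j)  # complex sqrt covers f^2 < 0
    damping = -1j * nu_l * k**2 / 2.0
    return damping + k * c * f, damping - k * c * f

def amplification_factors(k, c, nu_l, tau_s):
    """Physical amplification factors G_{1,2} = exp(-i*omega_{1,2}*tau_s)."""
    w1, w2 = dispersion_roots(k, c, nu_l)
    return np.exp(-1j * w1 * tau_s), np.exp(-1j * w2 * tau_s)

# Illustrative values (air at 20 degrees C, as used later in the text); tau_s is assumed.
c, nu_l, tau_s = 343.11, 0.1443, 1.06e-7
k = np.linspace(1.0, 2.0e4, 500)
G1, G2 = amplification_factors(k, c, nu_l, tau_s)
# f becomes purely imaginary for k > 2*c/nu_l, where the solution turns diffusive.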
The complex exponents of the amplification factors indicate corresponding phase shifts given by,
β_1,2 = ± kcf τ_s
The positive value of k-dependent f shows the dispersive nature of the dissipative medium, as opposed to the non-dispersive nature of the classical wave equation. The phase speed and the phase shift are related by the nondimensional phase speeds of the acoustic equation as,
c_ph 1,2/c = β_1,2/kc τ_s = ± f
The corresponding group velocity components (v_g 1,2) of the acoustic equation are obtained as,
v_g 1,2 = d ω_1,2/dk = ± cf ∓(k ν_l )^2/4fc - i ν_l k
An important aspect of this analysis is explained next with the help of the real and imaginary parts of the first physical amplification factor written as,
(G_1)_real = e^-k^2 ν_l τ_s/2 cos (kfcτ_s), (G_1)_imag = -e^-k^2 ν_l τ_s/2 sin (kfcτ_s)
These two parts indicate a phase shift over τ_s to be given by β_1, as noted above.
This fixes a phase speed and group velocity over this time interval, as given above.
For a physical system that does not admit anti-diffusion, it has been shown in <cit.> that the imaginary part of v_g 1,2 must be zero. However, for a non-dissipative system: ν_l = 0, and then the governing equation becomes non-dispersive, with group velocity equal to the phase speed, as is the case for the classical wave equation, with both of these equal to c. Presence of the dissipative term makes the dynamical system diffusion-dominated, and one can define a diffusion number given by,
D_n = ν_l τ_s / L_s^2.
A typical estimate of non-zero bulk viscosity is given in <cit.>, where the generalized kinematic viscosity has been obtained by regression analysis of the experimental data in <cit.>. This resulted in μ_b = 7.383 × 10^-4 + 3.381 × 10^-4 T^*, where T^* is in Kelvin. Considering air at a temperature of 20°C and the corresponding density and dynamic viscosity, one obtains the value λ / μ = 9520. For the analysis presented in a limited region of the (N_τ, kL_s)-plane in <cit.>, the properties have been demonstrated for D_n = 0.14. In contrast, a detailed analysis is presented here for three values of D_n = 1.4, 0.14 and 0.07 in the same region of the (N_τ, kL_s)-plane, which brings out important insights on the relationship between the length and time scales and shows a change of character of the governing PDE from hyperbolic to parabolic type, depending on the length scale for a fixed time scale, as explained later for the sub-critical range of wavenumber.
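The hyperbolic/parabolic demarcation explored in the next section can be mapped numerically from these definitions; the sketch below evaluates the sign of f^2 over an assumed (N_τ, kL_s) grid for a fixed D_n, with grid ranges chosen only for illustration.

import numpy as np

def classify_plane(D_n, kLs_max=np.pi, Ntau_max=2.0, n=400):
    """Classify the governing PDE over the (N_tau, k*L_s)-plane for fixed D_n.

    Uses f^2 = 1 - (k/k_c)^2 with k_c*L_s = 2*N_tau/D_n: f^2 > 0 gives an
    attenuated wave (hyperbolic), f^2 <= 0 gives a purely diffusive solution
    (parabolic).
    """
    Ntau = np.linspace(1e-3, Ntau_max, n)
    kLs = np.linspace(1e-3, kLs_max, n)
    NT, KL = np.meshgrid(Ntau, kLs)
    f_sq = 1.0 - (KL * D_n / (2.0 * NT)) ** 2
    return NT, KL, f_sq > 0.0          # True where the PDE behaves hyperbolically

NT, KL, hyperbolic = classify_plane(D_n=0.14)
# The boundary of `hyperbolic` traces the straight line k_c*L_s = 2*N_tau/D_n.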
The ramifications of the GSA in classifying space-time dependent PDEs are described in the next section.
§ CHARACTERIZATION OF SPACE-TIME DEPENDENT PDES BY GSA
The characterization of the PDE given by equation (<ref>) is explained analytically, by inspection of the amplification factors for different length and time scales. These properties are displayed in the (N_τ, kL_s)-plane in the following for the chosen diffusion number D_n, as it was reported in <cit.> for the single case of D_n =0.14 only. This way of representing the property for fixed D_n is advantageous, but the results need careful interpretation, as N_τ and D_n are both functions of τ_s. In the following, properties for three different values of D_n's are presented here. One notices the fact that the dispersion relation given by equation (<ref>) is quadratic, i.e. the dispersion relation for planar wave propagation has two modes.
In figure <ref>, the properties are shown for the case of a large diffusion number, D_n = 1.4, for both modes. In the top frames, the imaginary part of G_1,2 is shown for this D_n as a function of the nondimensional wavenumber (kL_s) and the nondimensional time scale,
N_τ = c τ_s/ L_s. Absence of the imaginary part of G_1,2 indicates cases in which the amplification factor is strictly diffusive. Such a condition is representative of a parabolic PDE, whose amplification factor is strictly real, as shown using GSA in <cit.>. When the amplification factor is complex in the remaining part of the (N_τ, kL_s)-plane, equation (<ref>) represents an attenuated wave, which is typical of a hyperbolic PDE. From the definitions of D_n, N_τ and k_c = 2c/ν_l, it is readily apparent that k_c L_s = 2N_τ/D_n. Thus, k_c L_s varies linearly with N_τ, and this line forms the boundary between the parabolic and hyperbolic PDE regions. This demarcating slanted straight line is identical for both modes, with a slope inversely proportional to D_n.
In the middle two frames, the two components of the physical phase speed given in equation (<ref>) are plotted in the same (N_τ, kL_s)-plane for D_n = 1.4; the region adjacent to the y-axis has a contour value of c_1ph equal to zero, which again implies the parabolic nature of the governing PDE, noted for both modes. In the hyperbolic part of the domain in the (N_τ, kL_s)-plane, the red regions indicate a right-running wave and the blue contours indicate a left-running wave.
In the bottom two frames, the two components of the physical group velocity given in equation (<ref>) are plotted in the same (N_τ, kL_s)-plane for D_n = 1.4, and the left portion above the k_c-line has the group velocity equal to zero, once again implying the parabolic nature of the governing acoustic equation, noted for both the energy-carrying modes. In the hyperbolic part of the region in the (N_τ, kL_s)-plane, the red regions indicate a right-running wave and the blue contours indicate a left-running wave. The boundary between the parabolic and hyperbolic PDEs is defined from equation (<ref>), for which f = 0, rendering ω_1,2 strictly imaginary.
In figure <ref>, the results are shown for the case of D_n = 0.14, to compare with the results in figure <ref>. In the top two frames, the imaginary part of G_1,2 are shown for this D_n, in the (N_τ, kL_s)-plane for identical ranges. As before, absence of the imaginary part helps identify the values of kL_s and N_τ for which the amplification factors are strictly diffusive. From the relation among D_n, N_τ and k_cL_s, it is already noted that the demarcating straight line for k_c L_s is sloped more towards the ordinate-axis, for the lower value of D_n in figure <ref>. These non-dimensional sloping straight lines for the non-dimensional critical cut-off wavenumber are noted for both the modes in the contour plots for the phase speed and the group velocity.
In figure <ref>, the properties of propagating disturbances are shown for the case of D_n = 0.07. Once again, the contour plots of the imaginary part of the amplification factors are shown in the top two frames for both modes in the (N_τ, kL_s)-plane spanning the same ranges of the abscissa and the ordinate. Due to the further lowered value of D_n, the critical wavenumber given by the k_c L_s-line aligns more closely with the y-axis, as noted in all six frames of this figure.
For the three values of D_n, it is noted that for the hyperbolic solution the first mode has a non-dimensional phase speed ranging between 0 and +1 (right-running wave), while the second mode shows this non-dimensional phase speed spanning between -1 and 0 (left-running wave). The exact value of zero is noted in the range where the governing PDE is parabolic for both modes. For the group velocity components, the modes show the wave-packet propagation speed to be zero in the parabolic region of the (N_τ, kL_s)-plane, while the hyperbolic part of the region shows non-zero values spanning both negative and positive values. The first mode displays a large negative value for the group velocity, while the positive value is restricted to +1. The magnitude of the negative group velocity increases with decreasing value of D_n. Similarly, the second mode displays a large positive value, while the negative value is restricted to -1.
§.§ Alternate description of acoustic signal propagation in a dissipative medium
So far, we have noted that the role of the dissipative medium in propagating a perturbation is determined by the length and time scales, which decide whether the linearized governing PDE is parabolic or hyperbolic in nature. This is governed by the dispersion relation. The nature of acoustic signal propagation is further discussed to highlight the difference between the classical wave equation and the space-time dependent perturbation field equation developed here.
In solving equation (<ref>) by presenting the properties in terms of the amplification factor (equations (<ref>), (<ref>)), the modal phase speeds (equation (<ref>)), and the modal group velocities (equation (<ref>)) in figures <ref>, <ref> and <ref>, we have used the length scale (L_s) and the time scale (τ_s) for non-dimensionalization, with a constant diffusion number (D_n) as the parameter. This helped in identifying a critical wavenumber (k_c) line in the non-dimensional (N_τ, kL_s)-plane, with its equation given by k_c L_s = 2N_τ/D_n. While each figure plotted for D_n = constant contains a wealth of data in the hyperbolic region of the domain, less information can be gleaned in the parabolic region. The presence of such a transition from the wave equation to a diffusion equation is similar to the conjecture attributed to the Kolmogorov length scale for very high Reynolds number, convection-dominated flow in the turbulent regime, as noted in <cit.>. It has been derived from first principles in <cit.> and here. However, one also notes that for the constant-D_n plots, each and every point represents a fluid with a different generalized viscosity (ν_l). To circumvent these difficulties, it is preferable to represent the properties in the same non-dimensional plane, but with ν_l = constant as the parameter. This immediately fixes the cut-off wavenumber, given by k_c = 2c/ν_l, as a horizontal straight line.
For the case considered in <cit.>, with measurements reported for the attenuated acoustic signal, and the case shown in figure <ref>, one notes c = 343.11 m/s for air at an ambient temperature of 20°C and ν_l = 0.1443 m^2/s, which fixes the cut-off wavenumber as k_c = 4755.568 m^-1. The maximum wavenumber (k_max) is chosen as four times the value of k_c (a choice that is arbitrary in the absence of any physical data). This, in turn, helps one choose the smallest resolved length scale in this analysis as L_s = 3.30485 × 10^-4 m. For the fixed diffusion number D_n = 0.14, this corresponds to a time scale τ_s = 1.0596 × 10^-7 s.
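These dimensional choices can be cross-checked directly from the definitions of k_c and D_n; small rounding differences with respect to the quoted figures are expected.

# Cross-check of the dimensional scales quoted above (air at 20 degrees C).
c = 343.11          # speed of sound, m/s
nu_l = 0.1443       # generalized kinematic viscosity, m^2/s
D_n = 0.14          # diffusion number
L_s = 3.30485e-4    # smallest resolved length scale, m

k_c = 2.0 * c / nu_l             # cut-off wavenumber, ~4.76e3 1/m
tau_s = D_n * L_s**2 / nu_l      # time scale from D_n = nu_l*tau_s/L_s^2, ~1.06e-7 s
print(k_c, tau_s)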
In the following figures, this same value of L_s = 3.30485 × 10^-4m is used for all the chosen values of ν_l. In the dimensional form, the amplification factors are given by,
G_1,2 (k,τ_s) = e^-ν_l k^2 τ_s/2 e^∓ i kL_s N_τ√(1- (k/k_c)^2)
The phase speeds are given in the dimensional form as,
c_ph 1,2 = ± c √(1- (k/k_c)^2)
The corresponding group velocity components are obtained from the real part of the following,
v_g 1,2 = ± c√(1- (k/k_c)^2)∓(k ν_l )^2/4c√(1- (k/k_c)^2) - iν_l k
It is readily apparent that the phase speed and the physically relevant group velocity (given strictly by the real part) are not functions of N_τ. It has been noted before, as in <cit.>, that the imaginary part of the group velocity gives rise to anti-diffusion, and thus has no relevance for physical systems which do not admit anti-diffusion. The expressions given in equations (<ref>), (<ref>) and (<ref>) help one plot the solution properties in figures <ref>, <ref>, <ref> and <ref> with ν_l held constant.
In figure <ref>, the modulus of the first amplification factor, |G_1| is shown for the four generalized kinematic viscosity (ν_l) cases from 0.001443 to 1.443, each increasing by a factor of ten. It is to be emphasized that one is considering the different magnitudes of the coefficient of viscosity, without even bringing into question the utility of Stokes' hypothesis. In all the frames, the line corresponding to k_cL_s are shown by a dotted (red) line. It is to be noted from equation (<ref>) that for k > k_c, the exponent in the second factor becomes purely real, thereby augmenting the first attenuating factor and the amplification factor shows the visible discontinuous jump across the k_c L_s-line. For the top frame in figure <ref>, this occurs for kL_s = 1.5857, and in the subsequent frames below, this value is reduced by a factor of ten. Thus, the k_c L_s-line demarcates the mathematical characteristics, where the attenuated wavy solution given by the hyperbolic PDE transforms to the diffusive solution given by the parabolic PDE. In figure <ref>, the amplitude of the second amplification factor is shown for the same generalized kinematic viscosity cases, which also displays the discontinuity across the k_c L_s-line. Above this line, a very interesting different behavior is noted for the two modes, due to the fact that the second exponent in equation (<ref>), apart from becoming real, also becomes of opposite sign. Even though the signal is created in a quiescent ambience, the property of the solution is anisotropic with respect to the spatial dimension. It has its root in the viscous term with mixed derivatives of order three. However, for k < k_c, the wavy solution is perfectly symmetric.
In figure <ref>, the phase speed of the first mode shows a non-trivial wavy solution for k < k_c, while for k > k_c, the governing equation becomes diffusive in nature with no variation of phase with time, as the imaginary part of the amplification factor is zero. As the phase speed of the second mode is identical in magnitude but opposite in sign, it is not shown. In figure <ref>, the corresponding group velocity is shown for the first mode, with the regions for different ν_l demarcated between the wavy and diffusive solutions. The second mode also displays similar features, with identical magnitude but opposite sign for the wavy solution. As k approaches the cut-off wavenumber, for a fixed frequency (or N_τ), the group velocity ceases to exist and becomes undefined.
§.§ Multi-modal behavior of the governing equation
If the imaginary part of the physical amplification factor is absent in equation (<ref>), then there will be no phase shift in the time interval of τ_s. Such a situation can arise for β_1 = m π, for all integral values of m including zero, i.e.
kcf τ_s = mπ for m = 1,2, .... ∞
For the general case, terming these wavenumbers as k_m, one can rewrite the above condition given by,
k_m √(1 - (k_m / k_c)^2) = mπ/cτ_s
where the cut-off wavenumber is defined by, k_c = 2c/ ν_l, above which ω_1,2 becomes strictly imaginary, and then G_1,2 (k, τ_s) will be strictly real, as is noted for parabolic PDEs. For k_m < k_c, the circular frequency and the physical amplification factor will be complex, and the spatio-temporal dynamics will display an attenuated wave nature, i.e. the governing equation is given by a hyperbolic PDE, with G_1,2 as complex conjugates.
With the help of the length-scale, L_s, and the non-dimensional time-scale, N_τ = cτ_s/L_s, the condition given in equation (<ref>) can be written alternately as,
N_τ k_m L_s √(1 - (k_m / k_c)^2) = mπ
One can plot the loci given by equation (<ref>) for different values of m in the (N_τ, kL_s)-plane.
Along these loci, the amplification factors simplify as given next,
G_1 (k,τ_s) = e^-ν_l k^2 τ_s/2 e^-imπ and G_2 (k,τ_s) = e^-ν_l k^2 τ_s/2 e^imπ
This clearly shows the amplification factors to be strictly real, so the solution will be diffusive, as shown in figure <ref> for different integral values of m. The loci of points in the (N_τ, kL_s)-plane for the subcritical wavenumbers are shown in the middle frame of this figure, following equation (<ref>). In figure <ref>, the top and bottom frames show the imaginary part of the amplification factors. One observes the multi-modal diffusive nature described in equation (<ref>): the solution is strictly diffusive for multiple modes corresponding to different integer values of m. However, the phase speed will only vanish for m = 0.
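Because squaring the condition k f = mπ/(cτ_s) yields a quadratic in k², the sub-critical roots k_m can be written in closed form; the short sketch below does this, with the value of τ_s assumed only so that real roots exist for m = 1.

import numpy as np

def k_m_roots(m, c, nu_l, tau_s):
    """Sub-critical wavenumbers k_m satisfying k*sqrt(1 - (k/k_c)^2) = m*pi/(c*tau_s).

    Squaring gives (k^2)^2/k_c^2 - k^2 + A^2 = 0 with A = m*pi/(c*tau_s), so each
    mode m admits up to two real roots below k_c, consistent with the two values
    of k*L_s noted in the group-velocity plots.
    """
    k_c = 2.0 * c / nu_l
    A = m * np.pi / (c * tau_s)
    disc = 1.0 - 4.0 * A**2 / k_c**2
    if disc < 0.0:
        return ()                  # no real sub-critical solution for this m
    k_sq = 0.5 * k_c**2 * np.array([1.0 - np.sqrt(disc), 1.0 + np.sqrt(disc)])
    return tuple(np.sqrt(k_sq))

# Illustrative values; tau_s = 1e-5 s is assumed so that the m = 1 roots are real.
print(k_m_roots(m=1, c=343.11, nu_l=0.1443, tau_s=1.0e-5))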
Along the loci of figure <ref> shown in the middle frame the phase shift corresponding to β= ± mπ will give rise to the phase speed given by, β =k c_1,2τ_s, i.e. c_1,2 = ± (mπ)/(kτ_s).
As the phase speed is depicted over the extended time-scale range in figure <ref>, the corresponding group velocity plots are shown in figure <ref>. Here also, one notices vanishing values of the group velocity corresponding to two values of kL_s in the figure.
§ SUMMARY AND CONCLUSIONS
The present work uses global spectral analysis in the theoretical framework of acoustic disturbance propagation in a quiescent ambience. The governing equation is obtained by splitting the field into mean and perturbation components and thereafter linearizing the compressible Navier-Stokes equation. The analysis helps one distinguish between wave-like propagation of the disturbance and the associated diffusion equation, depending upon the length scale for a given time scale. The existence of a cut-off wavenumber, k_c, that demarcates the wave equation from the diffusion equation is a novel feature of the research reported here. This exact quantification for a one-dimensional planar signal has similarity with the Kolmogorov length scale, which also postulates the conversion of kinetic energy (of the wave equation) to heat diffusing at very small scales. The dispersive nature of the governing equation raises an important question about the speed of sound for a dispersive phenomenon. The generalized kinematic viscosity, ν_l, is responsible for the observed dispersion, making the same governing equation represent wave motion at lower wavenumbers and diffusion at higher wavenumbers beyond the cut-off. We also demonstrate the existence of gaps in the plane constituted by the non-dimensional time scale (N_τ) and the non-dimensional wavenumber (kL_s) when the governing equation is viewed over an enlarged domain. A multi-modal behavior is noted over such large ranges of the independent variables when one considers fluids with very small values of ν_l. The current research suggests that careful measurement of the quantitative properties of acoustic wave propagation is essential, especially for ascertaining the roles of the phase speed and group velocity of the signal.
|
http://arxiv.org/abs/2307.02836v1
|
20230706080648
|
Noise-to-Norm Reconstruction for Industrial Anomaly Detection and Localization
|
[
"Shiqi Deng",
"Zhiyu Sun",
"Ruiyan Zhuang",
"Jun Gong"
] |
cs.CV
|
[
"cs.CV",
"eess.IV"
] |
Northeastern University, China; Midea
Noise-to-Norm Reconstruction for Industrial Anomaly Detection and Localization
Shiqi Deng^1,2, Zhiyu Sun^2, Ruiyan Zhuang^2 (co-corresponding author: [email protected]), Jun Gong^1 (co-corresponding author: [email protected])
August 1, 2023
==============================================================================================================================================
Anomaly detection has a wide range of applications and is especially important in industrial quality inspection. Currently, many top-performing anomaly-detection models rely on feature-embedding methods. However, these methods do not perform well on datasets with large variations in object locations. Reconstruction-based methods use reconstruction errors to detect anomalies without considering positional differences between samples. In this study, a reconstruction-based method using the noise-to-norm paradigm is proposed, which avoids the invariant reconstruction of anomalous regions. Our reconstruction network is based on M-net and incorporates multiscale fusion and residual attention modules to enable end-to-end anomaly detection and localization. Experiments demonstrate that the method is effective in reconstructing anomalous regions into normal patterns and achieving accurate anomaly detection and localization. On the MPDD and VisA datasets, our proposed method achieved more competitive results than the latest methods, and it set a new state-of-the-art standard on the MPDD dataset.
§ INTRODUCTION
Anomaly detection has a wide range of applications in fields such as industrial quality
inspection <cit.>, medical diagnosis <cit.>,
and video surveillance <cit.>. In industrial quality inspection,
anomaly detection can identify and locate defects in the appearance of a product, improve product quality, and ensure standards compliance. With the emergence of modern technologies in computer vision, anomaly detection using deep learning methods has rapidly developed as an effective solution for industrial quality inspections, addressing the challenges of low efficiency and difficulty in conducting large-scale manual inspections.
Supervised methods, which incur high data-annotation costs and adapt poorly to new defects, have seen limited use in recent years. Therefore, most studies are now focused on unsupervised learning methods. Among these, feature-embedding-based methods <cit.>
that utilize pretrained models to extract image features and realize the measurement or comparison of features by feature modeling are widely used. However, positional consistency of the detection objects is crucial for these methods. In contrast, reconstruction-based methods <cit.>
do not have this limitation and do not require additional training data, making them suitable for various scenarios.
Reconstruction-based methods exhibit better anomaly detection and localization performance for randomly placed objects. Unlike traditional image reconstruction methods, our proposed reconstruction model uses noisy images as input; the noise disrupts the abnormal areas and makes them difficult to distinguish from normal patterns, thereby avoiding the faithful reconstruction of abnormal regions that a model with strong reconstruction capability would otherwise produce. In addition, our proposed reconstruction model is based on M-net <cit.>
and employs a multiscale fusion structure. Before being fed into the reconstruction network, the noisy image is down sampled to varied sizes to enlarge the model’s receptive field, providing better robustness to anomalous regions of diverse sizes. The reconstruction network comprises three parts: an encoder, a decoder, and a feature fusion module; both the encoder and decoder contain residual attention modules and skip connections between them. The feature fusion module fuses the multiscale features to generate the reconstructed image.
Numerous experiments on the MPDD <cit.> and VisA <cit.> datasets have demonstrated that the proposed end-to-end anomaly-detection method has excellent performance. The main contributions of this study are summarized as follows:
1. We introduce a novel unsupervised anomaly detection method based on the noise-to-norm paradigm.
2. We propose a residual attention module that can be embedded in the encoder and decoder to achieve high-quality
reconstruction of noisy images.
3. Our method achieves state-of-the-art (SOTA) performance on the MPDD dataset.
§ RELATED WORK
Unsupervised learning addresses the high annotation costs and difficulty in collecting negative samples, making it the mainstream method for image anomaly detection. Unsupervised learning methods can be divided into two main categories: reconstruction- and feature embedding-based methods.
§.§ Feature Embedding-based Methods
Feature embedding-based methods aim to determine a feature distribution that can distinguish between normal and anomalous samples. Typically, these methods use a pre-trained network as a feature extractor to extract shallow features from images. By fitting normal sample features to a Gaussian distribution,
the Mahalanobis distance between test-set samples and the fitted distribution is commonly used to calculate anomaly scores <cit.> and to estimate anomaly localization.
Research in <cit.> employed a coreset-subsampled memory bank to ensure low inference cost while maintaining high performance.
Some studies attach a normalizing flow module to the feature extractor <cit.>: features are first extracted, and the normalizing flow module enables transformations between the data distribution and a well-defined density. Subsequently, anomaly detection and localization are performed based on the probability density of the feature map.
In general, feature embedding-based methods have achieved better results on the MVTec AD <cit.>
dataset than those of the reconstruction-based methods because of their powerful representation capability of deep features. However, they rely on the uniformity of an object's location, which makes optimization difficult for cases in which the object's position varies significantly.
§.§ Reconstruction-based Methods
The reconstruction-based method trains an encoder and a decoder to reconstruct images with a low dependence on pretrained models. This method aims to train a reconstruction model that works well on positive samples but poorly on anomalous regions and achieves anomaly detection and localization by comparing the original image with the reconstructed image. Early studies used Autoencoders <cit.>
for image reconstruction, whereas some methods employed a generative adversarial network <cit.>
to obtain better reconstruction performance. However, there is a problem of overgeneralization, which can lead to an accurate reconstruction of anomalous regions. To address this issue, some researchers proposed a method based on image inpainting <cit.>,
in which masks are used to remove parts of the original image, preventing the reconstruction of anomalous regions. However, for images with complex structures and irregular textures, excessive loss of the original information may limit the reconstruction ability and cause many false positives in normal regions.
§ METHOD
§.§ Overview
The proposed anomaly detection framework is based on the noise-to-norm paradigm, as shown in Fig. <ref>.
Specifically, we introduce random Gaussian noise to corrupt the original image,
and the process of adding noise ϵ is defined as follows:
x = (1-λ)x_0 + λϵ , ϵ∼N(0.5,0.5)
where λ∈(0,1), x_0 is the data obtained by normalizing each channel of the original image according to
a Gaussian distribution (μ=0.5, σ=0.5). We add random noise generated from the same Gaussian
distribution to the original image using weighted blending, thereby allowing us to control the
degree to which the noise corrupts the original image. In contrast to the methods that simulate anomalies <cit.>,
our approach of adding noise is not intended to simulate anomalies. Instead, its purpose is to completely obscure the distinguishable appearance of anomalous regions, allowing the reconstruction network to transform the anomalous image into a normal image.
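A minimal sketch of this noise-injection step is shown below, assuming per-channel normalization with mean 0.5 and standard deviation 0.5 and using λ = 0.3, the value selected by the ablation study; the function name and tensor shapes are illustrative.

import torch

def add_gaussian_mixture_noise(x0, lam=0.3):
    """Corrupt a normalized image by the weighted blending x = (1 - lam)*x0 + lam*eps.

    x0 is assumed to be normalized per channel (mean 0.5, std 0.5), and eps is
    drawn from the same N(0.5, 0.5) distribution.
    """
    eps = 0.5 + 0.5 * torch.randn_like(x0)   # N(mean=0.5, std=0.5)
    return (1.0 - lam) * x0 + lam * eps

# Example on a dummy batch of 3-channel 256x256 images
x0 = torch.rand(4, 3, 256, 256)
x_noisy = add_gaussian_mixture_noise(x0, lam=0.3)
# x_noisy would then be downsampled to several resolutions to form the
# multiscale inputs of the reconstruction network.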
After adding noise, the images were down sampled to varied sizes to serve as multiple inputs. These inputs were then utilized by the reconstruction network to generate anomaly free images. During the training phase, only anomaly free samples were used to train the reconstruction network. The reconstructed images were compared to the original images using a loss function, and the reconstruction capability of the model was continuously improved. During the inference phase, anomaly localization was achieved by generating an anomaly map that captured pixel-level differences between the reconstructed and original images. The specific details of the reconstructed network are described below.
§.§ Reconstruction Network
The overall architecture of the proposed reconstruction network is shown in Fig. 2. The network is based on the M-net <cit.>,
which originated from the field of image segmentation and has been proven to be effective in the domain of denoising. Inspired by the SRMnet <cit.>,
we incorporated pixel shuffle operations into the encoder and decoder for upsampling and downsampling; this allows us to effectively manage resolution changes in the network and improve the reconstruction quality. The residual attention modules were merged after concatenating the features to enhance the feature representation and capture the relevant information. The encoder and decoder were connected through skip connections to facilitate the flow of information between different feature levels. The multiscale features were combined in the feature fusion module to generate the final reconstructed image. This design enables the network to effectively capture anomalies and produce high-quality reconstructions.
§.§.§ Residual Attention Module
The Residual Attention Modules are integrated after the concatenation of features to enhance feature representation and capture relevant information. These modules leverage residual connections and attention mechanisms to selectively emphasize notable features and suppress irrelevant ones. By focusing on informative regions and enhancing feature discrimination, the residual attention modules improve the network’s ability to generate high-quality reconstructions. In addition, the residual connections address the issue of vanishing gradients. By propagating gradients more effectively through the network, the residual connections enable faster convergence and improve the accuracy of the model. The specific structure of the residual attention module is shown in Fig. <ref>.
It comprises global pooling, convolutional, and activation layers. In both pathways, a 1×1
convolutional layer is employed to adjust the number of feature channels.
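Since the excerpt does not fully specify the wiring of the block, the PyTorch sketch below should be read as one plausible realization: an SE-style channel-attention branch (global pooling, 1x1 convolutions, activation, sigmoid) gating a 1x1-projected feature map, with a 1x1-projected residual shortcut; the channel counts and reduction ratio are assumptions.

import torch
import torch.nn as nn

class ResidualAttentionModule(nn.Module):
    """Sketch of the residual attention block described in the text (assumed wiring)."""

    def __init__(self, in_channels, out_channels, reduction=4):
        super().__init__()
        self.project = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.shortcut = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                                  # global pooling
            nn.Conv2d(out_channels, out_channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels // reduction, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.project(x)
        gated = feat * self.attention(feat)     # channel-wise re-weighting
        return gated + self.shortcut(x)         # residual connection

# Example: fuse 128 concatenated channels down to 64
block = ResidualAttentionModule(128, 64)
y = block(torch.rand(2, 128, 64, 64))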
§.§.§ Selective Kernel Feature Fusion (SKFF)
Our decoder generates four feature maps with different resolutions, and we employed the SKFF <cit.>
module for feature fusion. The SKFF allows for the selection of different convolutional kernels at different spatial positions to facilitate the fusion of features from different scales, enabling the integration of multiscale reconstruction features. This approach avoids directly connecting each feature map and instead aggregates weighted features, addressing the issues of a large number of parameters and higher computational complexity in the M-net.
§.§ Metric Function
We employed a metric function that combines the MS-SSIM and ℓ_1 losses proposed by Hang Zhao et al. in <cit.>. SSIM <cit.> is a
widely used indicator for measuring the structural similarity between images. The SSIM for pixel p is
defined in Eq. <ref>.
SSIM(p) = 2μ_xμ_y+C_1/μ_x^2+μ_y^2+C_1·2σ_xy+C_2/σ_x^2+σ_y^2+C_2
= l(p)· cs(p)
where the means
and standard deviations are computed using a Gaussian filter G_σ_G with a standard deviation σ_G.
MS-SSIM uses different Gaussian filters (σ =0.5, 1, 2, 4, and 8) to compute the original image and is, defined as follows:
MSSSIM(p) = l_M^α(p) ·∏_j=1^M cs_j^β_j(p)
where l_M and cs_j are the terms defined in Eq. <ref>, and the index j represents different Gaussian
filters with different σ values, For convenience, we set α = β_j =1, for j = {1,…,M}.
During the training phase,
for an image of size H × W, the MS-SSIM loss can be expressed as follows:
ℒ_MSSSIM = 1/H × W∑_p 1-MSSSIM(p)
The total loss is calculated by adding the
ℓ_1 loss multiplied by the Gaussian filter and the weighted MS-SSIM loss. This formula is shown
as Eq. <ref>.
ℒ_total = α·ℒ_MSSSIM + (1-α) · G_σ_G^M·ℒ_l_1
where α represents the weight coefficient.
During the inference stage,
we calculated the anomaly localization by computing the MS-SSIM and ℓ_1 error for each pixel.
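A single-scale sketch of this loss and of the per-pixel anomaly map is given below; the Gaussian width, the SSIM constants, the weight α, and the way the two per-pixel errors are combined at inference are assumptions, and the full multi-scale product over σ = 0.5, 1, 2, 4, 8 is omitted for brevity.

import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, channels, device):
    """Depthwise 2D Gaussian filter used for the local (MS-)SSIM statistics."""
    radius = max(1, int(3 * sigma))
    coords = torch.arange(-radius, radius + 1, dtype=torch.float32, device=device)
    g = torch.exp(-coords**2 / (2 * sigma**2))
    g = g / g.sum()
    g2d = torch.outer(g, g)
    return g2d.unsqueeze(0).unsqueeze(0).repeat(channels, 1, 1, 1)

def ssim_map(x, y, sigma=1.5, c1=0.01**2, c2=0.03**2):
    """Per-pixel SSIM between reconstruction x and original y (single scale only)."""
    ch = x.shape[1]
    w = gaussian_kernel(sigma, ch, x.device)
    pad = w.shape[-1] // 2
    mu_x = F.conv2d(x, w, padding=pad, groups=ch)
    mu_y = F.conv2d(y, w, padding=pad, groups=ch)
    sxx = F.conv2d(x * x, w, padding=pad, groups=ch) - mu_x**2
    syy = F.conv2d(y * y, w, padding=pad, groups=ch) - mu_y**2
    sxy = F.conv2d(x * y, w, padding=pad, groups=ch) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sxy + c2)
    den = (mu_x**2 + mu_y**2 + c1) * (sxx + syy + c2)
    return num / den

def total_loss(recon, target, alpha=0.85, sigma=1.5):
    """Weighted SSIM plus Gaussian-filtered l1 loss, in the spirit of the total loss above."""
    ch = recon.shape[1]
    w = gaussian_kernel(sigma, ch, recon.device)
    pad = w.shape[-1] // 2
    l_ssim = (1.0 - ssim_map(recon, target, sigma)).mean()
    l1_map = F.conv2d((recon - target).abs(), w, padding=pad, groups=ch)
    return alpha * l_ssim + (1.0 - alpha) * l1_map.mean()

def anomaly_map(recon, target, sigma=1.5):
    """Per-pixel anomaly score at inference: SSIM dissimilarity plus l1 error (assumed sum)."""
    return (1.0 - ssim_map(recon, target, sigma)) + (recon - target).abs()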
§ EXPERIMENTS
§.§ Datasets
§.§.§ MPDD
MPDD <cit.> is a challenging dataset that focuses on detecting defects in the manufacturing process of painted metal parts. It reflects the real-world situations encountered by human workers on production lines. The dataset includes six categories of metal parts. The images were captured under various spatial orientations, positions, and distance conditions with different light intensities and non-uniform backgrounds. The training set consisted of 888 normal samples, whereas the test set consisted of 176 normal and 282 abnormal samples.
§.§.§ VisA
VisA <cit.> consists of 10,821 images. There are 9,621 normal and 1,200 abnormal images.
VisA contains 12 subsets, each corresponding to one class of objects.
We assigned 90% of the normal images to the training set, whereas
10% of the normal images and all anomalous samples were grouped as the test set.
§.§ Experimental Details
Our work was implemented in PyTorch with an NVIDIA GeForce GTX 2080Ti. We resized all original images of the VisA and MPDD datasets to 256 × 256
for both training and testing. We divided 20% of the training dataset
into a validation set. For each category of these two datasets, we utilized the AdamW optimizer <cit.>
with β = (0.5, 0.999). We set the initial learning rate to 10^-6 and used cosine annealing <cit.> to adjust the
learning rate with T_max=100 and eta_min=10^-6. The maximum number of training epochs was set to 500, and the
training was stopped early if the loss did not decrease within 20 consecutive epochs.
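The optimizer and schedule described above can be set up as in the sketch below; the placeholder model and the empty loop body are illustrative only.

import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Conv2d(3, 3, 3, padding=1)   # placeholder for the reconstruction network
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, betas=(0.5, 0.999))
scheduler = CosineAnnealingLR(optimizer, T_max=100, eta_min=1e-6)

best_loss, patience, bad_epochs = float("inf"), 20, 0
for epoch in range(500):
    epoch_loss = 0.0   # ... one training pass over anomaly-free samples goes here ...
    scheduler.step()
    if epoch_loss < best_loss:
        best_loss, bad_epochs = epoch_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break      # early stopping after 20 epochs without improvement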
We evaluated our approach using different metrics for comparison with other baselines. We used the area under the curve (AUC) of the receiver operating characteristic (ROC) to evaluate the performance of image-level anomaly detection and pixel-level anomaly localization.
§.§ Comparative Experiments
§.§.§ MPDD
We compared our proposed method with several SOTA methods on the MPDD dataset, including reconstruction-based methods <cit.>
and feature-based methods <cit.>.
The image-level detection results are listed in Table <ref>, and the anomaly
segmentation results are presented in Table <ref>. Experiments demonstrated that our proposed method outperformed
previous SOTA methods on the MPDD dataset.
The partial visualization results of the proposed method on the MPDD <cit.> dataset are shown in Fig. <ref>.
Specifically, as shown in Table <ref>, our proposed method achieved an overall improvement of 8.82%
compared to that of the previous best-performing method, CFLOW <cit.>.
The most significant improvement was observed in the tubes category, which contains multiple instances with randomly distributed positions. These results highlight the advantages of the proposed method.
As shown in Table <ref>, our method also achieved the best average performance.
However, the proposed method has some limitations. We were unable to achieve satisfactory performance in the brown bracket category. Most defects in the brown bracket category are deformation defects, and our method cannot accurately restore deformations, which hinders the accurate identification of such defects.
§.§.§ VisA
Further, to validate the generalizability and versatility of our method, we compared it with other SOTA methods <cit.>
on the VisA <cit.> dataset.
The anomaly detection results for the VisA dataset are listed in Table <ref>.
Experiments demonstrated that our proposed method performed competitively on the VisA dataset.
§.§ Ablation Studies
§.§.§ Effect of λ
In this study, we employed a noise-to-norm reconstruction paradigm. To validate the effectiveness of adding noise and the effect of the noise coefficient (λ)
on the detection results, we conducted comparative experiments. The results, as shown in Table <ref>,
demonstrate that the overall detection performance was best when λ=0.3. Compared to the case without added noise (λ=0),
the detection accuracy increased by 22.28%, and the segmentation accuracy increased by 9.55%. Therefore,
we finally set λ=0.3. These experimental results confirm the significant improvement in anomaly detection achieved using the noise-to-norm reconstruction approach.
§.§.§ Importance of Residual Attention Module
To demonstrate the effectiveness of the proposed residual attention module, we conducted an ablation experiment. In the control group, we replaced the residual attention module with a
1×1 convolutional layer, which was used to change the number of feature channels. The experimental results,
as listed in Table <ref>, indicate that adding the residual attention module improved the detection accuracy by 28.68% and the segmentation accuracy by 8.94%.
This demonstrates the significance of incorporating the residual attention module into the model.
§ CONCLUSION
In this study, an industrial image anomaly detection method based on noise-to-norm reconstruction is proposed. We enhanced the M-net by incorporating a residual attention module and feature fusion, obtaining a reconstruction network. Experimental results demonstrate that our method achieves SOTA performance in anomaly detection and localization on the MPDD dataset, and it also exhibits competitive performance on the VisA dataset. Our proposed method has significant advantages for handling data with multiple instances and varying object positions. However, the proposed method has limitations in detecting object absences or displacement anomalies. In future work, we will explore methods that combine the feature distribution of positive samples with a reconstruction approach to improve the anomaly detection performance of the model.
|
http://arxiv.org/abs/2307.03078v1
|
20230706154546
|
Primordial black holes from a curvaton scenario with strongly non-Gaussian perturbations
|
[
"Andrew D. Gow",
"Tays Miranda",
"Sami Nurmi"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"gr-qc",
"hep-ph"
] |
Andrew Gow^a, Tays Miranda^b,c, Sami Nurmi^c,b. ^a Institute of Cosmology & Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX, United Kingdom; ^b Helsinki Institute of Physics, P.O. Box 64, FIN-00014 University of Helsinki, Finland; ^c Department of Physics, P.O. Box 35 (YFL), FIN-40014 University of Jyväskylä, Finland
[email protected]@[email protected] investigate the production of primordial black holes (PBHs) in a mixed inflaton–curvaton scenario with a quadratic curvaton potential, assuming the curvaton is in de Sitter equilibrium during inflation with ⟨χ⟩ =0. In this setup, the curvature perturbation sourced by the curvaton is strongly non-Gaussian, containing no leading Gaussian term. We show that for m^2/H^2≳ 0.3, the curvaton contribution to the spectrum of primordial perturbations on CMB scales can be kept negligible but on small scales the curvaton can source PBHs. In particular, PBHs in the asteroid mass range 10^-16M_⊙≲ M≲ 10^-10M_⊙ with an abundance reaching f_ PBH = 1 can be produced when the inflationary Hubble scale H≳ 10^12 GeV and the curvaton decay occurs in the window from slightly before the electroweak transition to around the QCD transition.Primordial black holes from a curvaton scenario with strongly non-Gaussian perturbations
[
========================================================================================
§ INTRODUCTION
The study of primordial black holes (PBHs) formed via gravitational collapse of large primordial density fluctuations was initiated over 50 years ago <cit.>. Already in <cit.> it was proposed that PBHs could form a cold dark matter component in the universe. The possibility that PBHs of mass M≳ 10^-10M_⊙ would constitute all of the dark matter is already ruled out by constraints from lensing, dynamical effects, structure formation and gravitational waves <cit.>. In the asteroid mass window, 10^-16M_⊙≲ M≲ 10^-10M_⊙ the constraints are uncertain and it is possible that PBHs with masses in this range could constitute a dominant dark matter component <cit.>. Even a subdominant PBH contribution to dark matter can have characteristic observational imprints, such as gravitational waves from PBH mergers testable with LIGO–Virgo–KAGRA data <cit.>. Observational signals associated to PBHs are among the very few ways to probe small scale primordial perturbations and the process responsible for their generation.
Several mechanisms for producing PBHs have been investigated in the literature, including perturbations produced during inflation <cit.>, cosmic strings <cit.>, phase transitions <cit.>, dark matter clumps <cit.>, and bouncing cosmologies <cit.>. In this work, we focus on the curvaton scenario <cit.>. Several authors have already investigated PBH formation in curvaton models <cit.> and other spectator scenarios <cit.> where the curvature perturbation can acquire non-Gaussian contributions. The effect of this non-Gaussianity is typically treated perturbatively, using the parameters f_NL etc. <cit.>. Recently, a general non-perturbative formalism for investigating the PBH abundance in the presence of non-Gaussian perturbations has been developed e.g. in <cit.>. To our knowledge, the previous analyses of PBH production in the curvaton scenario have however focused on the case where the curvaton field has a non-vanishing mean value during inflation, |⟨χ⟩| ≫ H, and the curvature perturbation contains a leading Gaussian part proportional to δχ/⟨χ⟩≪ 1. The non-Gaussianities can then be accommodated using a truncated expansion around the Gaussian part, although see <cit.> for a discussion of limitations of this method when ⟨χ⟩≫ H.
In this work, we will study PBH formation in the curvaton setup with a quadratic potential, assuming ⟨χ⟩ = 0. This is the equilibrium configuration of a light spectator during de Sitter inflation <cit.>, provided the potential is minimized at χ=0. Even if one starts from an initial configuration with a non-zero ⟨χ⟩, the distribution is rapidly driven towards the equilibrium during de Sitter inflation <cit.>[When the inflationary solution deviates from de Sitter, the relaxation towards the de Sitter equilibrium may happen at a slower rate, or there may also be parameter combinations for which the equilibrium is not reached <cit.>.]. For ⟨χ⟩ =0, perturbations of the curvaton energy density obey a Gaussian squared distribution and are large, ⟨δρ_χ^2⟩/⟨ρ_χ⟩^2 = 2 <cit.>. Consequently, the curvature perturbation component sourced by the curvaton has no leading Gaussian part and it can not be expanded in small perturbations. Therefore, one needs to consider the full non-linear and non-Gaussian solution for the curvature perturbation when investigating the PBH formation. The curvature perturbation ζ on CMB anisotropy scales is Gaussian to a high precision <cit.> and contributions sourced by the Gaussian squared δρ_χ/⟨ρ_χ⟩ must be strongly suppressed on these scales. If the curvaton spectrum is sufficiently blue-tilted, it can however give a dominant contribution to ζ on small scales k≫ Mpc^-1 where there are no constraints on non-Gaussianity. We focus on such a setup, investigating a mixed inflaton–curvaton scenario with a strongly blue tilted curvaton component which dominates the curvature perturbation on small scales and sources a fully non-Gaussian ζ with no leading Gaussian component. On the CMB scales, we require that the spectrum of ζ is dominated by a Gaussian inflaton component and contributions from the curvaton are suppressed to the 10^-11 level at the CMB pivot scale k_* =0.05 Mpc^-1. We note that PBHs in a partially related phenomenological setup with a Gaussian blue tilted spectator curvature perturbation component was investigated in <cit.>.
We use the δ N approach <cit.> to obtain a non-linear solution for the superhorizon scale curvature perturbation. We expand the curvaton field χ( x) in spherical harmonics and, following <cit.>, truncate the expansion to the monopole when considering fluctuations relevant for PBHs. Within this approximation, we compute the probability distribution for C_l(r) which determines the compaction function C(C_l(r)). The compaction function C(r) equals the comoving gauge density contrast smoothed over a radius r. Using C_ c =0.55 <cit.> as the PBH collapse threshold and modeling the PBH mass with the collapse parameters obtained in <cit.> for the power law spectrum, we explore the fraction of dark matter in PBHs f_ PBH and the mass distribution f(M) as functions of the curvaton model parameters. We show that the curvaton scenario with the quadratic potential and the equilibrium configuration ⟨χ⟩ = 0 can lead to very efficient PBH production. In particular, we find that the scenario can produce asteroid mass PBHs, 10^-16M_⊙≲ M≲ 10^-10M_⊙, with f_ PBH = 1 when the inflationary Hubble scale H≳ 10^12 GeV, the curvaton mass m^2/H^2 ≳ 0.3 and the curvaton decay occurs in the window from slightly before the electroweak transition to around the QCD transition.
The paper is organised as follows. In Sec. <ref> we present the setup and in Sec. <ref> we compute the probability distribution for the compaction function. In Sec. <ref> we collect the expressions for the PBH abundance f_ PBH and the mass distribution f(M). In Sec. <ref> we present our main results and summarise the discussion in Sec. <ref>.
§ THE MIXED INFLATON–CURVATON SETUP
We investigate the PBH abundance in a mixed inflaton–curvaton scenario where the inflaton generates the Gaussian, nearly scale invariant curvature perturbations on CMB scales and curvaton-sourced perturbations dominate on small scales relevant for PBH formation. We assume the quadratic curvaton potential
V(χ)=1/2m^2 χ^2 .
We further assume that the curvaton distribution during inflation follows the de Sitter equilibrium result with a vanishing mean value ⟨χ⟩ = 0 <cit.>. For the quadratic potential, the curvaton χ is a Gaussian field and for m^2/H^2 < 3/2 its spectrum at the end of inflation on superhorizon scales is given by the standard power-law expression <cit.> P_χ(k) = (H/2π)^2 (k/k_ end)^(3-2ν) 2^(2ν-1) Γ(ν)^2/π , ν = √(9/4 - m^2/H^2) .
We denote the Hubble scale at the end of inflation by H≡ H(t_ end), and k_ end = a(t_ end) H(t_ end) is the mode exiting the horizon at the end of inflation. Perturbations of the curvaton energy density
δρ_χ( x)/⟨ρ_χ⟩ = (ρ_χ( x) - ⟨ρ_χ⟩)/⟨ρ_χ⟩ = χ^2( x)/⟨χ^2⟩ - 1,
obey Gaussian squared statistics, and the perturbations are large since ⟨δρ_χ^2( x)⟩/⟨ρ_χ⟩^2 = 2. Consequently, any contribution to the curvature perturbation sourced by δρ_χ/⟨ρ_χ⟩ must be suppressed on the large scales k≲ Mpc^-1 probed by the CMB and LSS data <cit.>.
Using the δ N formalism <cit.>, the superhorizon curvature perturbation ζ in the mixed scenario with inflaton sourced perturbations in the radiation component and the perturbed curvaton component obeys the non-linear equation
<cit.>
e^(4ζ) - Ω_χ e^(3ζ_χ) e^(ζ) + (Ω_χ-1) e^(4ζ_ r) = 0 .
The individual curvature perturbations of the radiation ζ_ r and curvaton ζ_χ fluids, and Ω_χ are given in terms of spatially flat gauge quantities by the expressions,
ζ_ r( x) = (1/4) ln(ρ_ r( x)/⟨ρ_ r⟩) , ζ_χ( x) = (1/3) ln(ρ_χ( x)/⟨ρ_χ⟩) , Ω_χ = ⟨ρ_χ⟩/(⟨ρ_ r⟩+⟨ρ_χ⟩) .
Both ζ_ r and ζ_χ are separately conserved, i.e. constant in time. For the quadratic potential Eq. (<ref>), the curvaton component ζ_χ can be written in terms of the field χ as
ζ_χ = (1/3) ln(χ^2/⟨χ^2⟩) .
Here and in the rest of the text we use χ≡χ_ end to denote the curvaton field at the end of inflation.
The solution for the fourth order algebraic equation (<ref>) can be written as <cit.>
ζ = ζ_ r + ln( K^(1/2) ((4-Ω_χ)/12)^(1/3) [ 1 + ( 3 Ω_χ K^(-3/2) e^(3(ζ_χ-ζ_ r))/(4-Ω_χ) - 1 )^(1/2) ] ) ,
K = (1/2) [ P^(1/3) + ((4Ω_χ-4)/(4-Ω_χ)) (12/(4-Ω_χ))^(1/3) P^(-1/3) ] ,
P = ( 3 Ω_χ e^(3(ζ_χ-ζ_ r))/(4-Ω_χ) )^2 + [ ( 3 Ω_χ e^(3(ζ_χ-ζ_ r))/(4-Ω_χ) )^4 + (12/(4-Ω_χ)) ((4-4Ω_χ)/(4-Ω_χ))^3 ]^(1/2) .
Here the only time dependent quantity is the curvaton density parameter Ω_χ. Assuming the curvaton is subdominant at the onset of oscillations, which we define as H(t_ osc) ≡ m, the density parameter for t≫ t_ osc can be written as
Ω_χ = Ω_χ, osc/(Ω_χ, osc + a_ osc/a) ≈ 0.136⟨χ^2⟩/(0.136⟨χ^2⟩ + (a_ osc/a) M_ P^2) .
Here we used that ρ_χ, osc≈ (1/2)m^2 0.816 χ^2 which follows from solving the curvaton equation of motion in a radiation dominated universe. We approximate the curvaton decay as an instant process at H(t_ dec) = Γ. Within this approximation ζ≡ζ(t > t_ dec) = ζ(t_ dec) is obtained by evaluating Eq. (<ref>) at the moment t_ dec.
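Since Eq. (<ref>) is a quartic in e^ζ, the closed-form solution above can also be cross-checked numerically with a root finder. A minimal sketch in Python (our own illustration, not part of the original analysis; for 0 ≤ Ω_χ ≤ 1 the quartic has a unique positive real root in the variable x = e^(ζ-ζ_r)):

import numpy as np

def zeta_nonlinear(zeta_chi, zeta_r, omega_chi):
    # Solve e^{4 zeta} - Omega_chi e^{3 zeta_chi + zeta} + (Omega_chi - 1) e^{4 zeta_r} = 0
    # for zeta, working with x = e^{zeta - zeta_r} so that the coefficients stay O(1).
    a = omega_chi * np.exp(3.0 * (zeta_chi - zeta_r))
    roots = np.roots([1.0, 0.0, 0.0, -a, omega_chi - 1.0])   # x^4 - a x + (Omega_chi - 1) = 0
    real = roots[np.abs(roots.imag) < 1e-10].real
    x = real[real > 0.0]                                      # unique positive root for Omega_chi <= 1
    return zeta_r + np.log(x[np.argmin(np.abs(x - 1.0))])     # branch with zeta -> zeta_r as Omega_chi -> 0

Expanding this solution to first order in ζ_χ and ζ_r reproduces the linear result ζ = 4ζ_r(1-Ω_χ)/(4-Ω_χ) + 3ζ_χΩ_χ/(4-Ω_χ) quoted for comparison in Appendix <ref>.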
We assume the perturbation of the radiation component ζ_ r is entirely sourced by the inflaton and uncorrelated with the curvaton, ⟨ζ_ rζ_χ⟩ = 0. We further assume ζ_ r obeys Gaussian statistics with the power law spectrum
P_ζ_ r(k)=A_ r(k/k_*)^n_ r-1 ,
where k_* =0.05 Mpc^-1. As explained above, we want to realise a scenario where the Gaussian inflaton sourced perturbations ζ_ r dominate the two point function of ζ on large scales and generate the observed CMB spectrum with P_ζ(k_*)=A_ s=2.10 × 10^-9, n_ s= 0.965 <cit.>. Correspondingly, contributions from ζ_χ to the spectrum of ζ must be sufficiently suppressed. To this end, we require
P_ζ_χ(k_*) < 2 × 10^-11 ,
which, as we show in Appendix <ref>, allows us to obtain the observed CMB spectrum with A_ r∼ 10^-9 and |n_ r-1|∼ 0.01, assuming Ω_χ≲ 0.9. Note that the underlying computation is somewhat non-trivial as ζ is a strongly non-linear function of the non-Gaussian ζ_χ. The condition (<ref>) constrains the curvaton mass from below as we show in detail in Appendix <ref>, and implies a strongly blue tilted spectrum both for the curvaton field Eq. (<ref>) and for ζ_χ. On small scales, k≫ Mpc^-1, the connected correlators of ζ are dominated by ζ_χ which, as we show below, can lead to efficient formation of PBHs. For the maximal inflationary Hubble scale H ≈ 5.2 × 10^13 GeV consistent with the observational bound on the tensor-to-scalar ratio r_ T < 0.044 <cit.>, Eq. (<ref>) implies m^2/H^2 ≳ 0.29, assuming instant transition from inflation to radiation domination. The lower bound on m^2/H^2 slowly grows as a function of decreasing H. For example, for H = 1.6× 10^11 GeV, Eq. (<ref>) implies m^2/H^2 ≳ 0.31, see Fig. <ref> in Appendix <ref>.
§ PROBABILITY DISTRIBUTION OF DENSITY FLUCTUATIONS
The central quantity in determining if an overdense region collapses into a primordial black hole is the compaction function C(r) <cit.> which for spherical overdensities equals the comoving density contrast coarse-grained with a top hat window function over a spherical volume of comoving radius r. Denoting the background equation of state by w =⟨ p⟩ /⟨ρ⟩, the expression for C(r) can be written as <cit.>
C(r) = C_l(r) - C_l(r)^2/(2f(w)) ,
where
f(w) = 6(1+w)/(5+3w) ,
and C_l(r) is determined by the curvature perturbation ζ as
C_l(r) = - f(w) rζ'(r) .
Here the prime denotes a derivative with respect to r.
Around the high density peaks relevant for the PBH formation, spherical symmetry can be expected to be a reasonable first approximation. We implement the approximation following <cit.> by expanding the Gaussian field χ in spherical harmonics
χ(x) =∫k/(2π)^3χ_ k4π∑_l,mi^lj_l(kx)Y_lm(x̂)Y_lm^⋆(k̂) ,
and retaining only the leading monopole term of the expansion
χ(r) =∫ k/(2π)^3j_0(kr)χ_ k .
Here j_0(z)=sin(z)/z. Since Eq. (<ref>) is a linear map from χ(x), the field χ(r) and its derivative χ'(r) are Gaussian fields with the joint probability distribution given by
P_χχ'(χ,χ') = 1/(2π√(|Σ|)) exp(-(1/2) X^T Σ^-1 X) , X^T = (χ,χ') , Σ = [ σ^2_χχ σ^2_χχ' ; σ^2_χχ' σ^2_χ'χ' ] .
The components of the covariance matrix depend on r and can be written as
σ^2_χχ(r) = ⟨χ(r)χ(r)⟩ = ∫ d ln k  j_0^2(kr) P(k) ,
σ^2_χχ'(r) = ⟨χ(r)χ'(r)⟩ = ∫ d ln k  j_0'(kr) j_0(kr) P(k) ,
σ^2_χ'χ'(r) = ⟨χ'(r)χ'(r)⟩ = ∫ d ln k  (j_0'(kr))^2 P(k) ,
where a prime denotes a derivative with respect to r and P(k) is the power spectrum of the full curvaton field χ( x): ⟨χ_ kχ_ k'⟩ = (2π)^3 δ( k+ k') (2π^2/k^3) P(k) .
In our case P(k) is given by Eq. (<ref>).
We denote the variance of the full field χ( x) by σ^2,
σ^2 ≡ ⟨χ^2( x)⟩ = ∫ d ln k  P(k) .
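As a concrete illustration, the covariance entries and σ^2 are one-dimensional integrals over ln k of the spectrum in Eq. (<ref>) and can be evaluated with a simple quadrature. The sketch below (Python; the infrared cutoff of 25 decades below k_end and the grid size are illustrative choices, and H, k_end and r must be supplied in consistent units):

import numpy as np
from math import gamma, pi, sqrt

def P_chi(k, k_end, H, m2):                         # Eq. (<ref>), m2 = m^2/H^2 < 9/4
    nu = sqrt(9.0 / 4.0 - m2)
    return (H / (2.0 * pi))**2 * (k / k_end)**(3.0 - 2.0 * nu) * 2.0**(2.0 * nu - 1.0) * gamma(nu)**2 / pi

def _int(y, lnk):                                   # trapezoidal rule in ln k
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lnk)))

def covariances(r, k_end, H, m2, decades=25, n=4000):
    lnk = np.linspace(np.log(k_end) - decades * np.log(10.0), np.log(k_end), n)
    k = np.exp(lnk)
    P = P_chi(k, k_end, H, m2)
    x = k * r
    j0 = np.sinc(x / np.pi)                         # j_0(x) = sin(x)/x
    dj0 = k * (np.cos(x) - j0) / np.where(x == 0.0, 1.0, x)   # d/dr of j_0(kr)
    return (_int(j0**2 * P, lnk),                   # sigma^2_{chi chi}(r)
            _int(j0 * dj0 * P, lnk),                # sigma^2_{chi chi'}(r)
            _int(dj0**2 * P, lnk),                  # sigma^2_{chi' chi'}(r)
            _int(P, lnk))                           # sigma^2 of the full field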
We proceed to substitute χ( x) →χ(r) in Eqs. (<ref>) and (<ref>) in places where the field χ( x) appears with no brackets, but evaluate the background quantities that depend on ⟨χ^2 ⟩ in Eq. (<ref>) using the variance of the full field (<ref>). Using ⟨χ(r)^2⟩ < ⟨χ( x)^2 ⟩ would give a higher probability for larger ζ_χ values and hence enhance the PBH abundance but it is hard to quantify to what extent this is a spurious effect of the monopole truncation. We will therefore conservatively use ⟨χ( x)^2 ⟩ in the background quantities.
In this setup, Eq. (<ref>) takes the form
C_l(r) = - r f(w)χ'(r)∂_χζ(χ(r)) ,
where ∂_χζ≡∂ζ/∂χ is obtained by differentiating Eq. (<ref>). The probability distribution of C_l is given by
P_C_l(C_l,r) = ∫∫ dχ dχ' P_χχ'(χ,χ') δ[C_l + r f(w) χ'(r) ∂_χζ(χ(r))] .
Carrying out the integral over χ', we obtain
P_C_l(C_l,r) = 1/(2π f(w) r |Σ(r)|^(1/2)) ∫ dχ/|∂_χζ| exp[ -1/(2|Σ(r)|) ( σ^2_χ'χ'(r) χ^2 + 2σ^2_χχ'(r) χ C_l/(f(w) r ∂_χζ) + σ^2_χχ(r) C_l^2/(f(w) r ∂_χζ)^2 ) ] ,
where the remaining integral needs to be computed numerically.
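The remaining integral lends itself to a simple quadrature once ∂_χζ is tabulated, for instance by finite-differencing the non-linear solution ζ(χ). A schematic implementation (Python; the ±8σ integration range and the floor on |∂_χζ| near χ = 0 are our regularisation choices, and dzeta_dchi must be a user-supplied function vectorised over χ):

import numpy as np

def P_Cl(Cl, r, f_w, s_cc, s_cp, s_pp, dzeta_dchi, n=4001):
    # s_cc, s_cp, s_pp are sigma^2_{chi chi}(r), sigma^2_{chi chi'}(r), sigma^2_{chi' chi'}(r);
    # dzeta_dchi(chi) returns d zeta / d chi for the non-linear solution at the epoch of interest.
    det = s_cc * s_pp - s_cp**2                        # |Sigma(r)|
    chi = np.linspace(-8.0, 8.0, n) * np.sqrt(s_cc)
    dz = dzeta_dchi(chi)
    dz = np.where(np.abs(dz) < 1e-12, 1e-12, dz)       # crude floor; the integrand vanishes near chi = 0 anyway
    u = Cl / (f_w * r * dz)
    expo = -(s_pp * chi**2 + 2.0 * s_cp * chi * u + s_cc * u**2) / (2.0 * det)
    y = np.exp(expo) / np.abs(dz)
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(chi))
    return integral / (2.0 * np.pi * f_w * r * np.sqrt(det))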
§ EXPRESSION FOR THE PBH ABUNDANCE
The mass of a PBH formed by a collapsing region with compaction C can be approximated by
M =K M_ H(C-C_ c)^γ ,
obtained by fitting to numerical simulations <cit.>. Here M_ H = 4π/3 H^-3ρ is the mass within a Hubble volume at the collapse time, γ depends on the equation of state, and K and the collapse threshold C_ c depend on both the equation of state and the shape of the collapsing overdensity. In a radiation dominated universe γ≈ 0.36. For monochromatic PBHs formed during radiation domination from a Gaussian ζ with the spectrum P_ζ(k)∝δ (k-k_*), K= O(1) and C_ c≈ 0.59, and the overdensity peaks at the comoving scale r_*≈ 2.74/k_* <cit.>. For a Gaussian ζ with a nearly scale invariant power-law spectrum and radiation domination, K≈ 4 and C_ c≈ 0.55, and the overdensity generated by a mode k_* peaks at r_* = 4.49/k_* <cit.>.
There are no existing numerical collapse simulations corresponding to our case with a fully non-Gaussian ζ on small scales. We adopt a phenomenological approach and set the collapse parameters equal to the Gaussian power-law results in radiation domination, γ = 0.36, C_ c = 0.55, and K=4. Therefore, we will compute the PBH mass using
M(C,r) = 4 M_ H(r)(C -0.55)^0.36 .
We evaluate M_ H(r) at the horizon entry of the smoothing scale a(t_ r) H(t_ r) r = 1,
M_ H(r) = (4π/3) H(t_ r)^-3 ρ(t_ r) |_(a(t_ r) H(t_ r) r = 1) .
In our setup, the curvaton contribution to the energy density will in general not be negligible at t_ r and the universe is therefore not fully radiation dominated. However, decreasing the pressure decreases C_ c and using the threshold C_ c for radiation domination we should be estimating the PBH abundance from below. In any case, our results should be regarded as order of magnitude estimates both due to the use of Eq. (<ref>) and due to the monopole truncation Eq. (<ref>).
The probability that a spherical overdensity coarse grained over the comoving radius r collapses into a PBH upon horizon entry equals the probability that C exceeds the threshold C_ c. The contribution of the PBHs to the total energy density at the collapse time can then be written as
β(r) ≡ ρ_M(r)/ρ |_(t_ r) = ∫_(C_ c)^(f(w)/2) dC  (M(C,r)/M_ H(r)) P_C(C,r)
= ∫_(C_ l,c)^(f(w)) dC_l  K (C_l(r) - C_l(r)^2/(2f(w)) - C_ c)^γ P_C_l(C_l,r) ,
where C_ l,c = f(w)(1-√(1-2 C_ c/f(w))), we used dC P_C(C,r) = dC_l P_C_l(C_l,r), and P_C_l(C_l,r) is given by Eq. (<ref>). The upper limit C = f(w)/2 is the largest fluctuation amplitude that forms a type I overdensity for which the areal radius is a monotonic function of the coordinate r <cit.>.
The fraction of the present day dark matter energy density constituted by the PBHs reads
f_ PBH(r) ≡ ρ_M(r)/ρ_ DM |_(t_0) = (Ω_ m,0/Ω_ DM,0) (1/(k_ eq r)) β(r) ,
where k_ eq = (a H)_ eq at the matter radiation equality, and we have omitted the O(1) factor (g_*(t_ r)/g_*(t_ eq))^-1/6 from the effective number of relativistic degrees of freedom.
This can be recast as
f_ PBH(r) = ∫ d ln M f(M) ,
with the mass distribution function f(M) given by
f(M) = (Ω_ m,0/Ω_ DM,0) (1/(k_ eq r)) (C_l - C_l^2/(2f(w)) - C_ c)^(γ+1) / [γ (1 - C_l/f(w))] P_C_l(C_l,r) ,
where C_l ≡ C_l(M) = f(w)(1-√(1-(2/f(w))(C_ c+(M/(KM_ H))^(1/γ)))), as obtained from Eq. (<ref>).
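Numerically, f_ PBH(r) then follows from one further one-dimensional integral over C_l. A sketch of the abundance integral (Python; P_Cl is the distribution of C_l passed in as a callable, for instance the function sketched in the previous section with its remaining arguments fixed, and omega_ratio = Ω_m,0/Ω_DM,0 and k_eq must be given in units consistent with r):

import numpy as np

def f_pbh(r, f_w, P_Cl, k_eq, omega_ratio, K=4.0, C_c=0.55, gamma_c=0.36, n=400):
    # Eqs. (<ref>)-(<ref>): integrate K (C - C_c)^gamma P_{C_l} from the collapse
    # threshold C_{l,c} up to the type-I boundary C_l = f(w).
    Cl_c = f_w * (1.0 - np.sqrt(1.0 - 2.0 * C_c / f_w))
    Cl = np.linspace(Cl_c, f_w, n)
    C = Cl - Cl**2 / (2.0 * f_w)                        # compaction function
    y = K * np.maximum(C - C_c, 0.0)**gamma_c * np.array([P_Cl(c, r) for c in Cl])
    beta = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(Cl))
    return omega_ratio * beta / (k_eq * r)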
§ RESULTS
It is straightforward to compute the variances Eq. (<ref>) using Eq. (<ref>) and numerically perform the integral in Eq. (<ref>) to find the probability distribution P_C_l(C_l,r). Using Eqs. (<ref>) and (<ref>) we then get f_ PBH(r) as a function of the coarse-graining scale r.
Figure <ref> illustrates the typical shape of the function f_ PBH(r) and the curvaton density parameter Ω_χ at the horizon crossing of r, which enters in the computation via Eq. (<ref>). The abundance f_ PBH(r) has a clearly peaked structure although the curvaton spectrum Eq. (<ref>) is of pure power law form. This is a generic feature in the setup and it arises from an interplay of two opposite effects. First, increasing the coarse-graining scale r makes the variance σ^2_χχ(r) smaller, see Eq. (<ref>), and therefore suppresses the probability of large χ(r) values that can source PBHs. Second, the curvature perturbation ζ and Ω_χ keep growing in time until the curvaton decay at H(t_ dec) = Γ. For t_ r < (t_ dec), increasing r corresponds to later horizon crossing times t_ r, making ζ(t_ r) larger and enhancing the probability for PBH formation. This effect dominates to the left of the f_ PBH(r) peak in Fig. <ref>, and the peak corresponds to t_ r=t_ dec. To the right of the peak, ζ stays constant as t_ r>t_ dec. In this region, increasing r only acts to decrease σ^2_χχ(r) and therefore f_ PBH(r) starts to decrease. In the following, we will choose the coarse-graining scale r equal to the peak scale by setting r=r_ dec≡ 1/(a(t_ dec) H(t_ dec)), and define f_ PBH = f_ PBH(r_ dec).
Figures <ref> and <ref> illustrate the behaviour of f_ PBH as a function of the decay rate Γ, the inflationary Hubble scale H, and the curvaton mass parameter m^2/H^2. Varying the decay rate Γ alters the curvaton density parameter Ω_χ at t_ r = t_ dec. As seen in Fig. <ref>, f_ PBH is a strongly dependent function of Ω_χ. Increasing Γ moves the decay to earlier times and decreases Ω_χ for fixed H,m^2/H^2 values. This explains the strong decrease of f_ PBH as a function of Γ in Figs. <ref> and <ref>. The figures further show that, for a fixed Γ, increasing m^2/H^2 or decreasing H causes f_ PBH to decrease. The former is because larger m^2/H^2 makes the curvaton spectrum Eq. (<ref>) more blue-tilted, decreasing the variance σ_χχ^2(r), Eq. (<ref>), and therefore suppressing the probability for large χ(r) values. The latter is because the mean curvaton energy at the end of inflation ⟨ρ_χ⟩∝ m^2 ⟨χ^2⟩∝ H^4 <cit.>, and decreasing H therefore decreases Ω_χ for fixed Γ and m^2/H^2 values.
The observational constraints on the PBH abundance f_ PBH depend on the PBH mass which, setting r=r_ dec in Eqs. (<ref>) and (<ref>), in our setup is parametrically proportional to
M_ H = 4 π M_ P^2 Γ^-1≈ 1.3× 10^-15 M_⊙(Γ/5 × 10^-14 GeV)^-1 .
Black hole evaporation sets very strong constraints for PBHs with M≲ 10^-16 M_⊙. Constraints for M≳ 10^-10 M_⊙ PBHs from various different systems range downwards from f_ PBH = O(10^-2) depending on the mass <cit.>. In the asteroid mass window 10^-16 M_⊙≲ M ≲ 10^-10 M_⊙, the constraints are subject to significant uncertainties and it can be possible to have f_ PBH = 1 in this window <cit.>. Interestingly, asteroid mass PBHs can be efficiently produced in the curvaton scenario. This is demonstrated in Fig. <ref> which shows the PBH mass spectra f(M) computed using Eq. (<ref>) for Γ = 5.5× 10^-18 GeV, Γ = 6.5× 10^-16 GeV and Γ = 5.5× 10^-14 GeV, with m^2/H^2=0.31, and H chosen in each case such that f_ PBH≈ 0.10. The corresponding curvaton decay temperatures are T_ dec∼ 2 GeV for Γ = 5.5× 10^-18 GeV, T_ dec∼ 20 GeV for Γ = 6.5× 10^-16 GeV, and T_ dec∼ 200 GeV for Γ = 5.5× 10^-14 GeV, approximating the curvaton decay into radiation and the thermalisation of the decay products as instant processes, and using the Standard Model g_*(T). The respective mass spectra in Fig. <ref> are peaked at M∼ 10^-11 M_⊙, M∼ 10^-13 M_⊙ and M∼ 10^-15 M_⊙. In all three cases, the PBH abundance can be increased by slightly increasing H, see Fig. <ref>. For example, f_ PBH≈ 1.0 is obtained with H≈1.85× 10^12 GeV, H≈ 4.25× 10^12 GeV, and H≈ 9.19× 10^12 GeV for Γ = 5.5× 10^-18 GeV, Γ = 6.5× 10^-16 GeV, and Γ = 5.5× 10^-14 GeV, respectively.
Figures <ref>, <ref> and <ref> represent the main results of this work. In particular, they show that the curvaton scenario can generate a significant dark matter fraction consisting of asteroid mass scale PBHs when the inflationary Hubble scale is large enough H≳ 10^12 GeV, the curvaton mass parameter m^2/H^2 ≳ 0.3 and the decay rate falls in the window 10^-18 GeV≲Γ≲ 10^-13 GeV, corresponding to decay temperatures from slightly above the electroweak transition to around the QCD transition scale.
If the curvaton decay occurs earlier, 10^-13 GeV≲Γ≲ 10^-8 GeV, the scenario generates PBHs with M≲ 10^-16 M_⊙ and, according to the dependencies shown in Figs. <ref> and <ref>, the observational constraints on f_ PBH constrain the viable parameter space for H from above and for m^2/H^2 from below. For Γ≳ 10^-8 GeV there are no constraints, as the PBH abundance is exponentially suppressed for any H ≲ 5.2 × 10^13 GeV compatible with the observational bound on the tensor-to-scalar ratio r_ T < 0.044 <cit.>, and m^2/H^2 in the range compatible with Eq. (<ref>)[More precisely, for the maximal Hubble scale H = 5.2 × 10^13 GeV, Eq. (<ref>) implies m^2/H^2 ⩾ 0.29, see Fig. <ref> in Appendix <ref>. The maximal PBH abundance is obtained for m^2/H^2 = 0.29 and we find f_ PBH < 10^-10 for Γ > 7.45 × 10^-9 GeV.]. For Γ≲ 10^-18 GeV, the scenario generates PBHs with M≳ 10^-10 M_⊙ but in this region our numerical integration of Eq. (<ref>) starts to become inaccurate for configurations leading to f_ PBH≳ 0.01. We are therefore not able to perform a detailed study of the Γ≲ 10^-18 GeV region in this work.
Finally, Fig. <ref> depicts the probability distribution P_C_l(C_l,r_ dec) given by Eq. (<ref>) for the same choice of parameters as in Fig. <ref>, and in the third panel of Fig. <ref>. For comparison, we also show a Gaussian distribution with the variance equal to the variance computed from the full distribution for this set of parameters, ∫ C_l C_l^2 P_C_l(C_l,r_ dec)≈ 5.1× 10^-4. The full distribution deviates significantly from the Gaussian case and decays much slower as a function of C_l. The slowly decaying tail of P_C_l(C_l,r_ dec) is essential for the PBH formation in our setup. We recall, that the fully non-Gaussian form of P_C_l(C_l,r_ dec) follows from the vanishing mean of the curvaton field ⟨χ⟩ =0 which in turn is the equilibrium configuration during inflation for the χ^2 potential. The curvature perturbation component ζ_χ∝ lnρ_χ/⟨ρ_χ⟩ = lnχ^2/⟨χ^2⟩ has no leading Gaussian term and there is no suppression for fluctuations χ^2/⟨χ^2⟩∼ 1. Together with the non-linear form of Eq. (<ref>), this gives rise to the fully non-Gaussian distribution of C_l seen in Fig. <ref>.
§ SUMMARY
In this work we have investigated the PBH production in the mixed inflaton–curvaton scenario with a quadratic curvaton potential and a strongly blue-tilted curvaton spectrum, assuming the curvaton is in the de Sitter equilibrium ⟨χ⟩ = 0 during inflation. We require that the inflaton sourced Gaussian component dominates the spectrum of the curvature perturbation ζ on CMB scales and the curvaton sourced contribution is suppressed below the 2× 10^-11 level at the pivot scale k_*=0.05 Mpc^-1. This constrains the curvaton mass from below, for example for the inflationary Hubble scale H = 10^13 GeV, the curvaton mass must satisfy m^2/H^2 ≳ 0.3. On small scales, however, the curvaton sourced component of ζ can take large values leading to PBH formation.
A key feature in this setup is that the curvaton sourced part of ζ is fully non-Gaussian. It contains no leading Gaussian term, unlike in scenarios with a non-vanishing curvaton mean value |⟨χ⟩|≫ H (a specific initial condition during inflation) which have been studied previously in the PBH context e.g. in <cit.>. We use the δ N formalism to obtain the full non-linear solution for ζ as a function of χ and, following <cit.>, include only the monopole term of the spherical harmonics series of χ( x) when considering fluctuations relevant for PBHs. With this approximation, we compute the probability distribution for C_l(r), which determines the compaction function C(C_l(r)), where r is the coarse-graining scale of perturbations. Comparing to a fiducial Gaussian distribution with the same variance ⟨ C_l^2⟩, we find that the full distribution P_C_l(C_l,r) decreases exponentially more slowly as a function of C_l. We use C_ c =0.55 as the threshold for the PBH collapse <cit.> and model the PBH mass with parameters obtained for the power-law spectrum in <cit.>.
We find that the curvaton scenario with ⟨χ⟩ = 0 can lead to very efficient production of PBHs. In particular, the setup can generate asteroid mass PBHs, 10^-16M_⊙≲ M≲ 10^-10M_⊙, with an abundance equal to the observed dark matter abundance, f_ PBH = 1, when the Hubble scale at the end of inflation H≳ 10^12 GeV, the curvaton mass m^2/H^2≳ 0.3 and the curvaton decay occurs in the window from slightly before the electroweak transition to around the QCD transition. If the curvaton decays before or after the aforementioned window, PBHs with masses below or above the asteroid mass range can be generated, respectively. The PBH abundance depends sensitively on H, m^2/H^2 and the curvaton decay rate Γ, and the observational bounds on f_ PBH imply non-trivial constraints on these parameters when Γ≲ 10^-8 GeV. For larger values of Γ the PBH production in the setup is exponentially suppressed.
The main uncertainties in our results arise from the monopole truncation Eq. (<ref>) and the phenomenological choice of collapse parameters in Eq. (<ref>). Changing the collapse parameter values would affect f_ PBH predicted for a fixed set of curvaton parameters. However, even order of magnitude changes of f_ PBH are compensated by just slight changes of H, m^2/H^2 and Γ, as can be seen in Figs. <ref> and <ref>. The error caused by the use of Eq. (<ref>) is harder to quantify but we expect that the very efficient PBH production from the strongly non-Gaussian ζ is a robust conclusion.
AG acknowledges support from the Science and Technology Facilities Council [grant numbers ST/S000550/1, ST/W001225/1]. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. Supporting research data are available on reasonable request from the corresponding author.
§ THE SPECTRUM OF ζ_χ
We use the stochastic formalism <cit.> and the spectral expansion method <cit.> to compute the infrared spectrum of ζ_χ in our setup where ⟨χ⟩ = 0 and Eq. (<ref>) cannot be expanded in small perturbations around a mean field. For the quadratic curvaton potential (<ref>), the joint equal time two-point distribution of χ(t, r) in de Sitter equilibrium is given by the spectral sum <cit.>ρ_2(χ, r,t;χ', r',t) =m^2/H^4ψ_0(x)ψ_0(x')∑_n=0^∞ψ_n(x)ψ_n(x')(a(t)H| r- r^'|)^-2Λ_n/H ,
where
x=mχ/H^2 , Λ_n =nm^2/3H , ψ_n(x)=1/√(n!2^n)(4π/3)^1/4e^-2π^2x^2/3H_n(2π x/√(3)) ,
and H_n(x) = (-1)^n e^(x^2/2) (d^n/dx^n) e^(-x^2/2) are the Hermite polynomials. The eigenfunctions ψ_n(x) are orthonormal,
∫_-∞^∞ dx ψ_m(x)ψ_n(x) = δ_mn .
Using that ⟨χ^2⟩ = 3H^4/(8π^2m^2), we have from Eq. (<ref>) ζ_χ = (1/3) ln(8π^2x^2/3), and the connected part of its two-point function can be written as
⟨ζ_χ( r)ζ_χ( r')⟩_ c = ∫_-∞^∞ dχ ∫_-∞^∞ dχ' ζ_χ(χ)ζ_χ(χ') ρ_2(χ, r,t;χ', r',t)
= ∑_n=1^∞ ( ∫_-∞^∞ dx (1/3)ψ_0(x)ψ_n(x) ln x^2 )^2 (a(t)H| r- r'|)^(-2 n m^2/(3H^2)) .
Truncating the series at the leading order we get
⟨ζ_χ( r)ζ_χ( r')⟩_ c = 2/9(aH | r- r'|)^-4 m^2/3H^2+𝒪[(aH | r- r'|)^-8 m^2/3H^2] .
The corresponding power spectrum is given by
P_ζ_χ = (k^3/(2π^2)) ∫ d^3r e^(-i k· r) ⟨ζ_χ( r)ζ_χ( r')⟩_ c
= (2^(3-4m^2/(3H^2))/(9√(π))) (Γ(3/2-2m^2/(3H^2))/Γ(2m^2/(3H^2))) (k/aH)^(4m^2/(3H^2)) .
The existence of the Fourier transform requires 4m^2 < 9 H^2 which we assume here.
Assuming an instant transition from de Sitter inflation to radiation dominated epoch, and approximating the universe as radiation dominated until the curvaton decay[This is strictly valid for Ω_χ≪ 1 but suffices here because we only consider cases where the curvaton decays before it fully dominates the universe, and therefore the period when Ω_χ≪1 may not hold is short.], we can write
k/aH = 6× 10^-24 (k/(0.05 Mpc^-1)) (10^15 GeV/ρ^(1/4)) ,
where ρ = 3 H^2 M_ P^2 denotes the energy density during inflation. Using Eqs. (<ref>) and (<ref>) we can directly solve for the m^2/H^2 range where the condition Eq. (<ref>) is satisfied, i.e. P_ζ_χ(k_*) < 2 × 10^-11. The result is shown in Fig. <ref> where the shaded orange area marks the region where Eq. (<ref>) holds.
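The resulting lower bound on m^2/H^2 is easy to evaluate numerically. The following sketch (Python standard library only; we assume the reduced Planck mass M_P = 2.435×10^18 GeV in ρ = 3H^2M_P^2, and the coarse scan step is an illustrative choice) reproduces values close to those quoted in the main text:

from math import gamma, sqrt, pi

M_P = 2.435e18                                   # reduced Planck mass in GeV (assumed convention)

def P_zeta_chi(k_over_aH, m2):                   # Eq. (<ref>); valid for 4 m^2 < 9 H^2
    a = 2.0 * m2 / 3.0
    return 2.0**(3.0 - 2.0 * a) / (9.0 * sqrt(pi)) * gamma(1.5 - a) / gamma(a) * k_over_aH**(2.0 * a)

def min_m2_over_H2(H, k_star=0.05, target=2e-11):
    # k_star in Mpc^-1; Eq. (<ref>) gives k/aH at the end of inflation
    k_over_aH = 6e-24 * (k_star / 0.05) * (1e15 / (3.0 * H**2 * M_P**2)**0.25)
    m2 = 0.05
    while m2 < 2.0 and P_zeta_chi(k_over_aH, m2) > target:
        m2 += 0.001
    return m2

# min_m2_over_H2(5.2e13) gives roughly 0.3, in line with the bound quoted above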
We will also need the two point functions of the powers ζ^ℓ_χ in the analysis below. These can be directly computed using the spectral expansion expression
⟨ζ_χ^ℓ( r)ζ_χ^ℓ'( r')⟩_ c = ∫_-∞^∞ dχ ∫_-∞^∞ dχ' ζ^ℓ(χ)ζ^ℓ'(χ') ρ_2(χ, r,t;χ', r',t)
= ∑_n=1^∞ ∑_n'=1^∞ ( ∫_-∞^∞ dx (1/3)ψ_0(x)ψ_n(x) ln x^2 )^ℓ ( ∫_-∞^∞ dx' (1/3)ψ_0(x')ψ_n'(x') ln x'^2 )^ℓ' (a(t)H| r- r'|)^(-2 (n+n')m^2/(3H^2)) .
Truncating the series again at the leading order we obtain
⟨ζ_χ^ℓ( r)ζ_χ^ℓ'( r')⟩_ c =(√(2)/3)^ℓ+ℓ'(aH | r- r'|)^-4 m^2/3H^2+𝒪[(aH | r- r'|)^-8 m^2/3H^2] .
§ THE SPECTRUM OF ζ ON CMB SCALES
We express the full solution Eq. (<ref>) for ζ in the form
ζ = ζ_ r + Z(ζ_χ,ζ_ r) ,
where the explicit expression for Z(ζ_χ,ζ_ r) can be read off from Eq. (<ref>).
The connected part of the two point function of ζ is given by
⟨ζ( r)ζ( r')⟩_ c=⟨ζ_ r( r)ζ_ r( r')⟩_ c+2⟨ Z( r)ζ_ r( r')⟩_ c+⟨ Z( r)Z( r')⟩_ c .
Expanding Z(ζ_χ-ζ_ r) around ζ_ r = 0, the last two terms can be written as
⟨ Z( r)ζ_ r( r')⟩_ c = ∑_n=0^∞ (1/(2n+1)!) ⟨ Z^(2n+1)(ζ_χ)⟩ ⟨ζ_ r( r)ζ_ r^(2n+1)( r')⟩_ c ,
⟨ Z( r)Z( r')⟩_ c = ∑_n=1^∞ ∑_n'=1^∞ (1/(n!n'!)) ⟨ Z^(n)(ζ_χ)⟩⟨ Z^(n')(ζ_χ)⟩ ⟨ζ_ r^n( r)ζ_ r^n'( r')⟩_ c
+ terms involving ⟨ζ_χ^n( r)ζ_χ^n'( r')⟩_ c .
We assume the curvature perturbation of the radiation component is sourced by the inflaton, and is Gaussian distributed with a nearly scale invariant spectrum
P_ζ_ r(k) = A_ r(k/k_*)^n_ r-1 , A_ r≫ P_ζ_χ(k_*) ,
and require that the curvaton component is suppressed on CMB scales such that P_ζ_χ(k_*) < 2× 10^-11 according to Eq. (<ref>). Using Eq. (<ref>), we then observe that on scales k ≲ k_* we can to leading precision drop all terms involving connected correlators of ζ_χ^n in Eq. (<ref>). On scales k ≲ k_*, the leading contribution to the two point function Eq. (<ref>) then reads
⟨ζ( r)ζ( r')⟩_ c≈⟨ζ_ r( r)ζ_ r( r')⟩_ c(1+⟨ Z^(1)(ζ_χ)⟩)^2 .
The non-linear function ⟨ Z^(1)(ζ_χ)⟩ involves contributions from closed ζ_χ loops attached to the points r or r'. Since ⟨ Z^(1)(ζ_χ)⟩ has no dependence on spatial coordinates, the spectrum of the curvature perturbation for k ≲ k_* is simply given by P_ζ(k) = A (k/k_*)^(n_ r-1) , A=A_ r(1+⟨ Z^(1)(ζ_χ)⟩)^2 , k≲ k_* .
Figure <ref> shows the ratio A/A_ r as a function of Ω_χ. For comparison, we have also plotted the corresponding linear perturbation theory result A/A_ r = (4(1-Ω_χ)/(4-Ω_χ))^2 obtained from ζ = 4ζ_ r(1-Ω_χ)/(4-Ω_χ)+3ζ_χΩ_χ/(4-Ω_χ), see e.g. <cit.>, when P_ζ_χ(k) ≪ P_ζ_ r(k). As seen in Fig. <ref>, for Ω_χ≲ 0.9 we have 0.2 ≲ A/A_ r⩽ 1, indicating that the observed CMB spectrum amplitude P_ζ(k_*)= A = 2.10 × 10^-9 is obtained by adjusting the inflaton sector to give A_ r in the range 5 ≳ A_ r/A ⩾ 1, correspondingly. From Eqs. (<ref>) and (<ref>)–(<ref>) we further see that when Eq. (<ref>) holds, the contributions from ζ_χ to the spectral index n_ s-1 = d ln P_ζ/d ln k are parametrically suppressed to O(0.01). The measured spectral index n_ s= 0.965 at k = k_* should then be obtainable with slow roll inflationary models having ϵ = O(0.01) which can be adjusted to give the correct n_ s. We thus conclude that when Eq. (<ref>) holds, we obtain inflaton dominated, Gaussian and nearly scale-invariant perturbations on CMB scales.
|
http://arxiv.org/abs/2307.01945v1
|
20230704222817
|
Query-based Video Summarization with Pseudo Label Supervision
|
[
"Jia-Hong Huang",
"Luka Murn",
"Marta Mrak",
"Marcel Worring"
] |
cs.CV
|
[
"cs.CV",
"cs.AI",
"cs.IR"
] |
Query-based Video Summarization with Pseudo Label Supervision
=============================================================

Jia-Hong Huang, Luka Murn, Marta Mrak, Marcel Worring
Existing datasets for manually labelled query-based video summarization are costly and thus small, limiting the performance of supervised deep video summarization models. Self-supervision can address the data sparsity challenge by using a pretext task and defining a method to
acquire extra data with pseudo labels to pre-train a supervised deep model. In this work, we introduce segment-level pseudo labels from input videos to properly model both the relationship between a pretext task and a target task, and the implicit relationship between the pseudo label and the human-defined label. The pseudo labels are generated based on existing human-defined frame-level labels. To create more accurate query-dependent video summaries, a semantics booster is proposed to generate context-aware query representations. Furthermore, we propose mutual attention to help capture the interactive information between visual and textual modalities. Three commonly-used video summarization benchmarks are used to thoroughly validate the proposed approach. Experimental results show that the proposed video summarization algorithm achieves state-of-the-art performance.
Query-based video summarization, semantics, self-supervision, weak supervision, pseudo labels
§ INTRODUCTION
Query-based video summarization automatically generates a short video clip to summarize the content of a given video by capturing its query-dependent parts, as shown in Fig. <ref>. Such a task can be modeled as a fully-supervised machine learning problem <cit.>. However, creating a large-scale manually-labeled video dataset for a fully-supervised task is costly. Hence, existing datasets, e.g., TVSum <cit.>, SumMe <cit.>, and QueryVS <cit.>, are quite small.
^⋆Work done during an internship at BBC Research and Development, London, UK.
The lack of larger human-annotated datasets is common in fully-supervised deep learning tasks. Self-supervised learning is one of the most successful ways to alleviate this challenge <cit.>. According to <cit.>, self-supervision is an effective method to balance the cost of data labelling and the performance gain of a fully-supervised deep model. The main idea of self-supervised learning is defining a pretext task and introducing a way to acquire extra data with reliable pseudo labels to pre-train a fully-supervised deep model for performing a target task <cit.>.
Existing self-supervision methods assume that the relation between a target task with human-defined labels and an introduced pretext task with pseudo labels does not exist or exists in a very limited way <cit.>. However, this assumption may not be accurate for query-based video summarization, where frame-level human-defined labels can be considered as supervision signals of a target task. Segment-level pseudo labels can be considered as supervision signals of a pretext task. Since a video segment is composed of frames, there is an implicit relation between the entire segment and the corresponding frames. The improvement in model performance can hit a bottleneck without modelling these implicit relations.
In this work, a segment-based video summarization pretext task with specially designed pseudo labels is introduced to address this challenge, detailed in Fig. <ref>. Pseudo labels are generated based on existing human-defined annotations, helping to model the implicit relations between the pretext task and the target task, i.e., frame-based video summarization <cit.>. In query-based video summarization, we observe that generating accurate query-dependent video summaries can be challenging in practice due to ineffective semantics embedding of textual queries. We address this issue by proposing a semantics booster that generates context-aware query representations which are capable of efficiently capturing the semantics. Furthermore, we noticed that the query input does not always help model performance, most likely due to the interactions between textual and visual modalities not being properly modelled. We address this challenge by introducing mutual attention that helps capture the interactive information between different modalities.
These novel design choices enable us to improve the model performance of query-based video summarization with self-supervision. Extensive experiments show that the proposed method is effective and achieves state-of-the-art performance. If we examine the problem from the perspective of frame-level label vs. segment-level label, the proposed method can also be considered as a weakly-supervised video summarization approach. Hence, existing weakly-supervised methods are also considered as baselines in this work.
§ RELATED WORK
§.§ Fully-supervised video summarization
Fully-supervised learning is a common way to model video summarization <cit.>. In fully-supervised video summarization, labels defined by human experts are used to supervise a model in the training phase.
In <cit.>, a video summarization approach is proposed to automatically summarize user videos that contain a set of interesting events. The authors start by dividing a video based on a superframe segmentation, tailored to raw videos. Then, various levels of features are used to predict the score of visual interestingness per superframe. Finally, a video summary is produced by selecting a set of superframes in an optimized way. In <cit.>, a Recurrent Neural Network (RNN) is used in a hierarchical way to model the temporal structure in video data. The authors of <cit.> consider video summarization as a problem of structured prediction. A deep-learning-based method is proposed to estimate the importance of video frames based on modelling their temporal dependency. The authors of <cit.> propose an importance propagation-based collaborative teaching network (iPTNet) for video summarization by transferring samples from a video moment localization correlated task equipped with a lot of training data. In <cit.>, the model learning process expands beyond solely utilizing visual inputs and incorporates an additional modality, such as viewers' comments, video captions, or any other contextual data available.
The aforementioned fully-supervised methods exploit a full set of human expert annotations to supervise the model in the training phase. Although such a method performs well, it is costly. Therefore, a better solution should be developed for video summarization.
§.§ Weakly-supervised video summarization
In <cit.>, video summarization is considered as a weakly-supervised learning task. Weakly-supervised learning can mitigate the need for extensive datasets with human expert annotations. Instead of using a full set of data with human expert labels, such as frame-level annotations, weakly-supervised approaches exploit less-expensive weak labels, such as video-level annotations from human experts. Although weak labels are imperfect compared to a full set of human expert annotations, they still can be used to train video summarization models effectively.
§.§ Self-supervision in video summarization
In <cit.>, image pretext tasks <cit.> are extended to video for self-supervision in video summarization. In <cit.>, the keyframes of a video are defined as those which are very different in their optical flow features and appearance from the rest of the frames of the video. The authors of <cit.> claim that a good video sequence encoder should have the ability to model the correct order of video segments. Segments are selected from a given video based on a fixed proportion before feeding it into a neural network. They are randomly shuffled and used to train the neural network and distinguish the odd-position segments to control the difficulty of the auxiliary self-supervision task.
Existing work related to self-supervision in video summarization is very limited, and they do not focus on query-based video summarization. To the best of our knowledge, our proposed method is one of the pioneer works of self-supervision in query-based video summarization.
§.§ Word embedding methods
According to <cit.>, static word embeddings and contextualized word representations are commonly used to encode textual data. Both of them are more effective than the Bag of Words (BoW) method. Skip-gram with negative sampling (SGNS) <cit.> and GloVe <cit.> are well-known models for generating static word embeddings. According to <cit.>, these models learn word embeddings iteratively in practice. However, it has been proven that both of them implicitly factorize a word-context matrix containing a co-occurrence statistic.
The authors of <cit.> mention that in static word embeddings methods, all meanings of a polysemous word must share a single vector because a single representation for each word is created. Hence, the contextualized word representations method is more effective than static word embeddings because of its context-sensitive word representations. In <cit.>, the proposed neural language models are fine-tuned to create deep learning-based models for a wide range of downstream natural language processing tasks.
In this work, a contextualized word representation-based method is used to encode the text-based input query.
§ METHODOLOGY
In this section, the proposed query-based video summarization
method is described in detail, and illustrated in Fig. <ref>. The approach is based on contextualized query representations, attentive convolutional 2D and 3D features, interactive attention mechanism, mean-based pseudo shot label generation, and video summary generation.
§.§ Semantics Booster
Generating an accurate query-dependent video summary is challenging because of the ineffective semantics embedding of input textual queries. In this work, a semantics booster is introduced to capture the semantics of the input query effectively. The transformer-based model architecture has been firmly established as one of the state-of-the-art approaches in language modeling and machine translation <cit.>. Hence, the proposed semantics booster is built on top of the transformer architecture to generate context-aware query representations, described as follows.
For an input token k_n, its embedding x_n is defined as: x_n = W_e*k_n+P_k_n, n ∈{1,...,N}, where W_e ∈ℝ^E_s × V_s is the input text-based query token embedding matrix with the vocabulary size V_s and the word embedding size E_s, the positional encoding of k_n is P_k_n, and N denotes the number of input tokens. The subscripts s and e denote size and embedding, respectively. The representation of the current word Q is generated by one linear layer defined as: Q = W_q*x_n+b_q, where b_q and W_q ∈ℝ^H_s × E_s are learnable parameters of the linear layer, the output size of the linear layer is H_s and the subscript q denotes query. The key vector K is calculated by the other linear layer defined as: K = W_k*x_n+b_k, where b_k and W_k ∈ℝ^H_s × E_s are learnable parameters of the linear layer. The subscript k denotes key. The value vector V is generated by another linear layer defined as: V = W_v*x_n+b_v, where b_v and W_v ∈ℝ^H_s × E_s are learnable parameters of the linear layer. The subscript v denotes value.
After Q, K, and V are calculated, the masked self-attention is generated as: (Q,K,V) = (m(QK^T/√(d_k)))V, where m(·) and d_k denote a masked self-attention function and a scaling factor, respectively. The layer normalization is calculated as: Z_ = ((Q,K,V)), where (·) denotes a layer normalization function. Then, the introduced context-aware representation ℛ_ of the input text-based query is derived as: ℛ_ = σ(W_1Z_+b_1)W_2+b_2, where σ is an activation function, W_1, W_2, b_1, and b_2 are learnable parameters of a position-wise feed-forward network. To have even better textual representations, a textual attention function (·) is introduced to reinforce the context-aware representation. The function takes ℛ_ as input and calculates the attention and textual representation in an element-wise way. The attentive context-aware representation is calculated as Z_ta = (ℛ_), where ta indicates textual attention.
§.§ Visual Attention
A 2D ConvNet and a 3D ConvNet are exploited to distill the video frame and video segment information, respectively. To reinforce the generated 2D and 3D features, a visual attention function (·) is introduced to improve the quality of features.
Let E and X be a feature generator and a set of video clips, respectively. A feature generator E maps an input x ∈ X to a feature vector f ∈ℝ^d. F={f=E(x) ∈ℝ^d | x ∈ X} denotes a set of features produced by the feature generator E. Let F_s be the generated features from the video spatial feature generator E_s. F_st denotes the generated features from the video spatio-temporal feature generator E_st. Frame-level and segment-level data both are exploited to train the proposed query-based video summarization model, meaning F = F_s∪ F_st. In the frame-level case, the attentive feature generator (·) learns attention weights and produces attentive spatial features Z_as={f_as=(f) ∈ℝ^d | f ∈ F_s}, i.e., attentive convolutional 2D features. In the segment-level case, the attentive feature generator learns attention weights and produces attentive spatio-temporal features Z_ast={f_ast=(f) ∈ℝ^d | f ∈ F_st}, i.e., attentive convolutional 3D features.
§.§ Mutual Attention
We observe that textual queries do not always help the model performance due to the interactions between the video and query inputs not being modelled effectively. In this work, a mutual attention mechanism (·) is introduced to address this issue and model the interactive information between the video and query. The mutual attention Z_ma performs one by one convolution, i.e., convolutional attention. Z_ma = (Z_ta⊙ Z_as⊙ Z_ast), where Z_ta indicates textual attention and ⊙ denotes Hadamard product.
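A corresponding sketch of the mutual attention step (again our own illustration; we assume the three feature streams have already been projected to a common shape (batch, channels, length) before the Hadamard products):

import torch.nn as nn

class MutualAttention(nn.Module):
    # One-by-one convolutional attention over the element-wise product of the
    # textual (Z_ta), frame-level (Z_as) and segment-level (Z_ast) features.
    def __init__(self, channels=512):
        super().__init__()
        self.conv1x1 = nn.Conv1d(channels, channels, kernel_size=1)

    def forward(self, z_ta, z_as, z_ast):
        return self.conv1x1(z_ta * z_as * z_ast)    # Z_ma = MA(Z_ta . Z_as . Z_ast)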
§.§ Pseudo Segment-level Label Generation
Let S_f be a set of human experts' frame-level score annotations and P a pseudo score annotation generator that maps frame-level human expert scores to a segment-level pseudo score.
In <cit.>, the authors empirically find that a two-second segment is suitable for capturing local context of a video as it achieves good visual coherence. Based on this observation, in this work the proposed pseudo label generator P is designed to generate a segment-level score every two seconds. In practice, since the generated pseudo score annotations are not validated by human experts, they might contain noisy or biased information. Based on <cit.>, the function is one of the effective ways to reduce the noise contained in the segment-level pseudo label. Hence, function is used to design the proposed pseudo label generator P to produce the mean score S_=P(S_f)=(S_f), i.e., the two-second segment-level pseudo score label. In the training phase, compared with the frame-level label, the mean-based pseudo segment label S_ is used not only for spatial supervision but also for temporal supervision. The temporal supervision with the segment-level pseudo annotations improves the query-based video summarization model performance.
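As a concrete illustration, the mean-based generator can be written in a few lines (our sketch; the frame sampling rate is a placeholder):

import numpy as np

def segment_pseudo_labels(frame_scores, fps=1, seconds=2):
    # Average frame-level scores over consecutive windows of `seconds` seconds
    # to obtain segment-level pseudo score labels.
    frame_scores = np.asarray(frame_scores, dtype=float)
    win = max(1, int(fps * seconds))
    return np.array([frame_scores[i:i + win].mean()
                     for i in range(0, len(frame_scores), win)])

# example: a video sampled at 1 fps, 2-second segments
# segment_pseudo_labels([1, 3, 2, 4, 5], fps=1)  ->  [2.0, 3.0, 5.0]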
§.§ Loss Function
According to <cit.>, query-based video summarization can be modeled as a classification problem. Thus, in this work, the categorical cross-entropy loss function is adopted to build the proposed approach:
ℒ = -1/N∑_i=1^N∑_c=1^C 1_y_i∈ C_c log(P[y_i∈ C_c]) ,
where N indicates the number of observations, C denotes the number of categories, 1_y_i ∈ C_c is an indicator function of the i-th observation belonging to the c-th category, and P[y_i ∈ C_c] is the probability predicted by the model for the i-th observation to belong to the c-th category.
§ EXPERIMENTS AND ANALYSIS
§.§ Datasets and evaluation metrics
Datasets. TVSum <cit.> is a commonly used dataset for traditional video summarization, containing only the video input. However, authors of <cit.> consider TVSum metadata, e.g., video title, as a text-based query input to generate the query-dependent video summary. In our experiments, the TVSum dataset is randomly divided into 40/5/5 videos for training/validation/testing, respectively. The video length is ranging from 2 to 10 minutes. The human expert score labels range from 1 to 5, and are annotated with 20 frame-level responses per video <cit.>.
The SumMe <cit.> dataset is randomly divided into 19 videos for training, 3 videos for validation, and 3 videos for testing. The video duration in SumMe is ranging from 1 to 6 minutes. In SumMe, the human expert annotation score ranges from 0 to 1. SumMe is not used for query-based video summarization and we do not have a query input when a model is evaluated on this dataset.
QueryVS <cit.> is an existing dataset designed for query-based video summarization. In our experiments, the QueryVS dataset is separated into 114/38/38 videos for training/validation/testing, respectively. The video length in QueryVS is ranging from 2 to 3 minutes, and every video is retrieved based on a given text-based query.
To validate the proposed query-based video summarization method, three segment-level datasets are created based on the above frame-level datasets. Both the segment-level dataset, i.e., for pre-training, and the frame-level dataset, i.e., the target dataset, are used to conduct our experiments.
Evaluation metric.
Based on <cit.>, the F_β-score with the hyper-parameter β=1 is a commonly used metric for assessing the performance of supervised video summarization approaches. It is based on measuring the agreement between the predicted score and ground truth score provided by the human expert. The F_β-score is defined as: F_β = (1/N)∑_i=1^N ((1+β^2)× p_i× r_i)/((β^2× p_i)+r_i), where r_i indicates i-th recall, p_i indicates i-th precision, N indicates number of (r_i, p_i) pairs, “×” denotes scalar product, and β is used to balance the relative importance between recall and precision.
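For reference, the metric can be computed with a few lines of Python (a small helper we add for illustration; precisions and recalls are the per-pair values p_i and r_i):

def f_beta_score(precisions, recalls, beta=1.0):
    # Average F_beta over (precision, recall) pairs, as defined above.
    total = 0.0
    for p, r in zip(precisions, recalls):
        denom = beta**2 * p + r
        total += (1.0 + beta**2) * p * r / denom if denom > 0 else 0.0
    return total / len(precisions)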
§.§ Experimental settings
In the experiments, a 2D ResNet-34 network pre-trained on the ImageNet database <cit.> is adopted to generate frame-level features for each input video. The 512 features are extracted from the visual layer one layer below the classification layer. A 3D ResNet-34 pre-trained on the Kinetics benchmark <cit.> is used in the experiments to generate segment-level features for each input video. The features with 512 dimensions are located in the visual layer which is right after the global average pooling layer.
The video lengths in the SumMe, TVSum and QueryVS datasets vary, with the maximum number of frames in a video being 388 for SumMe, 199 for QueryVS, and 647 for TVSum. A frame-repeating preprocessing technique <cit.> is followed to make all the videos in each dataset the same length.
The input size of the CNN is 224 by 224 with RGB channels. Every channel is normalized by standard deviation =(0.2737, 0.2631, 0.2601) and mean =(0.4280, 0.4106, 0.3589). PyTorch is used for the implementation and to train models for 100 epochs with 1e-7 learning rate. The Adam optimizer is used, with hyper-parameters set as ϵ=1e-8, β_1=0.9, and β_2=0.999.
§.§ Ablation Study
The ablation study of the proposed method is presented in Table <ref>. The baseline model, without the mutual attention mechanism, pseudo segment-level label pre-training, or the semantics booster, performs significantly worse than approaches utilising any or all of the proposed improvements. Note that when the semantics booster is not adopted, the BoW embedding method is used.
The mutual attention mechanism helps capture the interaction between the input query and video more effectively. The pseudo segment-level label pre-training helps the proposed model have better initialization. The semantics booster captures the semantic meaning of the text-based query.
§.§ Comparison with state-of-the-art models
The comparison with existing fully-supervised, weakly-supervised and query-based approaches is presented in Table <ref>. The results show the performance of our proposed method is the best on TVSum and QueryVS datasets, with a competitive performance on the SumMe dataset.
The correctness of the generated segment-level pseudo labels is not guaranteed by human experts, but they still carry useful information, e.g., better temporal information, to supervise the proposed model during pre-training. In weakly-supervised methods, although the correctness of the coarse labels, e.g., video-level labels, is guaranteed by human experts, they are still not informative enough to push model performance beyond our proposed method. In query-based summarization methods, although the other modality is used to help the model performance, the effectiveness of the multi-modal feature fusion could limit the performance improvement.
Randomly selected qualitative results are shown in Fig. <ref>.
§ CONCLUSION
In this work, a new query-based video summarization approach is proposed. The method is based on the self-supervision of segment-level pseudo scores, semantics booster, and a mutual attention mechanism. Additionally, three segment-level video summarization datasets for self-supervision are proposed based on existing small-scale query-based video summarization datasets. Experimental results show the mean-based segment-level pseudo labels provide effective temporal supervision. The proposed approach achieves state-of-the-art performance in terms of the F_1-score.
Nowadays, video content is growing at an ever-increasing speed and beyond the capacity of an individual for full comprehension. In such cases, the proposed query-based video summarization method has the potential to improve the efficiency of video exploration.
§ ACKNOWLEDGMENTS
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 765140.
IEEEbib
|
http://arxiv.org/abs/2307.03239v1
|
20230706180620
|
Hyperbolic polynomials and starved polytopes
|
[
"Arne Lien"
] |
math.AG
|
[
"math.AG",
"math.CO",
"math.RT",
"14P10, 20C30, 05E14"
] |
Hyperbolic polynomials and starved polytopes
============================================

Arne Lien
We study sets of univariate hyperbolic polynomials that share the same first few coefficients and show that they have a natural combinatorial description akin to that of polytopes. We define a stratification of such sets in terms of root arrangements of hyperbolic polynomials and show that any stratum is either empty, a point or of maximal dimension and in the latter case we characterise its relative interior. This is used to show that the poset of strata is a graded, atomic and coatomic lattice and to provide an algorithm for computing which root arrangements are realised in such sets of hyperbolic polynomials.
The topic of this article is the study of sets of univariate hyperbolic polynomials that share the same first few coefficients. The motivation for studying such sets stems from the article <cit.>, where they were used to provide a new proof of Timofte's degree and half degree principle for symmetric polynomials. Therefore the sets are deeply connected with the study of multivariate symmetric polynomials.
A natural way of describing the root arrangement of a hyperbolic polynomial is to construct its partition of multiplicities. However, by also considering in which order the roots arise we get a finer description of the polynomials root arrangement which we call its composition. We shall see that by using compositions to stratify the set of hyperbolic polynomials, we get a lattice of strata that is graded, atomic and coatomic.
To establish these combinatorial properties of the set of strata we show that the strata are connected to a type of symmetric, real algebraic set called Vandermonde varieties. This connection allows us to show that the relative interior of a stratum consists of the polynomials with the largest number of distinct roots and that a stratum is either empty, a point or of the generic dimension of a nonempty Vandermonde variety.
We begin in Section <ref> by introducing the sets that we are studying and defining our stratification. We look into the following example of a set of degree 5 hyperbolic polynomials with the same first three coefficients:
[Figure: the set of such degree 5 polynomials, plotted in the space of their last three coefficients.]
Then we show in Proposition <ref> that the collection of strata, partially ordered by inclusion, is a lattice.
In Section <ref> we show that a result from <cit.> on Vandermonde varieties can be used to describe the relative interior of the strata (Theorem <ref>) and to show that the strata are either empty, a point or of maximal dimension (Theorem <ref>).
These results are then used in Section <ref> to establish Theorem <ref>, which says that the poset of strata is a graded, atomic and coatomic lattice. Finally this leads us to Algorithm <ref> that determines which compositions occur in our sets.
We finish with Section <ref> where we discuss how a result from <cit.> on the discriminant and subdiscriminants implies that the boundary of the sets of hyperbolic polynomials have a concave-like property. In combination with Theorem <ref>, this leads to the description "starved polytopes".
Acknowledgements. I am very grateful to Claus Scheiderer for all the helpful discussions, critiques and advice along the way. I am also very grateful to Cordian Riener for bringing to my attention to some of the literature on the topic and for greatly simplifying the argument behind Theorem <ref>. Lastly I would also like to thank Tobias Metzlaff whose suggestion led to the pretty picture above.
§ STRATIFICATION
We start off by defining the sets of univariate hyperbolic polynomials and showing how we stratify these sets. Then we look closer at the example from the introduction. We finish this section by showing that the collection of strata, partially ordered by inclusion, is a lattice.
A univariate polynomial f∈ℝ[t] is called hyperbolic if all its roots are real.
Given a hyperbolic polynomial f of degree d, we will denote by H_s(f) the set of all other hyperbolic polynomials of degree d with the same s+1 first coefficients as f. That is, if f=f_0t^d+f_1t^d-1+...+f_d is hyperbolic, then
H_s(f):= {h∈ℝ[t]|h is hyperbolic and h_i =f_i ∀ i ≤ s}.
If a=(a_1,...,a_d) are the roots of f then it is well known that f_i = (-1)^ie_i(a), where e_i is the i^th elementary symmetric polynomial in d variables. Thus the subscript s refers to the number of fixed elementary symmetric polynomials.
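The coefficient-root correspondence above is easy to verify numerically. The following sketch (our own illustration, with arbitrarily chosen roots, not part of the paper) checks f_i = (-1)^i e_i(a) for a small example.

# Illustrative sketch: check f_i = (-1)^i e_i(a) for a monic hyperbolic
# polynomial built from a few chosen real roots.
from itertools import combinations
from math import prod
import numpy as np

roots = [-2.0, -1.0, 0.0, 3.0]        # any real roots give a hyperbolic polynomial
d = len(roots)

def elem_sym(vals, i):
    """i-th elementary symmetric polynomial evaluated at vals (e_0 = 1)."""
    return sum(prod(c) for c in combinations(vals, i))

coeffs = np.poly(roots)               # monic coefficients [1, f_1, ..., f_d]
for i in range(d + 1):
    assert abs(coeffs[i] - (-1) ** i * elem_sym(roots, i)) < 1e-9
print(coeffs)                         # the coefficients of (t+2)(t+1)t(t-3)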
We will let f be some monic, hyperbolic polynomial of degree d≥ 1 throughout this article. Also, we will refer to H_s(f) as a starved polytope although we have yet to establish this as a good description.
Often a polynomial h=t^d+h_1t^d-1+...+h_d∈ H_s(f) will be identified with the point (h_s+1,...,h_d)∈ℝ^d-s without specifying this change of basis. Thus when we consider topological questions, we will be equipping H_s(f) with the subspace topology of the Euclidean topology on ℝ^d-s.
To stratify the starved polytopes we introduce compositions and a corresponding partial order.
A composition of d is a tuple of positive integers,
u =(u_1,...,u_l), with
∑_i=1^lu_i=d.
The integers u_i are called the parts of u and ℓ(u):=l the length of u.
We often use the shorthand (1^d) for the composition whose parts are all equal to 1. Also, when nothing further is specified, we will let u denote a composition of d.
Let u and v be two compositions of a positive integer d. Then v < u if v can be obtained from u by replacing some of the commas in u with plus signs.
To any hyperbolic polynomial f of degree d, we associate a composition of d the following way: let a_1<a_2...<a_l, be the distinct roots of f with respective multiplicities m_1,...,m_l. Then the composition of f is the tuple v(f) := (m_1,...,m_l), which is a composition of d.
The strata we will look at are given by
H_s^u(f):={h∈ H_s(f)|v(h)≤ u}.
Note that H^(1^d)_s(f) = H_s(f) since any composition of d is smaller than (1^d).
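As an illustration of these definitions (our own sketch, not part of the paper), the following code computes the composition of a hyperbolic polynomial from a list of its roots and tests the partial order via the partial-sum criterion used later in the text; the grouping tolerance is our own choice.

from itertools import accumulate

def composition(roots, tol=1e-6):
    """Multiplicities of the distinct roots of a hyperbolic polynomial,
    listed for the roots in increasing order (crude numerical grouping)."""
    roots = sorted(roots)
    parts = [1]
    for prev, cur in zip(roots, roots[1:]):
        if cur - prev < tol:
            parts[-1] += 1
        else:
            parts.append(1)
    return tuple(parts)

def leq(v, u):
    """v <= u  iff the partial sums of v form a subset of those of u."""
    return set(accumulate(v)) <= set(accumulate(u))

print(composition([5, 1, 2, 1, 5]))                      # (2, 1, 2)
print(leq((2, 3), (2, 1, 2)), leq((3, 2), (2, 1, 2)))    # True True
print(leq((2, 1, 2), (1, 1, 1, 1, 1)))                   # True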
Let d=5 and s=2 and let
f=(t+π)(t+√(2))t(t-1.23456789123456789)(t-e),
then if we map the last three coefficients of the polynomials in H_2(f), we get the three dimensional picture from the introduction.
The polynomials with no repeated roots make up the interior and the other compositions lie on the pieces of the boundary as indicated by the picture. For instance, if u=(2,1,2), then (2,3) and (3,2) are the compositions in the picture which are smaller than u. Thus H_2^u(f) corresponds to the closest one dimensional piece of the boundary (including its endpoints).
The polynomial in Example <ref> was chosen to give a fair representation of H_2(f) for degree 5, and its roots were chosen to illustrate that the resulting set does not depend on the roots being particularly "nice". However, the roots were grouped relatively close together to get a picture that is suitable for an A4 page and not one that is too stretched or skewed.
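A crude way to explore H_2(f) numerically (our own sketch, not how the picture was produced) is to fix the first three coefficients of f and test whether a candidate choice of the remaining coefficients still yields only real roots. Near the boundary the tolerance-based root check below can misclassify points, so this is only a rough probe.

import numpy as np

roots_f = [-np.pi, -np.sqrt(2), 0.0, 1.23456789123456789, np.e]
f = np.poly(roots_f)                        # [1, f_1, ..., f_5]

def in_H2(h3, h4, h5, tol=1e-7):
    """Membership test for H_2(f): keep f_0, f_1, f_2 and check hyperbolicity."""
    h = np.array([f[0], f[1], f[2], h3, h4, h5])
    return bool(np.all(np.abs(np.roots(h).imag) < tol))

print(in_H2(f[3], f[4], f[5]))              # True: f itself lies in H_2(f)
print(in_H2(f[3], f[4], f[5] + 1e6))        # False: far outside the bounded set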
Next we show that the poset of strata is a lattice. If a and b are elements of a lattice, we denote their join by a∨ b and their meet by a∧ b.
The poset of compositions of d is a lattice.
We prove this by explicitly constructing the join of two compositions of d, u and v. Having done so, the existence and uniqueness of the meet is also established since the meet of u and v is just the join of all compositions smaller than both u and v.
If a and b are two compositions of d, note that a≤ b if and only if
{a_1,a_1+a_2,...,d}⊆{b_1,b_1+b_2,...,d}.
So let M={u_1,u_1+u_2,...,d}∪{v_1,v_1+v_2,...,d} and construct the tuple m=(m_1,m_2,...,m_l) containing all the distinct elements of M and where m_i<m_i+1 for any i∈ [l-1] (note that m_l=d).
Next we construct the composition w=(m_1,m_2-m_1,m_3-m_2,...,m_l-m_l-1). This is naturally a composition of d and since {u_1,u_1+u_2,...,d}⊆ M and {v_1,v_1+v_2,...,d}⊂ M, w is greater than both u and v. Also, since M is by construction the unique minimal set that contains both {u_1,u_1+u_2,...,d} and {v_1,v_1+v_2,...,d}, then w is the join of u and v.
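The construction in the proof translates directly into a few lines of code (an illustrative sketch of ours): encode a composition by its set of partial sums, take the union, and convert back.

from itertools import accumulate

def join(u, v):
    """Join of two compositions of the same integer d."""
    m = sorted(set(accumulate(u)) | set(accumulate(v)))
    return tuple(b - a for a, b in zip([0] + m, m))

print(join((2, 3), (3, 2)))    # (2, 1, 2)
print(join((2, 3), (1, 4)))    # (1, 1, 3)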
From Lemma <ref> we immediately get that the set of strata of H_s(f), partially ordered by inclusion, form a lattice. To see this let us determine the meet of two faces of H_s(f), H^u_s(f) and H_s^v(f).
The meet of two strata must be contained in their intersection since the partial order is given by inclusion. Also, by the definition of the strata we have
H^u_s(f)∩ H_s^v(f)=H_s^u∧ v(f),
so the intersection is a stratum of H_s(f). Thus we have that
H^u_s(f)∧ H_s^v(f)=H_s^u∧ v(f)
and we have shown the following:
The set of strata of H_s(f), partially ordered by inclusion, is a lattice.
However, it is worth pointing out that just because the meet of H^u_s(f) and H_s^v(f) is H_s^u∧ v(f), this does not a priori mean that u∧ v is the only composition, w, such that H^u_s(f)∧ H_s^v(f)=H_s^w(f). For instance if H_s^u(f) is empty, then H^u_s(f)∧ H_s^v(f)=H_s^u(f) even if u∧ v ≠ u. So different compositions may label the same stratum and this does indeed happen sometimes.
A similar problem can arise when we consider joins of strata. However in this case, we shall see in Section <ref> that this can be overcome by requiring u and v to be the minimal compositions that can label the strata H_s^u(f) and H_s^v(f).
§ VANDERMONDE VARIETIES
To describe the combinatorial structure of the lattice of strata we need to establish some geometric properties of our strata. In particular we will show that Arnold's, Givental's and Kostov's work on so called "Vandermonde varieties" (see <cit.>, <cit.> and <cit.>) implies that H_s^u(f) is either empty, a point or of dimension ℓ(u)-s and that in the latter case the polynomials with composition u make up the relative interior.
To see the connection to their work, let the symmetric group ([d]) act on ℝ[x_1,...,x_d] by permuting the variables. Then it is well known that the elementary symmetric polynomials generate the ring of invariants (see Chapter 7.1 in <cit.>). The induced action on ℝ^d permutes the coordinates of the points in ℝ^d and the orbit space ℝ^d/([d]) can be identified with the image of ℝ^d under the mapping Π:ℝ^d→ℝ^d, where
Π(x)=t^d-e_1(x)t^d-1+...+(-1)^de_d(x)
and e_i(x) is the i^th elementary symmetric polynomial.
Thus the orbit space can be identified with the monic, hyperbolic polynomials of degree d and by restricting Π to the set K_d:={x∈ℝ^d|x_1≤...≤ x_d}, we obtain a bijection between K_d and Π(ℝ^d). So we see that the starved polytopes can be thought of as sections of the orbit space obtained by intersecting it with certain affine hyperplanes.
Similarly, if u=(u_1,...,u_l) and h∈ H_s^u(f) has the roots a_1≤ ... ≤ a_l, then
h(t) = t^d-e_1(a_u)t^d-1+e_2(a_u)t^d-2...+(-1)^de_d(a_u),
where a_u:=(a_1,...,a_1,...,a_l,...,a_l) and a_i is repeated u_i times. So if we define the real algebraic set
V^u_s(f):={x∈ℝ^l|e_i(x_u)=(-1)^if_i ∀ i∈ [s]},
then we see that H_s^u(f) is the image of V^u_s(f)∩ K_l under the mapping
Π_u(x):= t^d-e_1(x_u)t^d-1+...+(-1)^de_d(x_u).
Another set of generators for the invariant ring ℝ[x_1,...,x_d]^([d]) is the power sums p_i(x):=∑_j∈ [d]x_j^i, where i∈ [d]. And since the first s power sums can be expressed as polynomials in the first s elementary symmetric polynomials using Newton's identities, we can equivalently define V_s^u(f) as {x∈ℝ^l|p_i(x_u)=c_i ∀ i∈ [s]}, where c_1,...,c_s∈ℝ are obtained from -f_1,...,(-1)^sf_s using Newton's identities. Such sets are referred to as Vandermonde varieties since the Jacobian of the first s power sums is a constant multiple of a Vandermonde determinant.
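For completeness, here is a small sketch of the Newton's identities step (our own code, with the sign convention spelled out in the comments): it converts the fixed elementary symmetric values e_1, ..., e_s into the power-sum values c_1, ..., c_s that define the Vandermonde variety.

def esym_to_power_sums(e, s):
    """Newton's identities: e = [1, e_1, ..., e_s] -> [p_1, ..., p_s],
    using p_k = (-1)^(k-1) k e_k + sum_{i=1}^{k-1} (-1)^(i-1) e_i p_{k-i}."""
    p = []
    for k in range(1, s + 1):
        pk = (-1) ** (k - 1) * k * e[k]
        pk += sum((-1) ** (i - 1) * e[i] * p[k - i - 1] for i in range(1, k))
        p.append(pk)
    return p

# Roots (1, 2, 3): e = [1, 6, 11, 6] and the power sums are [6, 14, 36].
print(esym_to_power_sums([1, 6, 11, 6], 3))    # [6, 14, 36]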
To make use of the previous work on Vandermonde varieties, we first need to establish that Π_u is a homeomorphism. Note that, as with H_s(f), we will be equipping the sets V_s^u(f), K_l and H_s^u(f) with the subspace topology of the Euclidean topology.
Before we proceed, let B_ϵ(a) denote the real open ball about a∈ℝ^k, of radius ϵ > 0, and let B_ϵ(a) denote its closure. Similarly, let D_ϵ(z) denote the complex open ball about z∈ℂ^k, of radius ϵ, and let D_ϵ(z) denote its closure.
Let ℓ(u)=l, then H_s^u(f) is closed in ℝ^d-s and
Π_u:V^u_s(f)∩ K_l→ H_s^u(f)
is a homeomorphism.
Note that Π_u is a bijection and a polynomial mapping, thus it is a continuous bijection. To see that the inverse map is continuous and that H_s^u(f) is closed in ℝ^d-s, we show that the image of closed sets in V^u_s(f)∩ K_l are closed in ℝ^d-s.
Let S be a closed subset of V^u_s(f)∩ K_l, then since V^u_s(f) and K_l are closed in ℝ^l, so is S. Let
ι_u:V^u_s(f)∩ K_l→ℂ^d
be the inclusion (x_1,...,x_l) ↦ x_u=(x_1,...,x_1,...,x_l,...,x_l), where x_i is repeated u_i times. Then Π_u(x) = (Π∘ι_u)(x) and clearly ι_u(S) is a closed subset of ℂ^d.
Let h= t^d+h_1t^d-1+...+h_d∉Π_u(S) have the roots a=(a_1,...,a_d) and let h_i=f_i ∀ i∈ [s]. Let ϵ>0 be such that D_ϵ(σ(a))∩ι_u(S) is empty for any σ∈([d]). If b_1,...,b_k are the distinct roots of h with respective multiplicities v_1,..,v_k, then there is a δ>0 such that any polynomial, g, of degree d, with |h_i-g_i|≤δ for all i∈ [d] has exactly v_i zeroes in D_ϵ(b_i).
The proof of this statement can be found in <cit.>. Thus, since D_ϵ(σ(a))∩ι_u(S) is empty for any σ∈([d]), the set B_δ(h)∩Π_u(S) is empty as well, and therefore Π_u(S) is closed in ℝ^d-s.
The sets V^u_s(f)∩ K_l and H_s^u(f) are contractible or empty.
Firstly, we can use Newton's identities to define V^u_s(f) in terms of the first s power sums in d variables. Then the proof that V^u_s(f)∩ K_l is contractible or empty can be found in <cit.> (Theorem 1.1). By Lemma <ref> the map Π_u : V^u_s(f)∩ K_l → H_s^u(f) is a homeomorphism, thus H_s^u(f) is contractible if it is nonempty.
To see how this proposition can be used to further describe our strata we need some more definitions. First note that as H_s^u(f) is the image of a semi algebraic set under a polynomial mapping, it is semi algebraic. Thus the dimension of the stratum is the maximum integer, n, such that H_s^u(f) contains an open set which is homeomorphic to an open set of ℝ^n (see Chapter 2.8 of <cit.>).
If H_s^u(f) is a nonempty stratum and n=(H_s^u(f)), then
* the relative interior of H_s^u(f) is the set of polynomials h∈ H_s^u(f) such that an open neighbourhood of h is homeomorphic to an open set in ℝ^n and
* the relative boundary of H_s^u(f) is the set of polynomials in H_s^u(f) which is not in the relative interior.
We can use Proposition <ref> to give a description of the relative interior and relative boundary of our strata and also determine their dimension. But the first consequence of the proposition that we need is the following:
If ℓ(u)≤ s, then H_s^u(f) contains at most one polynomial.
Suppose h∈ H_s^u(f) has the distinct roots a=(a_1,...,a_k) and composition v=(v_1,...,v_k), then k≤ℓ(u)≤ s. If k=1, then v=(d) and there is only one solution to the equation
-e_1(x_v)=-dx_1=f_1.
And so we have H_s^v(f)=H_k^v(f) = {h}. If k>1, then as previously mentioned, V_s^v(f) can be defined as
{x∈ℝ^k| p_1(x_u) = c_1,...,p_k(x_u) = c_k},
where p_i is the i^th power sum in d variables and c_1,...,c_k∈ℝ are obtained from the numbers -f_1,...,(-1)^kf_k using Newton's identities.
The map F:ℝ^k→ℝ^k, where F(x)=(p_1(x_v),...,p_k(x_v)), is a continuously differentiable function whose Jacobian matrix is (iv_jx_j^i-1)_i,j ≤ k and so its determinant is
∏_i=1^k iv_i ∏_1≤ j< r≤ k (x_j-x_r).
Since all the a_i's are distinct, the determinant is nonzero at a. Thus the Jacobian matrix is invertible and by the inverse function theorem, F is invertible on some neighbourhood U of F(a) = (c_1,...,c_k). By Proposition <ref>, V_k^v(f)∩ K_k is contractible and since a is isolated in this set, it must be the only point there. Therefore we have H_s^v(f)=H_k^v(f) = {h}.
So for any composition w≤ u, that occurs in H_s^u(f), we have that H_s^w(f) is a point. Since there are finitely many compositions smaller than or equal to u, H_s^u(f) contains finitely many points. But since H_s^u(f) is contractible it can contain at most one point.
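The Jacobian computation used in the proof can be checked symbolically. The sketch below (ours, for one small composition) verifies that det(∂p_i(x_v)/∂x_j) agrees with ∏_i i v_i ∏_{j<r}(x_j-x_r) up to an overall sign, which is all the argument needs.

import sympy as sp

v = (2, 1, 3)                                  # a composition with k = 3 parts
k = len(v)
x = sp.symbols('x1:%d' % (k + 1))

def p(i):                                      # i-th power sum of x_v
    return sum(m * xj ** i for m, xj in zip(v, x))

J = sp.Matrix([[sp.diff(p(i), xj) for xj in x] for i in range(1, k + 1)])
lhs = sp.factor(J.det())
rhs = sp.prod([i * vi for i, vi in enumerate(v, 1)]) * \
      sp.prod([x[j] - x[r] for j in range(k) for r in range(j + 1, k)])
assert sp.simplify(lhs - rhs) == 0 or sp.simplify(lhs + rhs) == 0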
We will let
P^r:ℝ^k→ℝ^k-r
denote projection that forgets the last r coordinates. This map will help us describe the relative interior.
If l=ℓ(u)> s, then the map
P^d-l:H_s^u(f) →ℝ^l-s
is a homeomorphism onto its image and the image is closed in ℝ^l-s.
Firstly we consider the case when l=1, thus u=(d) and s=0. So for any a∈ℝ we have that (t-a)^d= t^d-dat^d-1+...+(-a)^d∈ H_0^u(f). Thus P^d-1((t-a)^d) = t^d-dat^d-1 and so the map
P^d-l∘Π_u:ℝ→ℝ
is essentially just mapping a to -da. This is naturally a homeomorphism and since, by Lemma <ref>, Π_u is a homeomorphism, then so is P^d-l. Lastly, since the image of P^d-l is all of ℝ, the image is closed in ℝ.
Next suppose l≥ 2. By Lemma <ref>, the polynomials of H_s^u(h) are uniquely determined by their first l coefficients, thus P^d-l is bijection between H_s^u(f) and P^d-l(H_s^u(f)). Also, the topology on H_s^u(f) is the subspace topology of the product topology on ℝ^d-s with respect to the projections on each coordinate. Thus, by definition, the map P^d-l is continuous.
To see that the inverse is continuous and that P^d-l(H_s^u(f)) is closed we will show that the image of closed subsets of H_s^u(f) are closed in ℝ^l-s. So let S be a closed subset of H_s^u(f), then S is also a closed subset of ℝ^d-s since, by Lemma <ref>, H_s^u(f) is closed in ℝ^d-s.
Let g be a point in the closure of P^d-l(S). Then for any ϵ>0, the closed ball B_ϵ(g) meets P^d-l(S) and so the inverse image
(P^d-l)^-1(B_ϵ(g)∩ P^d-l(S)) =(B_ϵ(g)×ℝ^d-l)∩ S
is nonempty. It is also closed since B_ϵ(g)×ℝ^d-l and S are closed in ℝ^d-s.
Since Π_u is a homeomorphism, then
M=Π_u^-1((B_ϵ(g)×ℝ^d-l)∩ S)
is a closed subset of V_s^u(f)∩ K_l and since V_s^u(f)∩ K_l is closed in ℝ^l, then so is M. We also have that
M⊆Π_u^-1(S)∩{x∈ K_l| g_i-ϵ≤ (-1)^ie_i(x_u)≤ g_i+ϵ ∀ i∈ [l]}.
So if a∈ M, then e_1(a_u)≤ g_1-ϵ and e_2(a_u)≥ g_2-ϵ since l≥ 2. Thus, by Newton's identities, we have
p_2(a_u) = e_1^2(a_u)-2e_2(a_u) ≤ g_1^2-2g_1ϵ+ϵ^2-2g_2+2ϵ
and so M is bounded. Since M is closed and bounded, it is compact.
Since Π_u and P^d-l are continuous, P^d-l∘Π_u is continuous. And since the continuous image of a compact set is compact, we have that (P^d-l∘Π_u)(M) is compact. Thus
g∈ (P^d-l∘Π_u)(M) ⊆ P^d-l(S)
and so P^d-l(S) is closed in ℝ^l-s. Therefore P^d-l is a closed map and thus P^d-l is a homeomorphism. Lastly, by setting S=H_s^u(f), we see that P^d-l(H_s^u(f)) is closed in ℝ^l-s.
We see from Lemma <ref> that when ℓ(u)≥ s, the largest dimension that H_s^u(f) can have is ℓ(u)-s. Thus we say that the maximal dimension of H_s^u(f) is max{ℓ(u)-s,0}.
If H_s^u(f) contains a polynomial with composition u, then H_s^u(f) is maximal dimensional and its relative interior consists of the polynomials with composition u.
If s=0 and l=ℓ(u), the map P^d-l∘Π_u:K_l→ P^d-l(H_0^u(f)) is a homeomorphism by Lemma <ref> and Lemma <ref>. So since the dimension of K_l is l and its interior are the points with no repeated coordinates, then the dimension of P^d-l(H_0^u(f))⊆ℝ^l is l and its interior are the polynomials with composition u.
Next suppose s>0 and let A^r_f_i denote the affine hyperplane of ℝ^r defined by fixing the i^th coordinate to be equal to f_i. Then
P^d-l(H_s^u(f)) = P^d-l(H_0^u(f)∩ A^d_f_1∩ ... ∩ A^d_f_s)=
P^d-l(H_0^u(f))∩ A^l_f_1∩ ... ∩ A^l_f_s.
Thus if there is a polynomial h∈ H_s^u(f) with composition u, then P^d-l(h) lies in the interior of P^d-l(H_0^u(f)) and therefore also in the interior of P^d-l(H_s^u(f)) and P^d-l(H_s^u(f)) must be of dimension max{l-s,0}. Since P^l-s is a homeomorphism, H_s^u(f) is maximal dimensional and h is in its relative interior.
For the reverse inclusion, suppose l>s so that H_s^u(f) is at least one dimensional. If its relative interior contains a polynomial g with v(g)<u, then P^d-l(g) lies in the interior of the (l-s)-dimensional set P^d-l(H_s^u(f))⊆ℝ^l-s. Thus P^d-l(g) lies in the interior of the one dimensional set P^d-l(H_l-1^u(g)).
By Lemma <ref>, there are finitely many polynomials in H_l-1^u(g) with a smaller composition than u. Thus there are two polynomials p_- and p_+ in H^u_l-1(g), with composition u, and a δ>0 such that
P^d-l(p_-)=P^d-l(g)-δ and P^d-l(p_+)=P^d-l(g)+δ.
Since P^d-l(p_-) and P^d-l(p_+) are interior points of P^d-l(H^u_0(f)), there is an ϵ>0 such that B_ϵ(P^d-l(p_-)) and B_ϵ(P^d-l(p_+)) are contained in the interior of P^d-l(H^u_0(f)). And since P^d-l(g) is in the boundary of P^d-l(H^u_0(f)), the ball B_ϵ(P^d-l(g)) must contain a polynomial q = t^d+q_1t^d-1+...+q_lt^d-l that is not in P^d-l(H^u_0(f)).
Thus A^l_q_1∩ ...∩ A^l_q_l-1 is a line that passes through q and the two balls B_ϵ(P^d-l(p_-)) and B_ϵ(P^d-l(p_+)). But if q separates the nonempty sets
B_ϵ(P^d-l(p_-))∩ A^l_q_1∩ ...∩ A^l_q_l-1
and
B_ϵ(P^d-l(p_+))∩ A^l_q_1∩ ...∩ A^l_q_l-1,
then
P^d-l(H^u_0(f))∩ A^l_q_1∩ ...∩ A^l_q_l-1=P^d-l(H^u_l-1(p_+))⊂ℝ
is nonempty but not contractible. This contradicts Proposition <ref> and g can therefore not be in the relative interior of H_s^u(f).
Thus, if ℓ(u)>s, the stratum H_s^u(f) is of maximal dimension if and only if it contains a polynomial with composition u. We can use this observation to determine all the possibilities for the dimension of H_s^u(f). It is worth mentioning that a similar observation can be found in <cit.>, Proposition 5.
If ℓ(u)>s and H_s^u(f) contains a polynomial with at least s distinct roots, then H_s^u(f) is maximal dimensional. If not, then H_s^u(f) is either empty or a single polynomial.
If s=0, then any composition occurs and thus by Theorem <ref>, any stratum is maximal dimensional. Similarly, if s=1, then for any composition u, the polynomial -e_1(x_u)-f_1 has a real zero with l=ℓ(u) distinct coordinates ordered increasingly. To see this pick l real numbers a_1,...,a_l such that a_1/u_1<....<a_l/u_l and let
a=-f_1/∑_i a_i(a_1/u_1,...,a_l/u_l),
then
-e_1(a_u) = ∑_j u_jf_1/∑_i a_ia_j/u_j = f_1/∑_i a_i∑_ja_j = f_1.
Thus the composition u occurs and H_s^u(f) is of maximal dimension.
Next we suppose s≥ 2. If ℓ(u)≤ s or H_s^u(f) does not contain a polynomial with at least s distinct roots, then H_s^u(f) = ∪_w≤ u|ℓ(w)=s-1H_s^w(f). By Lemma <ref>, H_s^w(f) contains at most one point when ℓ(w)= s-1. Since there are finitely many compositions of length s-1, then H_s^u(f) contains finitely many polynomials and since H_s^u(f) is contractible it contains at most one polynomial.
So suppose h∈ H_s^u(f) has k≥ s distinct roots and v(h)< u. Let v≤ u be a composition that covers v(h). Then ℓ(v) = k+1 and so by Proposition <ref>, H_k^v(h) is at most one dimensional. Since v(h)<v we can write h as ∏_i=1^k+1 (t-a_i)^v_i and without loss of generality we may assume that a_1<...<a_k=a_k+1.
As in the proof of Lemma <ref>, if we define V_k^v(f) as {x∈ℝ^k+1| p_1(x_u) = c_1,...,p_k(x_u) = c_k}, then the Jacobian matrix of the defining polynomials is (iv_jx_j^i-1)_i≤ k,j≤ k+1 and the determinant of the leftmost k× k submatrix is
∏_i=1^k iv_i ∏_1≤ j< r≤ k (x_j-x_r).
Since the first k coordinates of a=(a_1,...,a_k+1) are distinct, the determinant does not vanish at a∈ V_k^v(h). So by Proposition 3.3.10 in <cit.>, a is a nonsingular point of a one dimensional irreducible component, V, of V_k^v(h). Thus a lies in an open neighbourhood U of V where U is a one dimensional manifold.
By Lemma <ref>, the one dimensional manifold U only intersects the hyperplane H={x∈ℝ^k+1|x_k=x_k+1} once. So U must meet the open halfspace H^+:={x∈ℝ^k+1|x_k<x_k+1} and thus there is a point in V_k^v(h) with no repeated coordinates. So H_k^v(h)⊆ H_s^u(f) contains a polynomial, g, with composition v.
We can apply the same argument to the polynomial g if v<u, and keep doing this inductively until we find a polynomial with composition u. Then by Theorem <ref>, H_s^u(f) is maximal dimensional.
Any stratum equals the closure of its relative interior.
By Theorem <ref> we may suppose H_s^u(f) is maximal dimensional and at least one dimensional. We will prove the statement by induction on the dimension of the strata. If H_s^u(f) is one dimensional, then it is connected by Proposition <ref>. Thus it contains at most two relative boundary points and any open ball about a point of H_s^u(f) must contain infinitely many points from the relative interior. Thus H_s^u(f) is the closure of its relative interior.
Now suppose the statement is true for all (n-1)-dimensional strata, where n-1≥ 1. Suppose H_s^u(f) is n-dimensional and h∈ H_s^u(f). By Proposition <ref>, H_s^u(f) is connected, so for any ϵ>0, B_ϵ(h) must contain infinitely many polynomials from H_s^u(f). Since there are finitely many polynomials in H_s^u(f) with at most s distinct roots, there is a g∈ B_ϵ(h)∩ H_s^u(f) with at least s+1 distinct roots.
Then by Theorem <ref>, H_s+1^u(g) is (n-1)-dimensional and by the inductive hypothesis, g is in the closure of its relative interior. So by Theorem <ref>, any open ball about g contains a polynomial with composition u. Thus B_ϵ(h) also contains a polynomial with composition u and so h is in the closure of the relative interior of H_s^u(f).
§ COMBINATORIAL STRUCTURE
In this section we explore the combinatorial structure of the lattice of strata and show that this lattice is graded, atomic and coatomic. Then we use this to determine which compositions actually occur in a given starved polytope. Due to Theorem <ref> we may assume H_s(f) is of dimension d-s>0 for this section since the one-point lattice trivially satisfies the main combinatorial properties that follow. We begin with the notion of a graded poset.
A totally ordered subset of a poset is a chain and if a chain is maximal with respect to inclusion it is a maximal chain. A poset in which every maximal chain has the same length is called graded.
To see why we call such a poset graded: let y_0<...<y_l and z_0<...<z_l be two maximal chains of a graded poset L where y_i=z_j, for some i and j. Then we have i=j, otherwise y_0<y_1<...<y_i=z_j<z_j+1<...<z_l is a maximal chain which is not of length l+1 contradicting the gradedness of L. Thus the rank of y_i, (y_i):=i, is well defined and we can write the poset as the disjoint union L=⊔_k≥ 0 L(k), where the elements of L(k) are the rank k elements.
If s≥ 2 and H^u_s(f) is at least one dimensional, it is compact and its lattice of strata contains the empty stratum.
As observed in <cit.> Proposition 4.1, a stratum of H_s(f) is compact when s≥ 2 since we can rewrite the s first elementary symmetric polynomials in terms of the s first power sums and the second power sum defines a sphere.
For the second statement, note that when s=2, the equations
p_1(x) = c_1 and p_2(x) = c_2
define a nonempty intersection of a hyperplane, whose normal vector is (1,1,...,1), and a (d-1)-sphere. This intersection can either be a (d-2)-sphere in the hyperplane or it can be one of the two points ±√(c_2/d)(1,1,...,1).
That is, if v=(d) and the stratum H_s^v(f) is nonempty, then there can be no other point in H_s(f) since there is no other point in H_2(f). So since H_s(f) is at least one dimensional we have H_s^v(f)=∅, and since v is smaller than any composition u, the lattice of strata of H^u_s(f) contains the empty stratum.
For s=0 and s=1, the starved polytopes will have all the main combinatorial properties we are establishing. But the argument is different from the other cases, so we will restrict s to be at least 2 for now. This allows us to use Lemma <ref>, which will be very helpful. Before we proceed note that we use the convention that the empty set has dimension -1.
If s≥ 2, the lattice of strata of H_s(f) is graded and the rank of a stratum is one more than its dimension.
Suppose a stratum H_s^u(f) is strictly contained in H_s^v(f), then by Theorem <ref>, (H_s^u(f))<(H_s^v(f)). Also, by Lemma <ref>, the lattice of strata contains the empty set as its minimal element. Thus any maximal chain in the lattice of strata has length at most (H_s(f))+1=d-s+1.
Conversely, suppose we have that H_s^u(f) is strictly contained in H_s^v(f), where (H_s^u(f))<(H_s^v(f))-1, let us show that there is a stratum, H_s^w(f), with H_s^u(f)⊊ H_s^w(f)⊊ H_s^v(f).
If H_s^u(f) is empty, H_s^v(f) is at least one dimensional and by Theorem <ref> its relative interior are the polynomials with composition v. But by Lemma <ref>, it is compact and so it must contain a nonempty stratum, H_s^w(f), in its relative boundary. Then by Lemma <ref>, both H_s^v(f) and H_s^w(f) contains the empty stratum H_s^u(f).
If h∈ H_s^u(f) and there are no polynomials in H_s^u(f) with at least s distinct roots, then by Theorem <ref>, H_s^u(f) ={h}. Since H_s^u(f) is zero dimensional, H_s^v(f) is at least two dimensional and by Lemma <ref> and Proposition <ref>, it is compact and contractible. Thus its relative boundary is at least one dimensional and connected.
If H_s^u(f) is not contained in a larger stratum of H_s^v(f), it must be an isolated part of the relative boundary of H_s^v(f). But since the relative boundary is one dimensional and connected, H_s^u(f) must be the whole boundary. But this is impossible since H_s^u(f) is zero dimensional, thus there is a stratum H_s^w(f) which is strictly contained in H_s^v(f) and that strictly contains H_s^u(f).
Lastly, if there is an h∈ H_s^u(f) with at least s distinct roots, then we may assume u=v(h) by Theorem <ref>. Also, we then have that all compositions greater than u occur in H_s(f). Since
ℓ(u)-s=(H_s^u(f))<(H_s^v(f))-1=ℓ(v)-s-1,
then ℓ(u)<ℓ(v)-1 and so there is a composition w, with u<w<v. By Theorem <ref>, H_s^w(f) is of dimension (ℓ(w)-s) and therefore H_s^u(f)⊊ H_s^w(f)⊊ H_s^v(f).
Thus any maximal chain will be at least of length d-s+1 and so any maximal chain has length d-s+1. Also, by the above argument any stratum of dimension n≥ 0 covers a stratum of dimension n-1, thus its rank must be n+1.
Next up is the notion of atomic lattices.
In a lattice with a smallest element, 0, the elements covering 0 are called atoms. The lattice is called atomic if any element can be expressed as the join of atoms.
If s≥ 2, then any n-dimensional stratum, where n>0, contains at least two distinct (n-1)-dimensional strata.
By Proposition <ref>, an n-dimensional stratum, H_s^u(f), contains an (n-1)-dimensional stratum H_s^v(f). Also, by Lemma <ref>, a nonempty stratum H_s^u(f) is compact and so its relative boundary is nonempty but not contractible. But by Proposition <ref>, H_s^v(f) is contractible and so it cannot be the whole relative boundary of H_s^u(f).
Also, since H_s^v(f) is closed, the complement of H_s^v(f) in the relative boundary of H_s^u(f) is relatively open in the relative boundary with the subspace topology. Thus this complement is (n-1)-dimensional and so there must be another (n-1)-dimensional stratum in H_s^u(f).
If s≥ 2, the lattice of strata of H_s(f) is atomic.
By convention the empty set is the join of an empty set of atoms and an atom is naturally the join of itself. Also, by Proposition <ref>, the lattice is graded and a stratum's rank is its dimension plus one, so the atoms are the zero dimensional strata.
If H_s^u(f) is an n-dimensional stratum, where n>0, then by Lemma <ref>, there are two distinct (n-1)-dimensional strata, H_s^v(f) and H_s^w(f), contained in H_s^u(f). Since H_s^v(f) and H_s^w(f) are distinct, by Proposition <ref>, any stratum that contains both must be at least n-dimensional. Since H_s^u(f) is n-dimensional and contains both H_s^v(f) and H_s^w(f), it must be the join of H_s^v(f) and H_s^w(f). By induction, both H_s^v(f) and H_s^w(f) are joins of atoms and since H_s^u(f) is the join of H_s^v(f) and H_s^w(f), it must also be a join of atoms.
Lastly, we look at a sort of converse of atomic lattices called coatomic lattices.
In a lattice with a largest element, 1, the elements covered by 1 are called coatoms. The lattice is called coatomic if any element can be expressed as the meet of coatoms.
If s≥ 2, then any n-dimensional stratum, where n<d-s-1, is contained in at least two distinct (n+1)-dimensional strata.
Let H_s^u(f) be an n-dimensional stratum. If n>0, by Theorem <ref>, it must be maximal dimensional. Thus there is a polynomial with composition u and since ℓ(u)-s=n<d-s-1, then ℓ(u)<d-1, so u is covered by at least two distinct compositions, v and w, of length ℓ(u)+1. So again by Theorem <ref>, H^v_s(f) and H^w_s(f) must be (n+1)-dimensional.
If n=0, then H_s^u(f)={h} and since d-s>1, the set H_s(f) is at least two dimensional. By Proposition <ref>, h is contained in a two dimensional stratum, H_s^w(f), and a one dimensional stratum, H_s^v(f)⊂ H_s^w(f). By Theorem <ref>, H_s^v(f) is in the one dimensional relative boundary of H_s^w(f) and h is in the relative boundary of H_s^v(f).
According to Corollary <ref>, H_s^w(f) is the closure of its relative interior, so there is a connected component, C, of the relative interior of H_s^w(f) such that h is in its closure. Thus, starting from h, we can traverse the boundary of this component clockwise or counter-clockwise. But since h is one of the relative boundary points of H_s^v(f), at most one of the directions consists immediately of polynomials whose composition is v. Thus there must be some other one dimensional stratum for which h is a boundary point.
Lastly, if n=-1, then H_s^u(f) is empty. Since H_s(f) is at least one dimensional and, by Proposition <ref>, the lattice of strata is atomic, it must contain at least two atoms. Thus the empty stratum is contained in at least two zero dimensional strata.
If s≥ 2, the lattice of strata of H_s(f) is coatomic.
The argument is analogous to the proof of Proposition <ref>, just start the induction from the (d-s-1)-dimensional strata and use Lemma <ref> instead of Lemma <ref> for the induction step.
The lattice of strata of H_s(f) is graded, atomic and coatomic.
The only cases that remain to be checked are those with s≤ 1. As we saw in the proof of Theorem <ref>, all compositions occur when s≤ 1 so the lattice is isomorphic to the lattice of compositions. The lattice of compositions is isomorphic to the face lattice of a (d-2)-dimensional simplex, S. To see this, note that any k+1 vertices in S determine a k-dimensional face of S.
Similarly there are d-1 compositions of d of length 2. So if v^i_1,...,v^i_k+1 are k+1 distinct compositions of length 2, then their first parts are distinct. We may assume they are ordered so that v^i_1_1<...<v^i_k+1_1 and so, by the argument in Lemma <ref>, their join is
v^i_1∨ ... ∨ v^i_k+1 = (v^i_1_1,v^i_2_1-v^i_1_1,v^i_3_1-v^i_2_1,...,v^i_k+1_1-v^i_k_1,d-v^i_k+1_1),
which is a composition of length k+2. Thus any bijection from the set of length 2 compositions to the set of vertices of S induces an isomorphism of lattices. Lastly, by point (i) and (v) of Theorem 2.7 in <cit.>, the face lattice of a simplex is graded, atomic and coatomic.
Similar to the cases when s≤ 1, when s=2 it can be shown that the lattice of strata is isomorphic to the face lattice of a simplex. However, this need not be true in general, as can be seen by intersecting H_2(f), from Example <ref>, with the affine hyperplane defined by fixing the third coefficient to be 0. This gives us a lattice which is isomorphic to the face lattice of a quadrilateral.
Thus there are examples for which the lattice of strata is not isomorphic to the face lattice of a simplex, however it would be interesting to find an answer to the following question for general d and s.
Is the lattice of strata of a starved polytope polytopal?
We finish this section with an algorithm to compute which compositions occur in H_s(f) based on the compositions of length at most s that occur.
Let w∈ W, then w≥ v∨ u for some v,u∈ U. Thus both v and u occur in H_s(f) and so H_s^w(f) contains at least two polynomials. So by Theorem <ref>, H_s^w(f) is maximal dimensional and therefore there is a polynomial with composition w. Thus all compositions computed in the algorithm occur in H_s(f).
Suppose a composition u, with ℓ(u)>s, occurs in H_s(f). Then by Theorem <ref>, H_s^u(f) is at least one dimensional and by Theorem <ref>, H_s^u(f) is the join of at least two distinct atoms H_s^v(f)={h} and H_s^w(f)={g}. We may assume v and w are the compositions of h and g respectively. Then v∨ w ∈ V and v∨ w ≤ u, thus u∈ W and it was not left out by the algorithm.
Step 1 in Algorithm <ref> can be accomplished using the method described in Lemma <ref>. That is, the join of u and v can be computed by first constructing the set
M={u_1,u_1+u_2,...,d,v_1,v_1+v_2,...,d}.
Next, construct the tuple (m_1,....,m_k), where the m_i's are distinct, increasingly ordered and {m_1,...,m_k}=M. Then the join of u and v is the composition
(m_1,m_2-m_1,m_3-m_2,...,m_l-m_l-1).
However, compositions and our partial order are both implemented in Sage (see <cit.>), so the algorithm can easily be implemented there.
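Since Algorithm <ref> itself is a float that is not reproduced in this text, the sketch below reconstructs, in plain Python rather than Sage, the procedure suggested by the correctness proof above; the set names U, V, W follow that proof and the toy input is our own.

from itertools import accumulate, combinations

def join(u, v):
    m = sorted(set(accumulate(u)) | set(accumulate(v)))
    return tuple(b - a for a, b in zip([0] + m, m))

def all_compositions(d):
    """All 2^(d-1) compositions of d, via subsets of cut points."""
    comps = []
    for mask in range(2 ** (d - 1)):
        cuts = [i + 1 for i in range(d - 1) if mask >> i & 1] + [d]
        comps.append(tuple(b - a for a, b in zip([0] + cuts, cuts)))
    return comps

def occurring(U, d):
    """U: the occurring compositions of length at most s; returns all
    compositions occurring in H_s(f) (U itself plus the upward closure W)."""
    V = {join(u, v) for u, v in combinations(U, 2)}
    W = {w for w in all_compositions(d)
         if any(set(accumulate(v)) <= set(accumulate(w)) for v in V)}
    return set(U) | W

# Toy input: suppose only (5) and (2, 3) occur among compositions of length <= 2.
print(sorted(occurring([(5,), (2, 3)], 5), key=len))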
As in <cit.>, <cit.> and <cit.> we could have focused on the intersection of Vandermonde varieties and K. That is, we could consider the set
M={x∈ℝ^d|w_1x_1^i+w_2x_2^i+...+w_dx_d^i=c_i ∀ i∈ [s]}∩ K
where w_1,...,w_d∈ℝ are positive and c_1,...,c_s∈ℝ. If x∈ M is of the form x_1=...=x_v_1<x_v_1+1=...=x_v_1+v_2<...<x_d-v_l+1=...=x_d, we associate to it the composition v(x)=(v_1,v_2,...,v_l) and for a composition u we may define a stratum of M as M^u={y∈ M|v(y)≤ u}.
When the weights w_1,...,w_d are integers (and by extension rational) then M^u is equal to ι_u (V_s^u(f)∩ K_l) (see the proof of Lemma <ref>) for some monic, univariate hyperbolic polynomial f of degree d. However, if the weights are irrational we do not see how to interpret the set M and so we did not consider such cases. But it can be shown that for any positive real weights Theorem <ref> and Theorem <ref> also hold for M^u and that Theorem <ref> holds for the poset of strata of M.
As we lack the motivation to consider such cases we do not go into the details. But the main difference in the arguments would be to use the map W^l:M→ℝ^l given by
x↦(∑_j=1^dw_jx_j,...,∑_j=1^dw_jx_j^l),
instead of using the map P^d-l∘Π_u in the proof of Theorem <ref>, otherwise the arguments follow through the same way.
§ A SHORT NOTE ON STARVATION
We have seen that the lattice of strata of a starved polytope has many properties similar to the face lattices of polytopes, so now we discuss some results on the boundary of starved polytopes that explains the description "starved". To do this we will quickly introduce discriminants and subdiscriminants, but more information on these objects can be found in <cit.> and in Chapter 4 of <cit.>.
Let Δ_d denote the discriminant of a real, monic, univariate polynomial of degree d. This is a real polynomial in the coefficients of the degree d univariate polynomial, and it vanishes when that polynomial has a repeated root.
Let Z(Δ_d)⊆ℝ^d denote the real algebraic set given by the discriminant. Then the points of Z(Δ_d) that correspond to polynomials with a repeated real root split the space of real coefficients into ⌊ d/2⌋ regions, each of which is characterised by the number of real roots that the polynomials have.
Similarly, we let Δ_d,k denote the k^th subdiscriminant of a real, monic, univariate polynomial of degree d. This is also a real polynomial in the coefficients of the univariate polynomial and the k first subdiscriminants vanish when the corresponding univariate polynomial has at most d-k distinct roots.
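In terms of the roots, the k-th subdiscriminant can be written, up to a normalising constant which we ignore, as the sum over all (d-k)-element subsets of the roots of the product of their squared differences. The sketch below (our own) makes the vanishing behaviour described above concrete.

from itertools import combinations
from math import prod

def subdisc(roots, k):
    """k-th subdiscriminant of the monic polynomial with the given roots,
    up to a normalising constant."""
    d = len(roots)
    return sum(prod((roots[i] - roots[j]) ** 2 for i, j in combinations(I, 2))
               for I in combinations(range(d), d - k))

print([subdisc([1, 1, 2, 5], k) for k in range(3)])   # [0, 288, 43]: three distinct roots
print([subdisc([1, 1, 1, 5], k) for k in range(3)])   # [0, 0, 48]: two distinct roots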
So if h=t^d+h_1t^d-1+...+h_d lies in the real algebraic set given by the first k subdiscriminants, Z(Δ_d,0,...,Δ_d,k)⊆ℝ^d, then h has at most d-k distinct roots. If h has exactly d-k distinct roots then it was shown in Proposition 1.3.4 of <cit.> that the tangent space of Z(Δ_d,0,...,Δ_d,k) at h lies, in a neighbourhood of h, in the region which locally about h has the maximal number of real roots.
That is, let T_h be the tangent space of Z(Δ_d,0,...,Δ_d,k) at h, where we consider the tangent space as having been translated to h and not centred at the origin. Then there is a neighbourhood N⊂ℝ^d, of h, such that T_h∩ N consists of polynomials that have exactly n real roots and no other polynomial in N have more than n real roots.
Naturally the strata of H_s(f) labelled by compositions of length d-k, are subsets of Z(Δ_d,0,...,Δ_d,k). Now suppose h is hyperbolic and has composition u. Then if the tangent space of H_s^u(h) at h is well defined, it must be included in T_h∩ A_h_1∩ ... ∩ A_h_s, where A_h_i⊂ℝ^d is the affine hyperplane defined by fixing the i^th coordinate to be equal to h_i.
The maximal number of real roots of a polynomial in N is d since h is hyperbolic, thus T_h∩ N ∩ A_h_1∩ ... ∩ A_h_s⊂ H_s(h) and so the tangent space of H_s^u(h) at h, intersected with N, must also lie in H_s(h). Thus the strata of H_s(h) are, in a sense, concave towards H_s(h) and so in combination with Theorem <ref> the description "starved polytope" is fairly well justified.
Disclaimer. No polytopes were starved in the making of this article and the author does not condone the starving of polytopes or any other mathematical objects.
§ INDEX OF NOTATION
- [s]={1,2,....,s}
- f = t^d+f_1t^d-1+...+f_d
- H_s(f) = {h = t^d+h_1t^d-1+...+h_d|h is hyperbolic and h_i =f_i ∀ i∈ [s]}
- v(h) denotes the composition of h
- H_s^u(f) = {h∈ H_s(f)|v(h) ≤ u}
- ℓ(u) denotes the length of the composition u
- For x∈ℝ^ℓ(u), x_u = (x_1,...,x_1,...,x_ℓ(u),...,x_ℓ(u)), where x_i is repeated u_i times.
- e_i(x) is the i^th elementary symmetric polynomial in d variables.
- p_i(x) is the i^th power sum in d variables.
- V^u_s(f)={x∈ℝ^l|-e_1(x_u)-f_1=0, ..., (-1)^se_s(x_u)-f_s=0}
- K_l={x∈ℝ^l|x_1≤ ... ≤ x_l}
- For x∈ℝ^l, Π_u(x) = t^d -e_1(x_u)t^d-1+...+(-1)^de_d(x_u)
- B_ϵ(a) is the open ball around a∈ℝ^n of radius ϵ.
- u∨ v denotes the join and u∧ v denotes the meet of two elements, u and v, of a lattice.
riener2022linear
meguerditchian1992theorem
dedieu1992obreschkoff
chevallier1986courbes
kostov2011topics
stanley2011enumerative
|
http://arxiv.org/abs/2307.02212v1
|
20230705113850
|
Electric Polarization from Many-Body Neural Network Ansatz
|
[
"Xiang Li",
"Yubing Qian",
"Ji Chen"
] |
physics.chem-ph
|
[
"physics.chem-ph",
"cond-mat.dis-nn",
"cond-mat.mtrl-sci",
"physics.comp-ph"
] |
[email protected]
ByteDance Research, Zhonghang Plaza, No. 43, North 3rd Ring West Road, Haidian District, Beijing.
X. L. and Y. Q. contributed equally to this work.
ByteDance Research, Zhonghang Plaza, No. 43, North 3rd Ring West Road, Haidian District, Beijing.
School of Physics, Peking University, Beijing 100871, People’s Republic of China
[email protected]
School of Physics, Peking University, Beijing 100871, People’s Republic of China
Interdisciplinary Institute of Light-Element Quantum Materials, Frontiers
Science Center for Nano-Optoelectronics, Peking University, Beijing 100871, People’s Republic of China
Ab initio calculation of dielectric response with high-accuracy electronic structure methods is a long-standing problem, for which mean-field approaches are widely used and electron correlations are mostly treated via approximate functionals.
Here we employ a neural network wavefunction ansatz combined with quantum Monte Carlo to incorporate correlations into polarization calculations.
On a variety of systems, including isolated atoms, one-dimensional chains, two-dimensional slabs, and three-dimensional cubes,
the calculated results outperform conventional density functional theory and are consistent with the most accurate calculations and experimental data.
Furthermore, we have studied the out-of-plane dielectric constant of bilayer graphene using our method and re-established its thickness dependence.
Overall, this approach provides a powerful tool to consider electron correlation in the modern theory of polarization.
Electric Polarization from Many-Body Neural Network Ansatz
Ji Chen
August 1, 2023
==========================================================
Electric polarization plays a crucial role in electromagnetic phenomena such as ferroelectricity and piezoelectricity.
Despite its significance, a proper microscopic definition of polarization was only formulated in the 1990s
<cit.>, which revealed the hidden relation between physical polarization and the Berry phase of solid systems.
This theoretical advance leads to successful calculations of the dielectric response of solid materials from first principles <cit.>, which is critical in several fields of condensed matter physics, such as the ferroelectric and topological materials <cit.>.
However, the underlying electronic structure methods are mostly mean-field approaches such as density functional theory (DFT) <cit.>, which has its own limitations since the result depends heavily on the so-called exchange-correlation functional.
Exchange-correlation functionals cannot fully account for the exact correlation effects of electrons.
In particular, widely used semi-local functionals often severely overestimate the electric susceptibility <cit.>.
Although correlated wavefunction methods, such as coupled-cluster theory
can also be employed to calculate polarization <cit.>, their high computational complexity hinders their application in solid systems.
Furthermore, most of these correlated electronic structure methods are limited in open boundary conditions (OBC) for polarization calculations, which leads to slow convergence and heavy computational costs towards the thermodynamic limit (TDL), see Fig. <ref> for a summary of the state-of-the-art methods in polarization calculations.
In addition to the conventional deterministic electronic structure methods mentioned above, quantum Monte Carlo (QMC) methods are also widely adopted for electronic structure calculations, showing favorable computational scaling and high accuracy <cit.>.
Pioneering works to study electric susceptibility
using QMC have been reported <cit.>, in which traditional Slater-Jastrow type wavefunctions are combined with diffusion Monte Carlo (DMC) to study the polarization of hydrogen chains under periodic boundary conditions (PBC).
The main difficulty for DMC is to write down the local self-consistent Hamiltonian under a finite electric field and run calculations iteratively.
Despite the promising results on hydrogen chain <cit.>,
there are still grand challenges: multiple loops of DMC are needed for the self-consistent procedure, a complex forward walking strategy is required for evaluating the polarization, and the quality of trial wavefunction affects the accuracy of DMC.
Therefore, it is desirable to develop more accurate and efficient approaches to calculate the electric polarization of solid systems.
In recent years, there has been significant progress in the application of neural network in the electronic structure community. Neural network wavefunction ansatz combined with QMC simulations has demonstrated higher accuracy with lower computational complexity than conventional high-order wavefunction methods <cit.>.
The expressiveness of neural networks overcomes the main bottleneck of traditional wavefunction ansatz in QMC,
making the approach a competitive option for state-of-the-art electronic structure calculation.
So far, the neural network QMC calculations have shown great power in treating spin systems <cit.>, molecules <cit.>, periodic models <cit.>, and real solids <cit.>.
In this work, we extend the neural network QMC calculation to the electric polarization of solid systems.
Specifically, we employ a recently developed solid neural network, dubbed DeepSolid <cit.>, in conjunction with variational Monte Carlo (VMC).
Antithetic sampling is developed for efficient computation of the Berry phase and thus the electric polarization.
Our approach has been tested on a diverse range of systems, including isolated atoms, one-dimensional chains, two-dimensional slabs, three-dimensional cubes, and bilayer graphene. The results demonstrate clear advantages of our approach over traditional methods.
To introduce our methodology, let us consider a crystal system under a finite electric field 𝐄, the enthalpy of this system is formulated below <cit.>
F[ψ]=⟨ψ|Ĥ_S|ψ⟩/⟨ψ|ψ⟩-Ω_S𝐄·𝐏[ψ] ,
where Ĥ_S denotes the supercell Hamiltonian in the absence of electric field 𝐄, and Ω_S is the supercell volume. The term -Ω_S𝐄·𝐏 represents the interaction between electric polarization density 𝐏 and electric field. However, a proper microscopic definition of 𝐏[ψ] remains absent for decades since the ordinary position operator 𝐫̂ violates the periodic boundary condition. This problem was finally solved after recognizing the polarization as the Berry phase in the Brillouin zone, according to which the polarization can be extracted from a general wavefunction ψ as follows <cit.>
𝐏[ψ]=-1/Ω_S∑_i𝐚_i/2π Imln⟨ψ|Û_i|ψ⟩/⟨ψ|ψ⟩ ,
Û_i=exp[𝐢𝐛_i·(∑_e𝐫̂_e-∑_I Z_I𝐑_I)] ,
where 𝐚_i,𝐛_i denote lattice and reciprocal lattice vectors of the supercell. Û_i serves as a periodic generalization of the position operator 𝐫̂ in solid systems and Im ln is used to extract the Berry phase within it. Note that Û_i is an intrinsic many-body operator which includes all the electron coordinates in the exponent. A charge-weighted sum of ion coordinates Z_I𝐑_I is also included to achieve translation invariance of polarization.
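To make the formula concrete, a minimal sketch (ours, not the DeepSolid implementation) of how the polarization is extracted from Monte Carlo estimates of ⟨Û_i⟩ is given below; note that Im ln is only defined modulo 2π, reflecting the familiar quantum of polarization.

import numpy as np

def U_value(b_i, r_elec, Z_ion, R_ion):
    """U_i evaluated on one electron configuration r_elec (shape [n_e, 3])."""
    disp = r_elec.sum(axis=0) - (Z_ion[:, None] * R_ion).sum(axis=0)
    return np.exp(1j * (b_i @ disp))

def polarization(U_means, a_vecs, volume):
    """P = -(1/Omega_S) * sum_i a_i/(2*pi) * Im ln <U_i>, with Im ln z = arg z."""
    P = np.zeros(3)
    for a_i, u in zip(a_vecs, U_means):      # a_vecs: supercell lattice vectors
        P -= a_i * np.angle(u) / (2.0 * np.pi * volume)
    return P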
With the enthalpy functional formulated above, traditional methods usually start with a Hartree-Fock (HF) ansatz, which is typically expressed as follows
ψ_ HF(𝐫)= Det[e^𝐢𝐤_i·𝐫_j
u_𝐤_i(𝐫_j)] .
Electrons are treated independently of each other with a mean-field interaction in Eq. (<ref>), simplifying quantum many-body problems but also deviating from the ground truth.
To fully treat the electron correlation effects, we employ a correlated neural network wavefunction ψ_ net from DeepSolid <cit.>, whose general form reads
ψ_ net(𝐫)= Det[e^𝐢𝐤_i·𝐫_j
u_𝐤_i(𝐫_j;𝐫_≠ j)],
where 𝐫_≠ j denotes all the electron coordinates except 𝐫_j. Eq. (<ref>) resembles the form of the traditional Bloch function, while cell-periodic functions u_𝐤 are now represented using deep neural networks that rely on all electrons to accommodate electron correlations <cit.>.
Electron features 𝐫_i are converted to be periodic and permutation equivariant before being fed into neural networks, and complex-valued orbitals u_𝐤 are constructed with a pair of neural networks outputting the real and imaginary part respectively. As a result, Fermionic anti-symmetry, periodicity, and complex-valued nature are all encoded in our network, promoting it to be a legitimate and expressive ansatz for solid. See Ref. <cit.> for more details of the architecture.
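Purely as an illustration of the structure of the ansatz above (and not of the actual DeepSolid network, whose orbitals are deep neural networks), the log of such a complex Bloch-type determinant can be written as follows, with a user-supplied placeholder u standing in for the network output.

import numpy as np

def log_psi(r, kvecs, u):
    """r: [n, 3] electron positions; kvecs: [n, 3] k-vectors;
    u(k, j, r) -> complex value of the cell-periodic orbital for electron j,
    allowed to depend on all electron positions r (the correlation)."""
    n = len(r)
    mat = np.empty((n, n), dtype=complex)
    for i, k in enumerate(kvecs):
        for j in range(n):
            mat[i, j] = np.exp(1j * (k @ r[j])) * u(k, j, r)
    sign, logabs = np.linalg.slogdet(mat)
    return logabs + np.log(sign)      # complex logarithm of the determinant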
Using the neural network we have constructed, the enthalpy functional outlined in Eq. (<ref>) can be efficiently minimized through variational Monte Carlo, allowing for gradual convergence to the ground truth. However, significant fluctuations are observed in the evaluation of Û_i, which seriously impede optimization. As a solution, antithetic sampling is employed in the Monte Carlo evaluation, which reads
⟨Û_i⟩=
∫ d𝐫 |ψ(𝐫)|^2 U_i(𝐫)/∫ d𝐫 |ψ(𝐫)|^2⇒∫ d𝐫 |ψ(𝐫)|^2 Ũ_i(𝐫)/∫ d𝐫 |ψ(𝐫)|^2,
Ũ_i(𝐫)=1/2[U_i(𝐫)+|ψ(-𝐫)|^2/|ψ(𝐫)|^2U_i(-𝐫)].
It is worth noting that centrosymmetric cells are assumed in Eq. (<ref>) and the fluctuations can be significantly reduced through the cancellation between U_i(𝐫) and its inverted image U_i(-𝐫).
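A literal transcription of this estimator (our own sketch; psi and U_i are placeholder callables, and the configurations are assumed to be drawn from |ψ|²) reads as follows.

import numpy as np

def antithetic_mean_U(samples, psi, U_i):
    """Antithetic estimate of <U_i> over configurations sampled from |psi|^2,
    assuming a centrosymmetric simulation cell."""
    vals = []
    for r in samples:
        w = abs(psi(-r)) ** 2 / abs(psi(r)) ** 2
        vals.append(0.5 * (U_i(r) + w * U_i(-r)))
    return np.mean(vals)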
To further improve efficiency, we have employed a Kronecker-factored curvature estimator (KFAC) optimizer <cit.>, which effectively integrates second-order information into the optimization process, surpassing traditional optimizers.
See the Supplementary Material for more computational details, and the code of this work is developed at the open-source repository of DeepSolid [<https://github.com/bytedance/DeepSolid>].
Isolated atoms are the first systems selected for direct comparison with the most accurate methods and experimental data.
In our calculations, we place a single atom in a large enough box to eliminate periodic image interactions. The calculated polarizability α, which measures the linear response of the dipole moment to the applied field and is related in a somewhat subtle way to the bulk susceptibility χ (see Supplementary Material), is shown in Tab. <ref>.
Results from DFT with the B3LYP functional, HF, and CCSD(T) under OBC are also listed for comparison.
As can be seen from Tab. <ref>, although B3LYP is a widely trusted functional belonging to the fourth rung of the so-called Jacob's ladder of DFT, it consistently deviates from the ground truth and has a relatively large mean absolute error (MAE).
The behavior of DFT is due to the inaccuracy in treating the exchange-correlation effects, which can be very different for energy and polarization calculations.
In HF calculations, because of the explicit treatment of non-local exchange, deviations in polarization are significantly reduced.
CCSD(T) is the coupled cluster theory with single, double, and perturbative triple excitations, and is considered a very accurate method in the literature. It further incorporates correlation effects on top of HF wavefunction and achieves smaller MAE than HF results.
Overall, DeepSolid results are even more accurate than CCSD(T), showing that the exchange-correlation treatments in our neural network are accurate and reliable for polarization calculations.
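For reference, the finite-field route to an atomic polarizability used in such comparisons can be summarized in a couple of lines (a generic sketch with our own sign conventions in atomic units, not the workflow of any particular code): compute the dipole moment at fields ±E and take a central difference.

import numpy as np

def dipole(r_samples, Z_ion, R_ion):
    """Average dipole moment: ionic charges minus sampled electron positions.
    r_samples has shape [n_samples, n_electrons, 3]."""
    return (Z_ion[:, None] * R_ion).sum(axis=0) - r_samples.sum(axis=1).mean(axis=0)

def polarizability(mu_plus, mu_minus, E):
    """Central-difference estimate of alpha = d(mu)/dE along the field direction."""
    return (mu_plus - mu_minus) / (2.0 * E)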
Having demonstrated our technique with single atoms, we proceed to simulate periodic systems by arranging bonded molecules into a one-dimensional chain and a two-dimensional slab. These systems are widely known as challenging cases for conventional DFT methods, which would have a serious overestimation of their longitudinal susceptibility. This problem stems from the fact that surface charges are insensitive to the bulk charge within the system when non-local interactions are absent, and this can be solved using more accurate ab initio methods <cit.>. For the one-dimensional case, hydrogen chain (n H_2) and polyyne (n C_2) are studied, and the simulation size is pushed to 22 H_2 and 7 C_2 respectively for TDL convergence. Correlation-consistent effective core potential (ccECP) is employed for polyyne to accelerate neural network optimization and reduce fluctuation <cit.>. The final results are plotted in Fig. <ref>, which show that susceptibility calculated by DeepSolid agrees well with correlated wavefunction methods CCSD(T) and random phase approximation (RPA).
The local-density approximation (LDA) functional deviates severely from the ground truth for one-dimensional chains <cit.>, but the use of hybrid functionals such as B3LYP leads to partial recovery of non-local exchange effects and, consequently, a reduction in the overshoot.
HF is much better than DFT calculations, which further proves the importance of the non-local exchange effect for electric polarization calculation in this system.
As we arrange hydrogen chains periodically to form hydrogen slabs, the computational cost of high-level deterministic wavefunction methods, such as CCSD(T), grows rapidly and is soon beyond reach.
However, our approach has a lower scaling and we can obtain the first accurate polarization calculation for such a hydrogen slab (Fig. <ref>b).
And for the slab, the performances of DFT and HF compared with our accurate neural network results are similar to those observed for the chains.
To further test our method, we applied it to alkali metal hydrides and calculated their dielectric constants, allowing direct comparison with experimental results. These systems have a simple structure, consisting of alternating cations and anions, but they are of considerable research significance due to their relevance in hydrogen storage applications <cit.>.
The high-frequency dielectric constant ϵ_∞ can be extracted through optical experiments from the following relations,
𝐃=ϵ_∞𝐄=𝐄+4π𝐏 ,
ϵ_∞ = 1 + 4πχ = n_D^2 ,
where n_D denotes the corresponding refractive index. In the visible light regime, ions are almost frozen relative to the incident light frequency, so the electronic polarization dominates ϵ_∞. During our simulation, large fluctuations of Û_i are observed when the simulation cell is tiled in all three directions. To balance the influence of finite-size error against the Û_i fluctuations, we tile the conventional cell in the direction of the applied field 𝐄 and leave the transverse directions unchanged. Moreover, the Burkatzki-Filippi-Dolg (BFD) pseudopotential <cit.> is used to remove the inert core electrons <cit.>.
Our calculations have been pushed to the 4×1×1 supercell and the results are plotted in Fig. <ref>. LDA and Perdew–Burke-Ernzerhof (PBE) results are also plotted for comparison, while more accurate conventional wavefunction methods are not applicable due to computational costs.
As we can see, numerical simulations and experiments agree that ϵ_∞ decreases as the alkali metal atom becomes heavy, since ϵ_∞ is inversely proportional to the cell volume in Eq. (<ref>). However, LDA and PBE functionals <cit.> tend to overestimate ϵ_∞, and the error is largest in CsH.
In contrast, our DeepSolid results agree well with the experiment for all systems, which manifests the capability of neural network wavefunction to capture non-local exchange and correlation effects.
After demonstrating the accuracy of our methods in previous sections, we now proceed to apply our method to bilayer graphene (BLG), an extensively studied 2D material system known for its rich electronic properties. Despite its fundamental importance, the precise value of the dielectric constant of BLG remains elusive and has been an important subject of both experimental and theoretical works <cit.>.
Specifically, theoretical calculations reported were either restricted to DFT level <cit.> or based on values calculated with monolayer graphene <cit.>. Here we use DeepSolid to directly calculate the out-of-plane dielectric constant ϵ_∞^⊥ of bilayer graphene.
2×2 supercells containing monolayer and equilibrium AA-stacked bilayer graphene were used.
The calculated monolayer polarizability equals 5.648(2) Bohr^3 and bilayer polarizability equals 11.557(7) Bohr^3, which agrees with the linear dependence of polarizability on the number of layers as shown in Ref. <cit.>.
Based on this and following Ref. <cit.>, one can derive the expression of the out-of-plane dielectric constant as a function of the layer separation d (see Supplementary Material):
ϵ_∞^⊥(d)=(1-2πα^ BLG_ equil/S· d)^-1 ,
where S=5.25 Å^2 denotes the area of the primitive cell.
Using the computed polarizability α^ BLG_ equil we can re-establish the relation of ϵ_∞^⊥, which is plotted in Fig. <ref>.
There are two notable limits when varying the layer separation d: as d decreases, the two graphene layers coincide with each other and the system becomes metallic, which explains the divergence of ϵ_∞^⊥; as d becomes large, the BLG polarization becomes negligible and the vacuum contribution dominates ϵ_∞^⊥, which approaches unity.
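Plugging the quoted numbers into the formula above is straightforward; the short sketch below (ours) evaluates ϵ_∞^⊥(d) using α^BLG = 11.557 Bohr³ and S = 5.25 Å², with the Bohr-to-Ångström conversion made explicit. The chosen separations are illustrative.

import numpy as np

BOHR = 0.529177                      # Angstrom
alpha = 11.557 * BOHR ** 3           # bilayer polarizability, in Angstrom^3
S = 5.25                             # primitive-cell area, Angstrom^2

def eps_perp(d):
    """Out-of-plane dielectric constant for layer separation d (Angstrom)."""
    return 1.0 / (1.0 - 2.0 * np.pi * alpha / (S * d))

for d in (4.0, 6.0, 10.0, 20.0):
    print(d, round(eps_perp(d), 3))  # decreases towards 1 as d grows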
The separation-dependent dielectric constant will be valuable for further understanding and tuning the stacked multilayer graphene systems.
In conclusion, this work proposes an efficient and accurate method for investigating solid polarization based on the recently developed solid neural network wavefunction combined with quantum Monte Carlo. Our approach demonstrates superiority over the conventional state-of-the-art electronic structure methods. In the future, with the proposed framework, it is promising to investigate a wide range of phenomena, including ferroelectricity, topological electronic transport, the quantum Hall effect, and orbital magnetization, among others, at a higher level of accuracy and with electron correlations accounted for properly. Furthermore, this work opens up more possibilities for applying neural networks in condensed matter physics.
§ ACKNOWLEDGEMENTS
We want to thank ByteDance Research Group for inspiration and encouragement. This work is directed and supported by Hang Li and ByteDance Research. J.C. is supported by the National Natural Science Foundation of China under Grant No. 92165101.
|
http://arxiv.org/abs/2307.00519v1
|
20230702090253
|
End-to-End Out-of-distribution Detection with Self-supervised Sampling
|
[
"Sen Pei",
"Jiaxi Sun",
"Peng Qin",
"Qi Chen",
"Xinglong Wu",
"Xun Wang"
] |
cs.CV
|
[
"cs.CV"
] |
End-to-End Out-of-distribution Detection with Self-supervised Sampling
Sen Pei, Jiaxi Sun, Peng Qin, Qi Chen, Xinglong Wu, Xun Wang
July 2023
=======================================================================================================
Out-of-distribution (OOD) detection empowers the model trained on the closed set to identify unknown data in the open world. Though many prior techniques have yielded considerable improvements, two crucial obstacles still remain. Firstly, a unified perspective for viewing the many individually designed methods has yet to be presented, which is vital for providing insights into the related directions. Secondly, most research focuses on post-processing schemes for pre-trained features while disregarding the superiority of end-to-end training, dramatically limiting the upper bound of OOD detection. To tackle these issues, we propose a general probabilistic framework to interpret many existing methods and an OOD-data-free model, namely Self-supervised Sampling for OOD Detection (SSOD), to unfold the potential of end-to-end learning. SSOD efficiently exploits natural OOD signals from the in-distribution (ID) data based on the local property of convolution. With these supervisions, it jointly optimizes the OOD detection and conventional ID classification. Extensive experiments reveal that SSOD establishes competitive state-of-the-art performance on many large-scale benchmarks, where it outperforms the most recent approaches, such as KNN <cit.>, by a large margin, e.g., 48.99% → 35.52% on SUN at FPR95.
§ INTRODUCTION
Out-of-distribution (OOD) detection has been recognized as crucial for the deployment of machine learning systems in real-world settings, e.g., computer vision applications. Traditional neural networks excel at handling in-distribution (ID) data, which is similar to the training samples. However, real-world scenarios cannot always adhere to the independent and identically distributed assumption, i.e., the i.i.d. condition. That means the input data can vary significantly from the training images in terms of domains and categories. Thus, extensive efforts have been devoted to detecting whether input samples are OOD, ultimately bolstering classifier stability and reliability.
Current OOD detection methods primarily rely on the perspective of statistical difference, i.e., observing distinctions between the pre-trained features of ID/OOD samples. These methods tend to use heuristic tricks to rule out OOD data in a two-stage manner, i.e., pre-training and post-processing, which suffers from the following drawbacks compared to the end-to-end fashion. Firstly, the frozen model weights are obtained on the ID classification task with limited OOD supervision, and therefore, the extracted features inherently carry a bias which is not distinguishable enough for identifying OOD data (cf. Figure <ref>). Secondly, the two-stage design yields poor scalability and efficiency since it is not suitable for scenarios without pre-trained models, e.g., given a practical application and its corresponding dataset, a two-stage model incurs a training cost comparable to the end-to-end method while only obtaining biased features and fair-to-good detection results.
To tackle the issues above, this paper interprets the OOD detection task with a unified probabilistic framework (cf. Section <ref>), which covers many previous individual designs. To be concrete, our framework divides the multi-category classification problem into two tasks: conventional ID classification and OOD detection. According to the theoretical analysis, the deficiency encountered by traditional neural networks in identifying OOD data arises from the absence of a critical component, i.e., an OOD factor that estimates the likelihood of images belonging to the in-distribution. Furthermore, starting from this general foundation, we present Self-supervised Sampling for OOD Detection (SSOD), an end-to-end trainable framework w/o resorting to explicit OOD annotations (cf. Section <ref>). In contrast to approaches that rely on synthetic OOD features, SSOD directly samples real OOD supervision from the background of training images by itself, i.e., self-supervised, removing the constraints caused by the lack of labeled OOD data and the bias introduced in the OOD feature synthesis stage. Extensive experiments demonstrate that the joint end-to-end training manner significantly improves the OOD detection performance and guides the model to focus more on object-discriminative characters instead of meaningless background information (cf. Figure <ref>). The major contributions of this paper are summarized as follows.
* We establish a general probabilistic framework to interpret the OOD detection, where various OOD methods can be analyzed comprehensively, with main differences and key limitations clearly identified.
* To mitigate the negative impacts from pre-trained features, we design an end-to-end trainable model, namely Self-supervised Sampling for OOD Detection (SSOD), to sample real OOD signals from the ID images. SSOD can avoid the labor-intensive work of labeling/cleaning sufficient OOD images.
* SSOD is evaluated across various benchmarks and model architectures for OOD detection, where it outperforms current state-of-the-art approaches by a large margin, e.g., improving KNN <cit.> w/ and w/o contrastive learning with -17.71% and -21.97% FPR95 on Places <cit.>, and Energy <cit.> with +2.01% AUROC and -15.10% FPR95 on SUN <cit.>. The scalability and superiority of SSOD promise its potential to be a starting point for solving the OOD detection problem.
§ RELATED WORK
We give a brief overview of existing approaches to promoting the detection of OOD data.
Score-based posterior calibration. This line of research aims to find differences between the ID and OOD data, thus designing model-specific discriminative functions to identify the OOD samples. The related work includes ODIN <cit.>, LogitNorm <cit.>, GradNorm <cit.>, ReAct <cit.>, Energy <cit.>, and CIDER <cit.>, to name a few. Generally, these methods are pre- or post-processing schemes that require no retraining of the neural networks. Although these methods report considerable performance improvements and are sometimes training-efficient, they do not necessarily lead to significant generalization ability. For example, ReAct <cit.> investigates the distinct behaviors of ID and OOD data after the ReLU function, and therefore, it fails to perform on architectures adopting other activations, such as GELU, Sigmoid, and Tanh, etc. Similarly, ODIN <cit.> investigates post-processing schemes specially designed for Softmax, e.g., temperature scaling. These specific designs promote OOD detection but limit the model's scalability. In contrast, our SSOD doesn't suffer from this limitation as it addresses OOD detection directed by Bayes' theorem, which holds in general scenarios.
Auxiliary supervision from synthetic OOD data. The lack of OOD supervision is a critical factor leading to unsatisfactory performance in OOD detection. Thus, significant interest has been raised in generating synthetic OOD data. Existing approaches tackling this issue can be roughly divided into two manners, which are feature and image generation. The feature generation manner samples OOD features from the ID boundary, such as VOS <cit.>, or generates them using GAN, such as BAL <cit.>. In contrast, the image generation yields more expensive training tax since it directly generates the OOD images, such as Conf <cit.>, SBO <cit.>, MG-GAN <cit.>, NAS-OOD <cit.>, CODEs <cit.>, and VITA <cit.>, etc. In summary, existing methods either employ unrealistic OOD supervision as they only consider the approximated feature space or are costly due to the generation in the original image space. Unlike prior arts, our proposed SSOD avoids both limitations by utilizing the universal local property of neural networks, extracting realistic OOD supervision from the ID images without generation cost.
§ PROBABILISTIC FRAMEWORK FOR OOD DETECTION
In this section, we first introduce the unified probabilistic OOD framework and then revisit existing OOD detection methods from this view.
§.§ Probabilistic OOD detection
The problem of OOD detection can be defined in various ways. In this paper, we formalize the task as a binary classification problem. Concretely, we consider two disjoint distributions on the data and label space, denoted as 𝒮_𝕀𝔻×𝒴_𝕀𝔻 and 𝒮_𝕆𝕆𝔻×𝒴_𝕆𝕆𝔻, representing the ID distribution and OOD distribution respectively. We note that 𝒴_𝕀𝔻 and 𝒴_𝕆𝕆𝔻 have no overlap, i.e., 𝒴_𝕀𝔻∩𝒴_𝕆𝕆𝔻=∅. OOD detection aims to train a model which can effectively distinguish the source distribution of a given image x. Moreover, for x ∈𝒮_𝕀𝔻×𝒴_𝕀𝔻, it is also expected to correctly predict its corresponding category with a classifier denoted as f(·).
Supposing x is a given image sampled from the open image distribution 𝒮=𝒮_𝕀𝔻∪𝒮_𝕆𝕆𝔻,
and 𝒲_1,𝒲_2,...,𝒲_M are the sets of the M ID categories, we can obtain the following formula to compute P(x∈𝒲_i|x∈𝒮) based on the law of total probability,
P(x∈𝒲_i|x∈𝒮) = P(x∈𝒲_i|x∈𝒮_𝕀𝔻)P(x∈𝒮_𝕀𝔻|x∈𝒮) + P(x∈𝒲_i|x∈𝒮_𝕆𝕆𝔻)P(x∈𝒮_𝕆𝕆𝔻|x∈𝒮).
As P(x∈𝒲_i|x∈𝒮_𝕆𝕆𝔻)=0, Eqn.(<ref>) leads to a practical conclusion:
P(x∈𝒲_i|x∈𝒮) ≜ P(x∈𝒲_i|x∈𝒮_𝕀𝔻)P(x∈𝒮_𝕀𝔻|x∈𝒮),
where ≜ indicates the conditional equality. The conditional probability P(x∈𝒲_i|x∈𝒮_𝕀𝔻) in Eqn.(<ref>) is exactly the classification problem on ID data, termed the ID factor. P(x∈𝒮_𝕀𝔻|x∈𝒮), namely the OOD factor, corresponds to the OOD detection task. Generally, OOD detection techniques aim to optimize the OOD factor without affecting the ID classification performance.
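For intuition, the decomposition above can be realized at inference time by multiplying the ID classifier's Softmax output with a scalar OOD factor; the PyTorch-style sketch below is ours (the names and the Sigmoid-based OOD head are illustrative assumptions, not the framework itself):

    import torch
    import torch.nn.functional as F

    def open_set_probs(logits_cls, logit_ood):
        """Combine the ID factor P(W_i | x in S_ID) with the OOD factor P(x in S_ID | x in S).

        logits_cls : (B, M) multi-category logits from the ID classifier (illustrative).
        logit_ood  : (B,)   scalar logit from an ID/OOD head (illustrative).
        """
        p_id_classes = F.softmax(logits_cls, dim=-1)          # ID factor
        p_in_dist = torch.sigmoid(logit_ood).unsqueeze(-1)    # OOD factor
        return p_id_classes * p_in_dist                       # approximates P(x in W_i | x in S)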
§.§ Revisit OOD methods with the probabilistic view
We interpret several classic OOD detection techniques from the perspective of our proposed probabilistic framework and find that most OOD detection methods hold P(x∈𝒲_i|x∈𝒮_𝕀𝔻) = f_i(x), i.e., the i-th dimension of the classifier's output activated by the Softmax function. Thus, the crucial point is how to compute the OOD factor P(x ∈𝒮_𝕀𝔻|x∈𝒮).
Methods based on logits, e.g., Max-Softmax Probability (MSP) <cit.>, directly employ the Softmax output of classifiers as the ID/OOD score, aiming to distinguish them with classification confidence. Concretely, given image x, MSP uses the following expressions to depict the procedure of OOD detection:
x→ f(·)→
x ∈𝒮_𝕆𝕆𝔻, max f(x) < γ
x ∈𝒮_𝕀𝔻, max f(x) ≥γ.
Intuitively, MSP expects the classifier f(·) to assign higher confidence, i.e., max f(x), to ID samples and lower confidence to OOD samples. Obviously, for MSP, the OOD factor is built as follows:
P(x ∈𝒮_𝕀𝔻|x∈𝒮) = P(max f(x) ≥γ).
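A minimal sketch of the MSP score in PyTorch follows (our illustration; the model and threshold gamma are placeholders):

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def msp_score(model, x):
        """Max-Softmax Probability: higher scores are treated as more in-distribution."""
        return F.softmax(model(x), dim=-1).max(dim=-1).values

    # Decision rule from the equation above: flag x as OOD when max f(x) < gamma.
    # is_ood = msp_score(model, x) < gamma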
Methods based on features try to distinguish ID/OOD based on their deep features extracted by the backbone termed h(·), including ReAct <cit.>, BAL <cit.>, VOS <cit.>, and KNN <cit.>, etc. Taking ReAct as an example, it builds the OOD factor P(x∈𝒮_𝕀𝔻) in a hard-threshold manner with a linear projection, depicted as:
P(x ∈𝒮_𝕀𝔻|x∈𝒮)= P(𝐖^⊤ReAct(h(x), c) + 𝐛≥γ),
where 𝐖^⊤ and 𝐛 are the weight matrix and bias vector, ReAct(h(x), c)=min{h(x), c} is an element-wise truncation function, and γ is a hard threshold. Instead of OOD-syntheses-free schemes like ReAct and KNN, BAL and VOS generate ID/OOD supervision in the feature space to optimize the ID/OOD classifier with P(x ∈𝒮_𝕀𝔻|x ∈𝒮)=σ(d(h(x))), where d(·) is a discriminator.
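The clipping step of ReAct can be sketched as below; scoring the rectified logits with a logsumexp (energy-style) score is our assumption rather than a verbatim reimplementation:

    import torch

    @torch.no_grad()
    def react_id_score(features, W, b, c=1.0):
        """ReAct-style scoring sketch on penultimate activations h(x) of shape (B, D).

        The truncation ReAct(h(x), c) = min{h(x), c} follows the equation above;
        the logsumexp scoring choice is an assumption of this sketch.
        """
        clipped = torch.clamp(features, max=c)    # element-wise min{h(x), c}
        logits = clipped @ W.t() + b              # W^T ReAct(h(x), c) + b
        return torch.logsumexp(logits, dim=-1)    # threshold against gamma to decide ID/OOD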
In summary, most OOD methods approximate the OOD factor P(x∈𝒮_𝕀𝔻|x∈𝒮) by P(f(x)∈ f(𝒮_𝕀𝔻)|f(x)∈ f(𝒮)) as in MSP, or P(h(x)∈ h(𝒮_𝕀𝔻)|h(x)∈ h(𝒮)) as in ReAct, BAL, VOS, and KNN. We note here that f(x) is the Softmax output, and h(x) is the feature extracted by the backbone. Nevertheless, there is a significant bias introduced in f(𝒮) and h(𝒮) as f(·) and h(·) are trained for ID classification. It adversely affects the discrimination of ID and OOD data (cf. Figure <ref>, left). Furthermore, OOD signals generated in the feature space (e.g., BAL and VOS) do not necessarily correspond to a natural OOD image. Consequently, their effectiveness in practical open-world scenarios may be limited. To remove these obstacles, we propose to sample OOD supervision from the ID images and optimize the OOD factor directly.
§ SELF-SUPERVISED SAMPLING FOR OOD DETECTION (SSOD)
In this section, we propose a Self-supervised Sampling solution to tackle the OOD detection problem, termed SSOD, which can estimate the OOD factor directly without resorting to explicit OOD samples. The design of SSOD is inspired by the local property, i.e., locality[In this paper, ‘locality' represents the local property.], of convolution networks, as discussed below.
§.§ Inspiration of SSOD
Prior studies, e.g., <cit.> and <cit.>, have demonstrated that traditional neural networks are capable of retaining spatial information. Specifically, a position of the feature map reflects the corresponding position in the input image. Thus, we may expect the potential to extract the background information from the feature maps, which can be regarded as natural OOD samples. However, a question arises: How to design an OOD block sampler to select the positions representing background information from the feature maps?
In Figure <ref> (a), the ResNet-50 trained on ImageNet downsamples the input image and yields a corresponding feature map (ℝ^H× W× C), where each image block is projected to a feature vector (ℝ^C) located at the corresponding position. The classification head reports the category for each feature vector. We highlight the correctly classified blocks with over 40% confidence in Figure <ref> (a) and (b). The results suggest that for blocks contained in the main objects, the confidence scores are much higher, while for the backgrounds far away from the main objects, the confidence scores are extremely low, i.e., lower than 40%. The confidence distribution on the feature map stems from the limited receptive fields of the cascaded convolution layers, i.e., each position of the feature map is only accessible to a local region of the input image. Inspired by this observation, we propose Self-supervised Sampling for OOD Detection (SSOD) based on the confidence score of each block.
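The per-block confidences used above can be obtained by applying the classification head at every spatial position, which is equivalent to a 1×1 convolution with the fully connected weights; the helper below is our sketch, not the authors' code:

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def block_confidence_map(feature_map, fc_weight, fc_bias):
        """Apply the classification head at every spatial position of a (B, C, H, W)
        feature map, yielding a (B, M, H, W) per-block confidence map.

        fc_weight : (M, C) weights of the fully connected classifier.
        """
        logits = F.conv2d(feature_map, fc_weight[:, :, None, None], fc_bias)
        return F.softmax(logits, dim=1)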
§.§ Formulation of SSOD
For a feature map within ℝ^C× H× W (i.e., the channel, height, and width) produced by the neural network, we can apply the classifier along the spatial axes and obtain a confidence score map within ℝ^M× H× W, where M is the number of categories in the ID data. The blocks with a low confidence score, e.g., lower than 5%, on the ground-truth category are recognized as OOD samples, as highlighted in Figure <ref> (c) and (d). Symmetrically, an ID block sampler selects some blocks with high confidence scores, e.g., greater than 95%, as the ID samples (cf. Figure <ref>, the green blocks) besides the global average pooling feature, which helps to balance the positive (ID) and negative (OOD) samples.
Formally, we use h(·), f_cls(·), and f_ood(·) to denote the backbone without the classification head, the multi-category classification head, and the binary ID/OOD discrimination head, respectively. Given an input image x with label y, X^C× H× W=h(x) is the feature map. The prediction result of the classification model is:
ŷ = f_cls(GAP(h(x)))=f_cls(GAP(X^C× H× W)),
where GAP is the global average pooling operation on the spatial dimensions. Similarly, when applying f_cls on each block of X^C× H× W without the pooling operation, we can get the confidence score map y^MHW = f_cls(X^C× H× W) within ℝ^M× H× W. Moreover, we pick the confidence along the target axis, i.e., if the target label of x is j, then we collect the confidence along the j-th axis of M, yielding a target confidence map within ℝ^H× W, i.e., y^HW. We use the ID/OOD sampler to select blocks with high/low scores on the target label as the ID/OOD supervision. Concretely, for i∈{1,2,3,...,HW}, we obtain the following self-supervised OOD labels from the cls head:
y^ood_i=
0, y^HW_i < 1 - γ
1, y^HW_i ≥γ
N/A, 1 - γ≤y^HW_i < γ
,
where y^HW indicates the predicted confidence of each image block belonging to the target category, and γ is a confidence threshold, e.g., 95%. Remind that the image blocks assigned with the positive label are highlighted as green in Figure <ref>, and the negative blocks are marked using red. We drop the remaining image blocks (i.e., N/A in Eqn.(<ref>)), and therefore, they provide no ID/OOD signals during the training. With the OOD Head, we obtain the ID/OOD prediction (ŷ^ood):
ŷ^ood= f_ood(X^C× H× W).
Since only a part of the image blocks is selected as ID/OOD supervision in Eqn.(<ref>), consequently, the loss is performed on the corresponding predicted results in Eqn.(<ref>) and Eqn.(<ref>). The overall objective of SSOD is formulated with the cross entropy loss (CE):
ℒ=CE(ŷ, y)+αCE(ŷ^ood, y^ood),
where α is a balance parameter. During the training/inference phase, the OOD factor of input images can be calculated as follows:
P(x∈𝒮_𝕀𝔻)=Sigmoid(f_ood(GAP(X^C× H× W))),
where the Sigmoid function is used to predict the probability of the input image belonging to the ID data. With the proposed SSOD above, we can train the OOD detection branch end-to-end with realistic OOD supervision sampled from the blocks of ID data, as illustrated in Figure <ref>.
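Under the notation above, the full SSOD objective can be sketched as follows; this is a simplified PyTorch illustration written by us (tensor layouts, the 1×1-conv heads, and the handling of unlabeled blocks are assumptions), not the released implementation:

    import torch
    import torch.nn.functional as F

    def ssod_losses(feat, target, cls_head, ood_head, gamma=0.95, alpha=1.5):
        """Sketch of the SSOD objective under the notation above.

        feat     : (B, C, H, W) backbone feature map X = h(x).
        target   : (B,) ID labels y.
        cls_head : 1x1-conv version of the classifier f_cls (C -> M channels).
        ood_head : 1x1-conv ID/OOD head f_ood (C -> 1 channel).
        """
        # Conventional ID classification on the globally pooled feature (GAP).
        pooled = feat.mean(dim=(2, 3))
        loss_cls = F.cross_entropy(cls_head(pooled[..., None, None]).flatten(1), target)

        # Per-block confidence on the target class -> self-supervised ID/OOD labels.
        conf = F.softmax(cls_head(feat), dim=1)                  # (B, M, H, W)
        y_hw = conf[torch.arange(feat.size(0)), target]          # (B, H, W)
        ood_label = torch.full_like(y_hw, -1.0)                  # -1 marks dropped (N/A) blocks
        ood_label[y_hw >= gamma] = 1.0                           # confident object blocks -> ID
        ood_label[y_hw < 1.0 - gamma] = 0.0                      # background blocks -> OOD

        ood_logit = ood_head(feat).squeeze(1)                    # (B, H, W)
        mask = ood_label >= 0.0                                  # assumed non-empty here
        loss_ood = F.binary_cross_entropy_with_logits(ood_logit[mask], ood_label[mask])

        return loss_cls + alpha * loss_ood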
§ EXPERIMENTS
We address the following problems in this section: 1) How does SSOD perform on OOD detection benchmarks? 2) Is SSOD stable under different hyper-parameter settings? 3) Does SSOD generalize across different model architectures?
§.§ Experimental setup
We give a brief introduction of our employed datasets and the training parameters. The detailed information is attached in Appendix.
Datasets. We perform experiments on ImageNet <cit.> and CIFAR-10 <cit.>. For ImageNet, we follow the settings from <cit.> and employ iNaturalist <cit.>, SUN <cit.>, Places <cit.>, and Textures <cit.> as the OOD images. For CIFAR-10, we select SVHN <cit.>, LSUN <cit.>, iSUN <cit.>, Places, and Textures as the OOD images. Images in CIFAR-10 and ImageNet are resized and cropped to 224× 224.
Training and evaluation. We use ResNet-50 <cit.> as our backbone and train for a total of 300 epochs. No complicated data augmentation schemes are used. The learning rate starts from 1e-4 and halves every 30 epochs. We optimize all parameters using the default gradient descent method. α is set to 1.5 by default. We report the false positive rate on the OOD dataset when the true positive rate of ID images is 95%, i.e., FPR95. We also provide the area under the receiver operating characteristic curve (AUROC) and the classification accuracy on ID images (ID ACC) for comparison. We keep the quantity of ID/OOD data consistent.
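For reference, the two detection metrics can be computed from per-sample scores as in the sketch below (our code; it assumes the convention that higher scores mean more in-distribution):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def fpr_at_95_tpr(id_scores, ood_scores):
        """FPR on OOD data at the threshold that keeps 95% of ID samples (FPR95)."""
        thresh = np.percentile(id_scores, 5)            # 95% of ID scores lie above this value
        return float(np.mean(ood_scores >= thresh))

    def auroc(id_scores, ood_scores):
        """AUROC with ID treated as the positive class."""
        labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
        scores = np.concatenate([id_scores, ood_scores])
        return roc_auc_score(labels, scores)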
Selection of comparable techniques. We choose both classic and recent methods for OOD detection as comparisons. With regard to the classic schemes, we select MSP <cit.>, MaDist <cit.>, ODIN <cit.>, GODIN <cit.>, CSI <cit.>, and MOS <cit.>. Besides, we also use Energy <cit.>, which is representative of score-based calibration methods, and KNN <cit.>, one of the latest schemes, as comparable methods in the experiments. ResNet-18 and ResNet-50 <cit.> are chosen as the backbones for CIFAR-10 and ImageNet, respectively. SSOD employs the pre-trained ResNet-50 with a classification accuracy of 76.13% released by the official PyTorch community. We note that KNN <cit.> has two different versions, i.e., w/ and w/o contrastive learning. For a fair comparison with other methods, we employ no contrastive learning tricks for any of the compared techniques.
§.§ Comparisons with state-of-the-arts
We report the main results and answer the first question proposed at the beginning of Section <ref>, i.e., the performance of SSOD on OOD benchmarks. After that, we analyze the failure cases of SSOD.
OOD detection results on ImageNet. We use natural images not included in ImageNet <cit.> as the OOD set, such as iNaturalist <cit.>, SUN <cit.>, and Places <cit.>. We randomly select about 10k OOD images for each dataset following <cit.>. All methods use no contrastive loss during the training. The detection results are shown in Table <ref>. A part of the results come from <cit.>.
From the results depicted in Table <ref>, we can notice that SSOD reduces the false positive rate (FPR95) by over 13.21% and improves the AUROC by about 1.63% on average. Specifically, on iNaturalist <cit.>, which consists of natural landscape images, SSOD significantly reduces the FPR95 by over 15.96%, establishing competitive state-of-the-art performance. Considering Places <cit.>, which contains pictures of scenes such as creeks, fields, and urban areas, the objects included in these images are highly similar to those in ImageNet, and we argue that this phenomenon contributes to the smaller improvements on this dataset. Consistently, the improvements in AUROC are also considerable on iNaturalist <cit.> and SUN <cit.>, evidencing the efficiency of our proposed SSOD.
OOD detection results on CIFAR-10. Since images appearing in CIFAR-10 <cit.> are smaller than those in ImageNet <cit.>, i.e., 32× 32, we use ResNet-18 <cit.> as the backbone for all comparable methods. Just as before, we use no contrastive loss during the training. Since SSOD extracts background information from the last feature maps and images in CIFAR-10 are too small, we resize the ID and OOD data to 224× 224 with RGB channels, yielding bigger feature maps. The experimental results are depicted in Table <ref>. The last column of Table <ref> demonstrates the comparison between SSOD and the previous methods. We can notice that on OOD images such as SVHN <cit.>, LSUN <cit.>, and iSUN <cit.>, SSOD reports comparable performance to the best previous schemes with a marginal drop, i.e., less than 4.06%. On large-scale datasets such as Textures <cit.> and Places <cit.>, SSOD yields significant performance improvements compared to the current state-of-the-art techniques, specifically, reporting about 20.68% performance gain (FPR95) on Places <cit.>. Besides, with regard to the overall OOD detection ability, we can see that SSOD reduces the false positive rate by over 3.84% on the aforementioned five datasets on average and improves the AUROC by about 0.78%, establishing it as a new competitive OOD detection scheme.
Analysis of the failure cases. We describe the failure cases encountered by SSOD and point out its limitations. Recall that SSOD extracts OOD information from the background of training images and employs it as a proxy for OOD characters, which can limit the diversity of the OOD supervision if the training images themselves are not diverse. This limitation becomes apparent when the OOD data comes from a completely different domain than the training images. For example, the training images are natural scenes, while the testing OOD data is synthetic color blocks or textures. To check this issue, we train SSOD on ImageNet <cit.> while testing it on Textures <cit.>. From the results depicted in Table <ref>, though SSOD achieves top-ranked performance, it is worse than KNN <cit.>, increasing the FPR95 by about 38.46%. This issue is caused by the overlap between ImageNet and Textures. Concretely, many images in Textures carry vital symbols of objects included in ImageNet (cf. Figure <ref>). These overlaps lead to the inefficiency of SSOD in the comparison in Table <ref>.
§.§ Ablation Study
We tackle the last two problems posed at the beginning of this section, i.e., the stability and scalability of our proposed SSOD.
Ablations on the hyper-parameter α. Recall that α controls the importance of the loss generated by the OOD Head, balancing the classifier's classification performance and OOD detection ability. We employ CIFAR-10 <cit.> and Places <cit.> as the ID and OOD data to validate the stability of α. SSOD uses ResNet-18 as the backbone. From the ablations depicted in Table <ref>, we notice that with increasing α, the classifier detects OOD input better, while the ID ACC gradually decreases. We expect to boost the robustness of classifiers while not affecting the model's performance. Therefore, α is set to 1.5 in our experiments.
OOD detection across different model architectures. We use ImageNet and Places as the ID and OOD data. Considering the deployment on portable devices, we test both the conventional and lite models, such as ResNet-50 <cit.>, DenseNet-121 <cit.>, RegNet (Y-800MF) <cit.>, and MobileNet <cit.>. Compared to Table <ref>, all methods shown in Table <ref> achieve state-of-the-art performance, evidencing the scalability of SSOD.
Imbalance issue between ID/OOD features. During the training of the OOD head, we obtain many more background features than object features since the objects only occupy a small part of the image. To promote training stability, we design three ways to tackle this issue, which are Loss Weighting (LW), Data Resampling (DR), and Loss-Wise Balance (LWB). LW multiplies a balance factor on the loss generated by the background features, DR samples equivalent numbers of ID/OOD features within each image, and LWB calculates the cross entropy generated by the ID/OOD features separately and takes their mean value as the loss objective. CIFAR-10 <cit.> and Places <cit.> are the ID/OOD data. Based on the ablations depicted in Table <ref>, SSOD employs LWB for data balancing.
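The Loss-Wise Balance option can be sketched as follows (our illustration of the idea described above, not the reference implementation):

    import torch
    import torch.nn.functional as F

    def loss_wise_balance(ood_logits, ood_labels):
        """Loss-Wise Balance (LWB): compute the binary cross entropy separately over
        ID-labelled and OOD-labelled blocks and average the two terms, so the abundant
        background blocks do not dominate the objective."""
        id_mask, bg_mask = ood_labels == 1.0, ood_labels == 0.0
        loss_id = F.binary_cross_entropy_with_logits(ood_logits[id_mask], ood_labels[id_mask])
        loss_bg = F.binary_cross_entropy_with_logits(ood_logits[bg_mask], ood_labels[bg_mask])
        return 0.5 * (loss_id + loss_bg)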
§ CONCLUSION AND DISCUSSION
This paper proposes a probabilistic framework that divides the OOD detection problem into two factors, i.e., the ID and OOD factors. This provides a comprehensive overview of existing OOD methods and highlights the critical constraint of relying on pre-trained features. To address this limitation, we introduce an end-to-end scheme called SSOD which trains the OOD detection objective jointly with the ID classification. This approach leverages OOD supervision from the background information of ID images, eliminating the need for additional costs. Extensive experiments have validated that SSOD achieves state-of-the-art performance in detecting OOD data.
To the best of our knowledge, SSOD is the first method that generates natural OOD supervision to unlock the potential of the end-to-end paradigm. However, SSOD is based on the local property of convolutions, which does not apply to transformers built with cascaded attention and FFN layers, e.g., ViT. Thus, discovering a more general self-supervised OOD sampler for different network architectures is a valuable and necessary direction.
§ DATASET
We introduce the details of datasets in this section.
ImageNet. ImageNet <cit.> is well known in image classification problems, containing 1000 classes from the natural scene such as tiger, goldfish, house, to name a few. This dataset is usually used as the in-distribution data, expecting to get higher confidence from the classifier.
CIFAR-10. CIFAR-10 <cit.> is smaller compared to the ImageNet. It consists of tiny images within 10 classes, such as airplane, bird, dog, etc. All images contained in CIFAR-10 are in the shape of 32× 32. In our experiments, we treat CIFAR-10 as the in-distribution data and resize images into 224× 224 to get bigger feature maps. Noting that this operation introduces no significant influence on the classification performance.
SVHN. SVHN <cit.> (Street View House Numbers) consists of digit images taken from natural street views. Following <cit.>, we randomly select 10k images from this dataset to serve as the out-of-distribution data. All pictures from the OOD data are expected to get lower confidence from the classifier.
LSUN. This dataset is used for visual recognition, which was presented by <cit.>. It consists of over one million labeled images, including 10 scene and 20 object categories. Following <cit.>, 10k images from LSUN are treated as the OOD data.
iNaturalist. Existing image classification datasets usually have a uniform distribution across different objects and categories. However, in the real world, the images can be heavily imbalanced. To bridge this gap between experimental and practical settings, iNaturalist <cit.>, consisting of over 859k images within about 5k species (i.e., plants and animals), was presented. We randomly select 10k images from this dataset to serve as the OOD pictures.
iSUN. This dataset is constructed based on the SUN <cit.>. iSUN <cit.> is a standard dataset for scene understanding, containing over 20k images from SUN database. We use 10k iSUN images as the OOD data.
Textures. This dataset consists of images carrying vital characters, i.e., patterns and textures, of natural objects. Presented in <cit.>, Textures aims at supporting the analytical dimension in image understanding. In our experiments, we treat Textures as the OOD data.
Places. This dataset is used for scene recognition and was presented by <cit.>, including over 10 million images of scenes such as badlands, bamboo forest, canal, etc. We use a part, i.e., 10k, of these images from <cit.> to play the role of OOD data.
§ TRAINING DETAILS
All images used in our experiments are resized to 224× 224. We use only simple data transformations and the default gradient-descent optimizer. The learning rate starts from 1e-4 and halves every 30 epochs. The experiment runs on 8 Nvidia Tesla V100[https://www.nvidia.com/en-us/data-center/v100/] GPUs using the distributed training pipeline of PyTorch[https://pytorch.org/]. The batch size is set to 256, i.e., 32×8. No complicated tricks are used during the training and inference phases. We store the checkpoints yielding the best FPR95 performance. For more details about the training and architectures, please refer to our attached source codes in the Supplementary Material.
|
http://arxiv.org/abs/2307.02674v1
|
20230705220531
|
Extending a Physics-Informed Machine Learning Network for Superresolution Studies of Rayleigh-Bénard Convection
|
[
"Diane M. Salim",
"Blakesley Burkhart",
"David Sondak"
] |
physics.flu-dyn
|
[
"physics.flu-dyn",
"astro-ph.GA"
] |
Superresolution Studies of Rayleigh-Bénard Convection
1Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Rd, Piscataway, NJ 08854, USA
2Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
3Dassault Systemes Simulia Corp.
Salim, Burkhart and Sondak
A prominent bottleneck for advancing our understanding of astrophysical turbulence is the limited resolution of numerical simulations, which inhibits fully sampling scales in the inertial range. Machine learning (ML) techniques have demonstrated promise in up-scaling resolution in both image analysis and numerical simulations (i.e., superresolution). Here we employ and further develop a physics-constrained convolutional neural network (CNN) ML model called “MeshFreeFlowNet” for superresolution studies of turbulent systems. The MeshFreeFlowNet CNN is trained both on the simulation images as well as the evaluated PDEs, making it sensitive to the underlying physics of a particular fluid system. In particular, we aim to generate a superresolution framework for 2D turbulent Rayleigh-Bénard convection (RBC) generated with the Dedalus code. We modify the MeshFreeFlowNet architecture to include the full set of simulation PDEs and the boundary conditions. Our training set includes fully developed turbulence sampling Rayleigh numbers (Ra) of Ra=10^6-10^10. We evaluate the success of the learned simulations by comparing the direct Dedalus simulation power spectra to the predicted CNN output power spectra. We compare both ground truth and predicted power spectral inertial range scalings to theoretical predictions. We find that the network performs well at all Ra studied here in recovering large-scale information, including the inertial range slopes. We find that our updated architecture performs well on laminar and turbulent flows, but the results at smaller scales improve significantly as the flow transitions to more turbulent regimes. This is likely because more turbulent systems have a rich variety of structures at many length scales compared to laminar flows. We also find that the superresolution prediction is overly dissipative at scales smaller than the inertial range.
§ INTRODUCTION
Many astrophysical and geophysical fluid flows are governed by the phenomenon of thermal convection <cit.> in which fluid motion is driven by a source of heat. Systems of scientific and engineering interest typically have parameter regimes that lead to turbulent flow fields. Turbulence is a ubiquitous nonlinear, multiscale phenomenon and is often considered one of the last remaining open problems in classical physics. The equations governing fluid flow have been studied for well over a century, but have remained analytically intractable in all but the simplest cases, and a simple but general phenomenology of turbulence has proved elusive. With the rapid rise in computational resources, numerical simulations have proven to be an indispensable and crucial tool for complementing theory and experiments in studying turbulent flows and making accurate predictions of quantities of interest in systems involving turbulence <cit.>. Moreover, the data generated by numerical simulations of turbulence has led to creative and powerful data analysis techniques and given rise to some of the early data-driven modeling efforts <cit.>. Nevertheless, simulations of turbulence face their own challenges, most notably that achieving the parameter regimes found in nature requires computational resources that are not available now or for the foreseeable future <cit.>. In light of these challenges, researchers have developed efficient models to study turbulent flows at reduced computational cost while retaining acceptable accuracy.
The spatial and temporal resolution of numerical simulations is limited, which restricts the scales that can be represented in a simulation. Turbulence is characterized by a high Reynolds number (Re), which represents the ratio of inertial to viscous forces. In a direct numerical simulation (DNS), all the scales of the turbulent flow are fully resolved and the equations of motion are solved to machine precision. The numerical resolution of DNS scales roughly with Re^3 <cit.> or worse <cit.>. It is not uncommon for astrophysical and geophysical flows to have Re > 10^9, which results in a cost-prohibitive DNS. To overcome this issue, researchers have developed many creative models including reduced order models <cit.> and turbulence models <cit.>. In recent years, the emergence of ML has given rise to a resurgence in the development of data-driven models to augment and enhance the current modeling paradigm.
Simulations of turbulence have led to the development of numerous algorithms and sophisticated statistical diagnostic techniques <cit.> for working with large datasets over the past half-century. Over the past two decades, new computing paradigms and novel ML algorithms have been applied to turbulence datasets in an attempt to shed light on turbulence <cit.>, develop new turbulence models <cit.>, create novel reduced order models of turbulence and chaotic systems <cit.>, augment and enhance traditional numerical methods <cit.>, and generate statistically realistic turbulence datasets and boundary and initial conditions without running large simulations <cit.>. There has recently been substantial interest in the development and application of superresolution algorithms to turbulence datasets <cit.>. Indeed, because turbulence datasets can be large and stress storage resources, it may be beneficial to save very low-resolution data and reconstruct DNS-level detail when the dataset is ready to be used. Another intriguing application could be the reconstruction of DNS results from large eddy simulations. This would circumvent the need to perform computationally intensive DNS runs while potentially delivering DNS quality results.
Progress at the intersection of turbulence and superresolution is rapidly evolving. Researchers have developed supervised models with features such as downsampled skip-connection/multiscale models <cit.> and PDE-solving capacities through MLP branches and subsequent PDE losses, <cit.>, as well as unsupervised <cit.> neural network architectures for superresolution of turbulent flows. In the present work, we build off of MeshFreeFlowNet, first presented in <cit.> (hereafter J20), and enhance that model by injecting additional physical constraints including boundary conditions and the divergence-free constraint on the velocity field. Additionally, we extend the Rayleigh-Bénard dataset used in that work, a system whose degree of turbulence is characterised by the Rayleigh number (Ra), which is the ratio of inertial driven buoyancy forces to diffusive forces. In this study we run simulations to higher Ra than that presented in J20 and ensure that the training samples for each Ra are from the statistically steady state and sampled over several eddy turnover times.
The governing equations and a review of the MeshFreeFlowNet architecture are presented in Section <ref>. The main results are presented in Section <ref> followed by a discussion in Section <ref>. Conclusions are drawn and potential future avenues are discussed in Section <ref>.
§ METHODOLOGY
§.§ Simulations of Rayleigh-Bénard Convection
A physics-informed CNN is explored for super-resolution of data generated from simulations of turbulent RBC. This physical system was selected because it is representative of many astrophysical and geophysical fluid scenarios. RBC is concerned with the buoyancy-driven flow of a fluid heated from below and cooled from above. The system is set up as a fluid under the influence of gravity and confined between two parallel plates of differing temperatures which are separated by a distance H (see Figure <ref>). The Boussinesq approximation is used, which assumes that density varies linearly with temperature in the buoyancy force and that density variations are otherwise negligible. The nondimensional Boussinesq equations are solved for the nondimensional velocity 𝐮 = (u, v) and the nondimensional temperature T in a two-dimensional domain with dimensionless 𝐱=(x, z) where x∈[0, 1] and z∈[-1/2, 1/2]. The governing partial differential equations (PDEs) are
ℛ_M = ∂𝐮/∂ t + 𝐮·∇𝐮 + ∇ P - ν_*∇^2𝐮 - T𝐳̂ = 0
ℛ_C = ∇·𝐮 = 0
ℛ_E = ∂ T/∂ t + 𝐮·∇ T - κ_*∇^2T = 0
where
ℛ_M, ℛ_C, and ℛ_E are the residuals for the momentum, continuity, and energy equations, respectively, (u, v, P, T) is the solution vector, and 𝐳̂ is the unit vector in the wall-normal direction. Note that the residuals are identically zero for solutions satisfying the governing equations.
Equations (<ref>)- (<ref>) were obtained by nondimensionalizing time by the free-fall time τ = √(H/g^'), velocity by the free-fall speed u_ff=H/τ, pressure by the dynamic pressure P_dyn=ρ_0u_ff^2, and temperature by half the temperature difference between the top and bottom plates T_Δ = Δ T / 2. The free-fall time involves the reduced gravity g^' = gα_VΔ T /2 where g is the acceleration due to gravity and α_V is the coefficient of volumetric expansion. This nondimensionalization results in the two primary dimensionless parameters in equations (<ref>)- (<ref>),
ν_* = ν/√(g^'H^3)
κ_* = κ/√(g^'H^3).
These dimensionless diffusivities are related to the classical dimensionless parameters via
ν_* = √(16 Pr/Ra)
κ_* = √(16/(Ra Pr))
where
Ra = gα_VΔ T H^3/(νκ)
is the Rayleigh number, which represents the ratio of buoyancy driven inertial forces to viscous forces and
Pr = ν/κ
is the Prandtl number, which is a fluid property representing the ratio of kinematic to thermal diffusivity. As a point of reference, Ra is roughly related to the Reynolds number Re via Re∼ Ra^1/2 for convective flows. Turbulent thermal convection involves extraordinarily large values of Ra and a fundamental scientific question is concerned with the amount of heat transported in such turbulent systems. The Nusselt number (Nu) is the primary diagnostic dimensionless parameter in RBC. It represents the amount of heat transported from the bottom plate to the top plate. Given the non-dimensional formulation presented above, the unsteady Nu is defined as
Nu(t) = -⟨ dT/dz ⟩|_z_wall
where ⟨·⟩ represents a spatial average over the plane parallel to the walls. In statistically steady state, after integrating over the width of the fluid layer, the Nu can be written
Nu = 1 + ⟨ vT⟩/κ_*.
From (<ref>), it is clear that in the purely conductive state (i.e. when v=0), Nu=1. Equations (<ref>)- (<ref>) use periodic boundary conditions in the x direction for both velocity and temperature. No-slip velocity boundary conditions are used at the top and bottom walls. The temperature at the top wall is T(z=1/2)=-1/2 and the temperature at the bottom wall is T(z=-1/2)=1/2. The system of equations was solved numerically and the resulting fields were used as the training data for the ML algorithm. The unsteady Nu was monitored to determine when the simulation reached a statistically steady state.
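As a small numerical sketch of the definitions above (ours, not the simulation code), the dimensionless diffusivities follow directly from Ra and Pr, and the steady-state Nu can be estimated from snapshot arrays; treating the average as a volume average over a snapshot is our simplifying assumption:

    import numpy as np

    def dimensionless_diffusivities(Ra, Pr=1.0):
        """nu_* and kappa_* from Ra and Pr, following the relations above."""
        nu_star = np.sqrt(16.0 * Pr / Ra)
        kappa_star = np.sqrt(16.0 / (Ra * Pr))
        return nu_star, kappa_star

    def nusselt(v, T, kappa_star):
        """Steady-state Nusselt number Nu = 1 + <v T>/kappa_* from snapshot arrays,
        with <.> approximated here as a volume average (our assumption)."""
        return 1.0 + np.mean(v * T) / kappa_star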
We generate the training data set for the ML pipeline using the spectral method code Dedalus <cit.>. Dedalus solves general sets of partial differential equations using a spectral method approach and, for the purpose of this paper, we solve the RBC Equations (<ref>)- (<ref>).
The low Ra simulations use a spatial resolution of N_x=128 and N_z=512, while the higher Ra simulations use N_x=256 and N_z=1024.
Our simulation setup is similar to J20, but we make several modifications. First, J20 had only considered an oscillatory RBC system in transition before a statistically steady state was reached. In our work we seek to understand the performance of the super-resolution network in the turbulent steady state. Second, J20 considered Ra in the range of Ra=10^4-10^8, which is non-turbulent to mildly turbulent. A primary aim of our study is to understand the performance of MeshFreeFlowNet at large Ra (up to 10^10), which is in the highly turbulent regime. Therefore our study includes simulations that span Ra=10^6 (non-turbulent) to Ra=10^10 (turbulent), each run until a statistically steady state is reached. Pr=1 is used in all simulations.
In Figure <ref> we show the Nu evolution of our simulations (coloured lines). We overplot the J20 training set as a dashed black line for reference. The J20 simulation had only begun to reach a statistically steady state. In developing our training data, we use the Nu evolution to estimate the time after which a statistically steady state has been reached. This typically occurs after 250 time steps. From this subset we randomly sample low-resolution “slabs” of 128×128 pixels in the spatial dimensions and 8 time steps in the time dimension to use as training examples. In total, 3000 slabs are extracted for training and 2 for validation for each simulation run.
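A minimal sketch of this slab extraction follows (ours; the array layout, field ordering, and the strided coarsening are assumptions rather than the actual preprocessing script):

    import numpy as np

    def sample_slab(fields, slab_hw=128, slab_t=8, coarsen=4, rng=np.random):
        """Draw one (low-resolution, high-resolution) training slab pair.

        fields : array of shape (T, C, Nz, Nx) holding (u, v, P, T) snapshots from the
                 statistically steady state; the layout and coarsening are assumptions.
        """
        T, _, Nz, Nx = fields.shape
        t0 = rng.randint(0, T - slab_t + 1)
        z0 = rng.randint(0, Nz - slab_hw + 1)
        x0 = rng.randint(0, Nx - slab_hw + 1)
        hires = fields[t0:t0 + slab_t, :, z0:z0 + slab_hw, x0:x0 + slab_hw]
        lores = hires[:, :, ::coarsen, ::coarsen]       # simple strided coarsening
        return lores, hires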
§.§ Architectural backbone: MeshFreeFlowNet
The starting point for the present work is the “MeshFreeFlowNet” neural network architecture pipeline <cit.>. The input to train MeshFreeFlowNet is the coarsened 2D temporal simulation slabs described above. MeshFreeFlowNet also uses the underlying partial differential equations and outputs the high-resolution counterpart of the coarse inputs. We now review the different steps that the architecture follows in generating the output.
The spatial and temporal data of the four simulated fields (temperature, pressure, and two components of velocity) are passed through a 3-dimensional (3D) CNN, with each field being a channel, to extract image features. J20 and subsequently this work employ the U-Net architecture first presented in <cit.>. J20's implementation of U-Net differs from the original by utilising residue blocks instead of individual convolutional layers. The U-Net is comprised of a contractive part and an expansive part, both of which contain a residue block of 3 convolutional layers (1×1, 3×3, 1×1), a batch normalisation layer and a ReLU activation layer. However, following this block in the contractive part is a maxpooling layer of stride 2, whereas in the expansive part it is followed by a layer performing nearest neighbour upsampling. In Figure <ref>, a series of downscaling convolutions representing contractive parts are denoted by pink arrows and series of upscaling convolutions representing the expansive parts by orange arrows. The resultant tensor is called the Latent Context Grid (LCG), visualised as the pink-orange array in Figure <ref>, and is of the same dimension as the initial low-resolution input.
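For concreteness, a residue block of the kind described above might look as follows in PyTorch; this is our sketch (the placement of normalisation and the additive skip path are assumptions), not the MeshFreeFlowNet source:

    import torch.nn as nn

    class ResidueBlock(nn.Module):
        """Sketch of the residue block described above: 1x1 -> 3x3 -> 1x1 3D convolutions
        with batch normalisation and ReLU; the additive skip path is an assumption."""

        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=1),
                nn.Conv3d(channels, channels, kernel_size=3, padding=1),
                nn.Conv3d(channels, channels, kernel_size=1),
                nn.BatchNorm3d(channels),
            )
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.act(self.body(x) + x)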
While the above architecture is standard practice for the application of CNNs in analyzing images, MeshFreeFlowNet also includes architectural features for inputing and evaluating the PDEs of the system and generating an additional equation loss function. This “physics-informed” extension is characterized by a multilayer-perceptron (MLP) branch that aims to improve upon the physical plausibility of the final high-resolution prediction. Such physics-informed losses have been widely used in recent years <cit.>.
The LCG is randomly sampled at 1024 points in space and time, represented in Figure <ref> as the n points at (x_n, z_n, t_n) being extracted from the LCG. For each n^th point, the 3 spatio-temporal query location points, denoted as the 3 pink-orange squares in the left hand side of the MLP diagram in Figure <ref>, as well as the set of neighbouring vertices that bound x_n for each of the four fields,
denoted by the yellow block in Figure <ref>, are concatenated to form
the input array for the MLP. This input array is also concatenated to the output of the subsequent 4 fully-connected layers in the MLP, which are denoted as the purple blocks in Figure <ref>.
A latent context vector is obtained at each point in space and time, and sampling at arbitrary locations is made possible by trilinear interpolation. The sampled locations and corresponding latent context vectors are passed through the MLP. The final four-dimensional output of this MLP contains the prediction of the values of temperature T, pressure P, and the two components of velocity û and v̂, represented by the pink-blue blocks in Figure <ref> and henceforth denoted as ŷ:
ŷ := (û, v̂, P̂, T̂)
The superresolved, high-resolution prediction ŷ is compared to the corresponding DNS values y at the same locations
using the mean-squared-error loss function
ℒ_𝒫 = (1/N_B)∑_i=1^N_B[ (1/N)∑_j=1^N(ŷ_j^i - y_j^i)·(ŷ_j^i - y_j^i) ]
where N_B is the mini-batch size and N is the number of sampled points for each training slab.
The known underlying PDEs of the system are evaluated at the randomly sampled points from the intermediate LCG.
The PDE loss is given by
ℒ_ℰ = (1/N_B)∑_i=1^N_B[ (1/N)∑_j=1^N( |ℛ_M| + |ℛ_C| + |ℛ_E| ) ]
The final loss is a combination of the predictive loss and the equation loss weighted by a constant γ:
ℒ = ℒ_𝒫 + γℒ_ℰ,
where γ was determined to be γ=0.05 in J20. We adopt γ=0.05 for the remainder of the paper. Further details on the network architecture can be found in the original J20 work.
§.§ This study's architectural modifications
In this work we
add two additional physical constraints to the pipeline. The first additional physical constraint is the inclusion of RBC systems' full set of PDEs in determining ℒ_ℰ. The original J20 implementation neglected the inclusion of the mass continuity Equation (<ref>) in considering their PDE loss. This modification is reflected in the second to last term in (<ref>). We further implement physical constraints by introducing a boundary loss ℒ_ℬ, which mirrors the procedure of the prediction loss except that it only compares the values on the boundaries:
ℒ_ℬ = (1/N_B)∑_i=1^N_B[ (1/N_b)∑_j=1^N_b(ŷ_j^i - y_j^i)·(ŷ_j^i - y_j^i) ]
where N_b is the number of boundary points and the sum is understood to be only over the points on the boundary.
When considering this additional loss, the resultant loss is the total sum of the predictive loss, the equation loss, and the boundary loss:
ℒ = ℒ_𝒫 + γ(ℒ_ℰ + ℒ_ℬ).
Note that in the current work the equation loss and boundary loss use the same weighting factor γ.
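Putting the pieces together, the combined objective could be assembled as in the sketch below (our illustration; how the PDE residuals are evaluated, e.g., by automatic differentiation of the MLP output, is not shown, and the tensor shapes are assumptions):

    import torch

    def total_loss(pred, truth, residuals, boundary_pred, boundary_truth, gamma=0.05):
        """Combined objective L = L_P + gamma * (L_E + L_B), following the equations above.

        pred, truth          : (N, 4) predicted and DNS values at the sampled points.
        residuals            : iterable of point-wise |R_M|, |R_C|, |R_E| tensors, assumed
                               to be evaluated elsewhere (e.g., by autodiff of the MLP).
        boundary_pred/truth  : values restricted to the sampled boundary points.
        """
        l_p = ((pred - truth) ** 2).sum(dim=-1).mean()
        l_e = sum(r.abs().mean() for r in residuals)
        l_b = ((boundary_pred - boundary_truth) ** 2).sum(dim=-1).mean()
        return l_p + gamma * (l_e + l_b)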
§ RESULTS
§.§ Training results
We present the results of training our adopted neural networks for 100 epochs, conducted for each of the simulations considered of Ra=10^6, 10^8, 10^9 and 10^10. The training runs for each Ra are conducted independently, such that the training of one Ra value has no influence on that of the other Ra values explored in this study. Figure <ref> shows the evolution by training epoch of the training (solid lines) and validation (dashed lines) losses as defined by (<ref>) on the top row, as well as the R^2 evolution on the bottom row.
The R^2 is called the coefficient of determination and is a measure of goodness-of-fit, i.e., the degree to which the observed outcomes (in our case, the field values of super-resolved predictions) match that of the theoretical model (being the original DNS simulation here), with a perfect fit being indicated by an R^2 value of 1. This is shown as the grey line at R^2=1 in the bottom panels of Figure <ref>.
We observe that the validation curves across all metrics are less stable at higher Rayleigh numbers. Higher Rayleigh number simulations exhibit fields with enhanced turbulence and mode interaction and thus exhibit more complex features. It is therefore not unreasonable that the increasing complexity in more turbulent flows results in an increased difficulty in replicating the absolute values of the DNS fields. We also observe that training with the inclusion of the full set of PDEs and boundary loss produced similar results as training only with images. This suggests that even with additional physical constraints, the main factor driving the network's prediction is the spatial (image) features of the simulation.
To analyse the performance of our fully trained models, we coarsen the native resolution of each of the full DNS runs in our suite by a factor of 4 and pass it through the architecture described in Section <ref> in evaluation mode to obtain SR predictions of the same dimensions as the native DNS runs. The following discussions refer to the results of these predictions attained in evaluation mode.
In Figures <ref>-<ref>, we showcase a comparison of a snapshot of the DNS run's u-velocity, v-velocity and temperature fields and the corresponding super-resolved predictions of these fields. In the only non-turbulent regime in this investigation, Ra=10^6, shown in Figure <ref>, we see that there are no small-scale or high-frequency structures in the native DNS; only the large oscillatory modes. The SR prediction does an excellent job of reproducing these large modes. However, at the very small scales, the super-resolved prediction results in the generation of systematic, rectangular, high-frequency features. The small scales of the SR prediction therefore contain more energy than those of the DNS at the same scales, which in turn makes the SR prediction less dissipative than the DNS. This kind of artifact is often observed in networks whose architectures include deconvolution layers <cit.> and is known as the “checkerboard effect”. We further highlight this effect when discussing the power spectrum later in this section.
We see that qualitatively, smaller and smaller structures begin to appear in the DNS as we get to increasingly high Rayleigh numbers as shown in Figures <ref>,<ref> and <ref>.
Again, the SR does an excellent job at reproducing the large and medium scale features of the flow. However, the small scales predicted by the SR are somewhat smeared out. This indicates that the SR predictions are more dissipative than their DNS counterparts in these regimes.
§.§ Power Spectra Analysis
To quantify the degree to which the SR network described in Section <ref> captures the physical properties of the DNS, we employ the power spectrum; a two-point statistical tool to determine how energy is distributed as a function of scale.
Figure <ref> shows energy spectra for slices along the streamwise direction, averaged over the wall-normal coordinate and time, which we refer to as the averaged energy spectra. The averaged total kinetic energy spectrum and temperature spectrum are denoted by E_U(k) and E_T(k), respectively.
Figure <ref> depicts comparisons of the DNS and SR energy spectra for each of the Rayleigh numbers investigated in this study, in increasing order from top to bottom. The standard deviation in time
is shown in the shaded areas above and below the lines.
We also show the linear fit of the spectra within the approximate inertial range of k=3-15
to quantify the degree to which the slope deviates from that of the DNS and the analytic theoretical slopes of k^-11/5 for the total kinetic energy, and k^-7/5 for the temperature fields for 2D RBC <cit.>.
We choose the inertial range of k=3-15 because this is the range after which dissipation is observed for the fluid simulations presented in other works <cit.>.
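The averaged spectra and the inertial-range fit can be reproduced schematically as below (our NumPy sketch; the normalisation of the FFT and the averaging choices are assumptions and do not affect the fitted slope):

    import numpy as np

    def inertial_range_slope(field, k_min=3, k_max=15):
        """Streamwise power spectrum averaged over the wall-normal direction, with a
        power-law fit over the approximate inertial range k = 3-15.

        field : (Nz, Nx) snapshot, e.g. a velocity component or the temperature.
        """
        Nx = field.shape[-1]
        fk = np.fft.rfft(field, axis=-1) / Nx
        spectrum = np.mean(np.abs(fk) ** 2, axis=0)     # average over the wall-normal axis
        k = np.arange(spectrum.size)
        sel = (k >= k_min) & (k <= k_max)
        slope, _ = np.polyfit(np.log(k[sel]), np.log(spectrum[sel]), 1)
        return k, spectrum, slope

    # Relative error against the 2D RBC predictions (-11/5 for E_U, -7/5 for E_T):
    # rel_err = (slope - m_theory) / abs(m_theory)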
We make the following observations regarding these power spectra. Firstly, we take note that the oscillatory nature of the low-Ra flows is seen very prominently. The Ra=10^6 and 10^8 power spectra show clear systematic fluctuations within the chosen inertial range, and these oscillations continue for the entire range of k values in the 10^6 case, showing that this system might not really exhibit a proper inertial range. Said oscillations continue to about k=50 in the 10^8 case. Oscillatory fluctuations signaling the presence of the largest scale modes are observed consistently across the power spectra of all Ra cases in the lowest k values, but the power spectra in the chosen inertial range and dissipation scales are visually much smoother in the 10^9 and 10^10 cases, providing clear evidence of the turbulent nature of the flows in these high Ra simulations. This has important implications to consider because the theoretical slopes shown in the dash-dot grey lines underneath the power spectra in Figure <ref> are calibrated against turbulent flows, thus a comparison to theory becomes more appropriate as we get to the higher Ra cases.
Turning our attention again to the top row of Figure <ref>, which showcases the results of the non-turbulent, Ra=10^6 regime, we notice that the aforementioned artificial high-frequency feature, spatially manifesting as systematic, grid-like structures in Figure <ref>, can be clearly observed as a spike in the high-k energies in both the total kinetic energy and temperature spectra, signaling that this is a nonphysical feature and instead an artifact of the NN pipeline described in Section <ref>.
In contrast, the energy spectra of the super-resolved fields in the turbulent (Ra=10^8-10^10) regimes approximately mirror the shape of the DNS spectra more closely, albeit at lower energies which signals a more dissipative field. This suggests that although the energies are not captured faithfully to the smallest scales, which is evident in the smoothing out of small-scale features seen in Figures <ref> and <ref>, the resultant field is more physically realistic than that produced by the non-turbulent case.
The above observations and results are generally consistent between E_U(k) and E_T(k), but especially at the smallest scales at high Ra, the power spectra of the DNS and SR predictions seem to agree slightly better in E_U(k).
As the above observations would not have been obvious from a judgement based purely on the final loss and R^2 values, we have demonstrated the critical value of employing the power spectrum to critique the degree to which an ML-learnt turbulent field is physically realistic.
To investigate the degree to which the slope of the line of best fit to the SR field power spectra in the inertial range deviates from analytical theory, we calculate the relative error between the time-averaged fit and the theoretical slope.
The results from the cases considered are shown in Figure <ref>. A value of 0 indicates a perfect agreement between the SR fields' power spectra's slope of best fit and the theoretical values of m_theory=-11/5 for the total kinetic energy and m_theory=-7/5 for the temperature fields. Therefore, larger deviations from 0 would signify a greater departure from a physically realistic field. The error bars in Figure <ref> indicate the standard deviation in time of the relative error, so we may deem points with error bars that touch 0 to have sufficient agreement between the power law slope of the power spectrum and that of theory.
Whilst most relative error values of the total kinetic energy field, represented by the yellow circles in Figure <ref>, lie around 0 within uncertainty, there is a clear trend demonstrating that the relative error of the temperature field, represented by the blue squares in Figure <ref>, decreases with increasing Ra, with the non-turbulent case at Ra=10^6 showing the greatest deviation from 0. This deviation decreases with increasing Ra. At the highest Ra of 10^9 and 10^10, a deviation of 0 falls within the uncertainty of time variations.
From these analyses, it is evident that the NN-based pipeline, which has been given physics-informed priors, is more adept at describing energy and temperature spectra on most scales in more turbulent systems, whereas the full spectra of non-turbulent or transient fluid flows are not efficiently captured by this method. This result testifies to the ML pipeline's ability to recreate the statistics in the inertial range of a system that exhibits a broad range of spatial frequencies. However, we also emphasize that whilst these results for the inertial range appear promising, upon re-examination of Figure <ref> it is clear that these slopes start deviating from the theoretical slopes and the DNS power spectra rapidly as one starts to consider the smaller, dissipative scales. We therefore reiterate our conclusion that the method presented in Section <ref> is adequate for super-resolution to capture large-scale structures in the inertial range for turbulent systems, but does not reliably capture fine-scale structures.
§ DISCUSSION
Despite the successes of numerical simulations of turbulence in reproducing both analytic scaling predictions <cit.> and the observed turbulent density and velocity power spectral scaling relations <cit.>, a number of major practical challenges remain. Critically, the physical resolution of numerical simulations is severely limited, restricting research to a narrow range of scales to study the turbulence inertial range.
Failing to resolve turbulent systems to the smallest relevant scales would, for example, underestimate the amount of energy within the system <cit.> or misdiagnose the power-law slope of the energy spectrum. As different power law slopes are explained by different theoretical frameworks, resolution becomes critical for distinguishing theories of turbulence. Current high-resolution turbulence simulations can achieve Re ≈ 10^5, running on more than 65,000 cores <cit.>, and such simulations remain remote from realistic values of up to Re=10^10 for the turbulent interstellar medium.
Superresolution ML studies may be a promising avenue for achieving high resolution numerical simulations at a fraction of the computational cost. In this study, we have presented a modified MeshFreeFlowNet architecture that integrates the complete set of simulation PDEs and boundary conditions to enhance the resolution of numerical simulations of turbulent RBC. Our approach is an experimental step towards addressing the longstanding challenge of limited resolution in astrophysical turbulence simulations by employing ML techniques for superresolution. We find that the model successfully recovers large-scale information and the inertial range scaling in both convective and highly turbulent RBC systems, with a training set encompassing Ra ranging from Ra=10^6 to 10^10.
We have demonstrated that the superresolution architecture explored here is successful at reproducing the statistics of more turbulent flows, where a rich variety of structures exists at multiple length scales. This is in contrast to laminar
flows, which exhibit a lower degree of complexity. Furthermore, our analysis also indicates that, since higher Rayleigh numbers exhibit richer features within fewer free-fall times, the CNN is able to capture more relevant information. This finding suggests that this model is particularly well-suited for studying highly turbulent systems and could prove valuable in advancing the achievable resolution of astrophysical turbulence simulations.
In order to assess the model's performance in recreating the full range of spatial frequencies present in the ground truth simulation, we compared the superresolution's predicted power spectrum slopes to the theoretically expected power spectrum slope within the inertial range. Our analysis revealed that both the total energy and temperature power spectra are well-recovered and agree well with theory within our chosen inertial range of k=3-15 for the turbulent Ra=10^9 and Ra=10^10 simulations. Turbulent systems exhibit mode-mode interactions which create a hierarchy of interacting phases <cit.>, leading to a richer phase structure to train on. We therefore speculate that the procedure performs well on more turbulent systems because the NN can better extract information and fill in the small scales. These results demonstrate the effectiveness of our modified MeshFreeFlowNet in capturing the essential features of RBC systems for turbulent simulations.
On the other hand, we have observed high-frequency features in low Ra simulations, which result in an artificial energy pile-up, or in the ML community, a “checkerboard effect”. This phenomenon is an unphysical systematic artifact of the chosen architecture and may impact the accuracy of the model in lower Ra regimes. Further investigation into the causes of this energy pile-up and possible mitigation strategies will improve the model's performance across a broader range of Ra values.
Despite the promising results, there are potential avenues for further improvement. Future research could focus on incorporating additional physics constraints for turbulent systems, such as incorporating anisotropy or intermittency, to enhance the model's ability to capture the complex dynamics of turbulent systems. The model validation could also include higher-order statistics <cit.> in addition to our use of the power spectrum in this work.
Additionally, the model could be extended to study other types of astrophysical turbulence, such as magnetohydrodynamic (MHD) turbulence <cit.>, which would require altering the PDEs of the system and retraining on a simulation suite such as the Catalog for Astrophysical Turbulence Simulations (CATS; <cit.>).
Furthermore, we acknowledge the caveats and limitations of this study that need to be considered:
* Hyperparameter tuning for γ: The performance of our model is sensitive to the choice of the hyperparameter γ, which may affect the balance between the physics constraints and the neural network's ability to learn complex features. Future work should explore techniques for optimizing γ, such as grid search or Bayesian optimization (a minimal grid-search sketch is given after this list), together with proper cross validation, to improve model performance across a range of Rayleigh numbers.
* Effect of the number of turnover times in the training data: The quantity and quality of the training data, particularly the number of turnover times captured, can have a significant impact on the model's ability to learn the underlying physics of the system. Further research should investigate the relationship between the number of turnover times and the model's performance, with a focus on determining the optimal amount of training data required for accurate turbulence prediction.
* Generalizing to 3D: The current model is limited to 2D simulations of RBC systems. However, astrophysical turbulence phenomena occur in three dimensions, necessitating the extension of the model to 3D simulations. This transition will likely require further modifications to the neural network architecture, as well as additional computational resources to accommodate the increased complexity of 3D systems.
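As an example of the first point above, a simple grid search over γ could be organized as follows; this is only a sketch, and train_fn and val_metric_fn are placeholders standing in for the training routine and a validation diagnostic (e.g. the spectral relative error discussed earlier), not functions from our codebase.

```python
def grid_search_gamma(train_fn, val_metric_fn, gammas=(0.01, 0.05, 0.1, 0.5, 1.0)):
    """Exhaustive search over the physics-loss weight gamma.

    train_fn(gamma)      -- trains the SR model with the given gamma and returns it
    val_metric_fn(model) -- returns a validation error (lower is better)
    """
    results = {}
    for gamma in gammas:
        model = train_fn(gamma)
        results[gamma] = val_metric_fn(model)
    best_gamma = min(results, key=results.get)
    return best_gamma, results
```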
In conclusion, while our modified MeshFreeFlowNet architecture has shown promise for superresolution turbulence studies, addressing the aforementioned caveats and limitations will be crucial for further improving the model's performance and applicability to a broader range of astrophysical systems.
§ CONCLUSIONS
In this work, we have modified the MeshFreeFlowNet architecture to include the full set of simulation PDEs for an RBC system as well as a loss function for the simulation boundary conditions. We study the network's performance on a range of Rayleigh numbers from Ra=10^6-10^10 in the statistically steady regime. We find that:
* The network performs well at all Rayleigh numbers studied here in recovering large-scale features and the power spectrum at small wave numbers.
* We find that our updated architecture performs significantly better at capturing the features and power spectrum in more turbulent numerical setups (i.e., larger Ra systems). This is likely because more turbulent systems have a rich variety of structures at many length scales in comparison to laminar flows.
* For Ra>10^8, the superresolution CNN network used here is overly dissipative at large wavenumbers compared to the direct numerical simulations but still recovers the theoretical slope of the inertial range for both temperature and kinetic energy fields.
D.M.S. is supported by the 2022 Future Investigators in NASA Earth and Space Science and Technology (NASA FINESST) Fellowship (NASA grant 80NSSC22K1604). D.M.S. is also grateful for the generous support of the 2023 Quad Fellowship.
B.B. acknowledges support from NSF grant AST-2009679 and NASA grant No. 80NSSC20K0500.
B.B. is grateful for the generous support of the David and Lucile Packard Foundation and the Alfred P. Sloan Foundation.
The Flatiron Institute is supported by
the Simons Foundation. All authors are grateful for the use of both GPU and CPU resources on the Rusty Supercomputer cluster at the Simons Foundation Flatiron Institute.
|
http://arxiv.org/abs/2307.01391v1
|
20230703230503
|
A New Learning Approach for Noise Reduction
|
[
"Negin Bagherpour",
"bbas Mohammadiyan"
] |
cs.CE
|
[
"cs.CE",
"62R07, 60H50"
] |
Article Title]A New Learning Approach for Noise Reduction
[1]Negin [email protected]
2]Abbas [email protected]
*[1,2]Department of Engineering Sciences, University of Tehran, Tehran, 14155-6619, Iran
Noise is a part of data, whether the data comes from measurement or experiment. In recent years, several techniques have been suggested for noise reduction to improve data quality, some of which are based on wavelets, orthogonalization, and neural networks. The computational cost of existing methods is higher than expected, which is why their application is not beneficial in some cases. In this paper, we suggest a low-cost technique based on special linear algebra structures (tridiagonal systems) to improve the signal quality. In this method, we suggest a tridiagonal model for the noise around the most noisy elements. To update the predicted noise, the algorithm is equipped with a learning/feedback approach. The details are described below; based on the presented numerical results, this algorithm succeeds in computing the noise with lower MSE (mean squared error) and lower computation time, especially when the data size is below 5000. Our algorithm is intended for low-range noise, while for high-range noise it is sufficient to use the presented algorithm in hybrid with a moving average. The algorithm is implemented in MATLAB 2019b on a computer with Windows 11 and 8GB RAM. It is then tested on many randomly generated experiments. The numerical results confirm the efficiency of the presented algorithm in most cases in comparison with existing methods.
9 June 2023
===============
§ INTRODUCTION
Data analysis is a very common problem in machine learning and signal processing. Assume that a quantity X is measured in n different cases and we need to analyze the provided data. Since the data contains some error, whether it is obtained by experiments or by direct measurement tools, we need to detect and reduce the noise before starting any analysis. Different noise reduction algorithms have been suggested, with specific applications to audio or images; see for example <cit.>. In recent years, wavelets and least squares have played an important role in the suggested noise reduction algorithms. Chen <cit.> presented an algorithm based on noise orthogonalization. He also outlined a novel noise reduction technique by use of reverse least squares and shaping regularization <cit.>. Moreover, he developed the first wavelet-based algorithm for noise reduction in 2017 <cit.>. On the other hand, Huang <cit.> provided a singular spectrum analysis for 3D random noise. A few neural network based algorithms have also been suggested for noise detection <cit.>. There are some complications in solving the noise detection problem, such as difficult mathematical modeling, high computational cost, and high sensitivity to noise quality and size. Learning approaches can solve these issues by following the error trend in consecutive iterations. In each iteration of a learning algorithm, the current noise estimation is evaluated to suggest a proper update. In this paper, we provide a new algorithm for detecting and reducing the noise, which has benefits over existing methods in some cases. Each iteration of this algorithm consists of three main steps:
1) It suggests a tridiagonal model for the signal entries which show more noisy behavior.
2) The noise is approximated by solving the tridiagonal model.
3) It updates the input signal considering the assumed noise.
Our contributions are as follows:
1) We outline a two phase noise reduction algorithm which suggests a tridiagonal model to estimate noise, compute an approximated noise-free signal and check the improvement to verify the quality or revise the noise in each iteration.
2) The hybrid of the regression phase and the learning phase makes the noise reduction process faster.
3) The complexity of the proposed algorithm is relatively low.
4) We can substitute the tridiagonal model by any proper matrix structure based on signal characteristics. We categorized some special cases in Section <ref>.
§ OUR ALGORITHM
§.§ Initialization
The algorithm starts by guessing the noise through the simple idea of a moving average and a normal distribution; these midpoints will be used in subsequent calculations. It then goes on to detect the potential noisy elements in the corrupted data and to reduce the error. This is repeated (in the next step) until the stopping criterion is satisfied; the stopping criterion is either predefined by the user or detected automatically from the main data itself.
§.§ Approximation loop
The algorithm enters a while loop that continues as long as the error E is greater than the specified tolerance (i.e. stopping criterion) and the counter is within the maximum number of iterations that we have predefined.
Inside the loop, the second-order differences of GT are computed, and the maximum value M is determined. This helps identify the elements of interest for the approximation.
The elements from GT that meet the condition abs(DD)-0.7*M>0 are selected and stored in gt. These elements will be used in the approximation calculations.
The length of gt is determined and stored as n.
The approximation vector f is initialized as a zero vector with a length of n.
If n is greater than zero, the algorithm proceeds with the approximation calculations.
Within the for loop, a set of input values In is generated, evenly spaced over a range related to the length n. These input values are used to evaluate the PDF PD1 later on.
The PDF values N are computed by evaluating the PDF PD1 using the input values In.
Intermediate arrays and variables are initialized for subsequent calculations.
Random-based calculations are performed to update the elements of f based on the values of N and gt. These calculations involve random coefficients and the proportionate allocation of the PDF values.
A tridiagonal matrix T is constructed based on the calculated values. The diagonal elements are determined by the updated d array, while the off-diagonal elements are determined by the updated mu and rho arrays.
The linear system T * f = N is solved to obtain the updated approximation vector f.
The error E is updated by calculating the norm of the difference between f and gt.
The counter k is incremented by 1 to keep track of the number of iterations.
§.§ Post-processing and analysis
Once the while loop finishes, the measured values GT are updated by replacing the selected elements based on gt with the corresponding elements from f. This reflects the refined approximation.
The measured values GT are plotted against the exact values Gexact to visualize the approximation and assess the quality of the results.
The mean squared errors (mse1 and mse2) between the exact values Gexact and the measured values Gmeasured are computed and stored. These metrics provide a quantitative assessment of the approximation's accuracy.
The algorithm iterates through the approximation loop, adjusting the approximation vector f based on the calculated PDF values and the selected elements from gt. The goal is to refine the approximation and minimize the error between the measured values and the true values. The process continues until the error falls below the specified tolerance or the maximum number of iterations is reached.
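For concreteness, a compact Python sketch of the loop described above is given below. This is our reconstruction of the procedure rather than the authors' MATLAB implementation; the random tridiagonal coefficients, the PDF range, and the variable names are assumptions.

```python
import numpy as np

def ltd_denoise(signal, tol=1e-3, kmax=100, frac=0.7, seed=0):
    """Sketch of the LTD loop: select noisy entries via second differences,
    model them with a small tridiagonal system, and iterate with feedback."""
    rng = np.random.default_rng(seed)
    g = np.asarray(signal, dtype=float).copy()
    dd = np.diff(g, n=2)                               # second-order differences
    if dd.size == 0:
        return g
    m = np.max(np.abs(dd))
    idx = np.where(np.abs(dd) - frac * m > 0)[0] + 1   # most noisy elements
    n = idx.size
    if n == 0:
        return g
    err, k, f = np.inf, 0, np.zeros(n)
    while err > tol and k < kmax:
        x = np.linspace(-3.0, 3.0, n)                  # evenly spaced inputs
        pdf = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
        d = 1.0 + rng.random(n)                        # random main diagonal
        off = 0.1 * rng.random(max(n - 1, 0))          # random off-diagonals
        T = np.diag(d) + np.diag(off, 1) + np.diag(off, -1)
        f = np.linalg.solve(T, pdf)                    # solve T f = N
        err = np.linalg.norm(f - g[idx])               # learning/feedback check
        k += 1
    g[idx] = f                                         # refined approximation
    return g
```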
§.§ Pseudocode
§ SOME HINTS ABOUT CONVERGENCE
Here, we provide some points about the convergence of LTD algorithm:
* Although we do not have a convergence proof for LTD, our numerical results confirm at least superlinear convergence. See Figure <ref>.
* As described in Section 2, in each iteration of the LTD algorithm a low-dimensional tridiagonal system is solved to improve the signal quality. To determine the most noisy elements, we suggest selecting, as a rule of thumb, the entries whose second difference is greater than 70 percent of its maximum value. Based on our observations, this is a proper choice in most of the tests.
* The values of kmax and δ depend on the data size. Proper choices are presented in <ref>.
§ NUMERICAL RESULTS
In this section, we present the numerical results. We implement the LTD and MSSA algorithms in MATLAB 2019b on a computer with a 2.4 GHz Core i5 CPU and 8GB RAM. We then test the codes on real and randomly generated noisy data. To generate random tests, both the rand and randn commands are used for the exact data, and normal noise is added. In each experiment, the goal is to capture the added noise as fast as possible. We compare both MSE and time to show the effectiveness of our proposed algorithm in approximating the noise term more precisely and in less time. In Figure <ref>, the Dolan-More time profiles are shown to verify the speed of LTD.
Moreover, the average time and MSE are reported for the random tests. To provide more accurate results, we repeat each experiment 20 times and report the average results in Table <ref>. As the data size grows, MSSA tends to outperform our algorithm; however, for data sizes not greater than 1000, LTD has two desirable features: lower MSE and lower computing time.
§ CONCLUDING REMARKS
Noise reduction was the target of this paper. The most important contribution was to outline a low computational cost algorithm for detecting small data fluctuations. In our suggested algorithm, two phases are introduced: the first is to suggest a local tridiagonal model around the most noisy entries to detect the noise, and the second is to design a learning/feedback process to decide whether the predicted noise satisfies the necessary quality conditions in the next iterations. According to the presented numerical results, the presented algorithm was able to detect the small fluctuations with lower mean squared error in less computational time. Working on optimal parallelization techniques is suggested as future research to denoise large-scale data sets.
|
http://arxiv.org/abs/2307.00910v1
|
20230703101433
|
Contextual Prompt Learning for Vision-Language Understanding
|
[
"Koustava Goswami",
"Srikrishna Karanam",
"Joseph K J",
"Prateksha Udhayanan",
"Balaji Vasan Srinivasan"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Contextual Prompt Learning for Vision-Language Understanding
Koustava Goswami, Srikrishna Karanam, Joseph K J, Prateksha Udhayanan and Balaji Vasan Srinivasan
Adobe Research, Bangalore India
.7{koustavag,skaranam,josephkj,udhayana,balsrini}@adobe.com,
=======================================================================================================================================================================================================
empty
Recent advances in multimodal learning have resulted in powerful vision-language models, whose representations are generalizable across a variety of downstream tasks. Recently, their generalizability has been further extended by incorporating trainable prompts, borrowed from the natural language processing literature.
While such prompt learning techniques have shown impressive results, we identify that these prompts are trained based on global image features which limits itself in two aspects:
First, by using global features, these prompts could be focusing less on the discriminative foreground image, resulting in poor generalization to various out-of-distribution test cases. Second, existing work weights all prompts equally whereas our intuition is that these prompts are more specific to the type of the image.
We address these issues as part of our proposed Contextual Prompt Learning (CoPL) framework, capable of aligning the prompts to the localized features of the image.
Our key innovations over earlier works include using local image features as part of the prompt learning process, and more crucially, learning to weight these prompts based on local features that are appropriate for the task at hand. This gives us dynamic prompts that are both aligned to local image features as well as aware of local contextual relationships. Our extensive set of experiments on a variety of standard and few-shot datasets show that our method produces substantially improved performance when compared to the current state of the art methods. We also demonstrate both few-shot and out-of-distribution performance to establish the utility of learning dynamic prompts that are aligned to local image features.
§ INTRODUCTION
Fully supervised computer vision models for problems like classification are typically trained on datasets like ImageNet <cit.>, OpenImages <cit.>, JFT300M <cit.> etc., and have also proven themselves to be effective for a variety of downstream tasks via transfer learning <cit.>. Despite this, it is challenging to adapt these models to other domains due to various reasons including limited data, annotation constraints etc. Additionally, since these models are trained for specific objectives like classification, they tend to capture concepts related to categories seen during training and not to scale to unseen classes during inference.
To address the finetuning issue above, there has been recent effort in tuning the associated prompts (instead of the model weights). Inspired by traditional prompt engineering efforts <cit.> , there has been some work in tuning discrete prompts from predefined prompt templates <cit.> that help in capturing rich semantics from user intents and align them to visual contents.
However, since building a rich semantic based prompt templates require domain specific and linguistic knowledge, it is tedious and time consuming job to perform.
The CoOp <cit.> algorithm used ideas from soft-prompting in natural language processing <cit.> to train dynamic learnable prompt vectors with backpropagation and preserve the semantic relationship between sentences and labels <cit.>. However, the context learned with CoOp in this fashion fails to generalize to unseen classes, leading to the need for dynamically updating prompts based on the image context, an idea that was proposed in CoCoOp <cit.>. This model was trained by explicitly conditioning the prompts on image feature vectors as tokens, where a separate lightweight neural network (called meta-net in their work) was used to equally weight all the prompt vectors.
In our work, we identify two key issues with the aforementioned architecture. First, the features obtained from the meta-net in CoCoOp <cit.> are global in nature and hence susceptible to issues like clutter and noise in many few-shot and out-of-distribution test cases (we demonstrate this empirically later on). Next, these features are directly added to all the learned prompt vectors, thus resulting in an equal weighting for each prompt. Consequently, this model is unable to learn which of the prompt vectors are more semantically relevant and contextually meaningful during inference, which makes the model less generalizable to unseen classes in zero-shot settings.
To address the aforementioned problems, we propose a new technique called Contextual Prompt Learning (CoPL). Our key ideas include aligning prompts to local image context, realized with local features, and determining which prompts are more semantically relevant conditioned on such local context. Our insight is that by doing so, we are able to learn a more appropriate weighting of the prompts, unlike the equal weighting of prior work above, that is semantically reflective of the actual content of the image under consideration. Inspired by the concept of global attention from the NLP literature <cit.>, during training, we propose to align each local feature vector (e.g., computed from a local image patch) to a set of dynamic soft-prompts using a learned context vector that attends to these prompt vectors. An overview of the methodology is shown in Figure <ref>.
This produces a set of attention weights for the prompt vectors that are semantically aligned to local image regions. This results in CoPL learning more generalizable features as we demonstrate with an extensive set of zero- and few-shot classification evaluation results.
We conduct a comprehensive set of experiments on visual classification on 11 different datasets (see Section <ref>) and scenarios (zero-shot, one-shot, seen/unseen, and within-dataset and cross-dataset). Across all these experiments, we demonstrate substantial performance improvement when compared to our closest baselines CoOp <cit.> and CoCoOp <cit.>, indicating the ability of our method to be adapted across various classification settings with little or no training and much reduced prompt engineering.
Our key contributions are summarized below:
* We identify two key issues with existing prompt-based image classification methods: equal weighting of the prompt vectors and no flow of contextual local information of input images to the prompt vectors while learning during back-propagation.
* We propose CoPL: Contextualized Prompt Learning, a new method that addresses the issues above by learning prompt weights dynamically and aligning the resulting prompt vectors with local image features.
* We conduct extensive experiments with CoPL under a variety of classification scenarios and demonstrate substantial performance improvements, in particular in various unseen and few-shot data scenarios. Notably, on 11 different image recognition datasets, CoPL on average achieves state-of-the-art performance on unseen-class classification, beating the state-of-the-art zero-shot large-scale model CLIP by 1.4% and the conditional prompting model CoCoOp by 3.9% in accuracy. We also evaluate CoPL on cross-dataset zero-shot image recognition tasks; on average over 8 datasets, CoPL outperforms CoCoOp by 2.3% in accuracy.
§ RELATED WORK
Multimodal Models
Vision-language models have shown great potential in learning generic visual representations. The core idea has been to use natural language supervision for image representation learning and to align both modalities jointly in the same embedding space <cit.>. A vision-language model can be seen as a composition of three parts: a text encoder, an image encoder, and a learning methodology that leverages information from both these modalities. Early explorations in this line of research included the problem formulations of metric learning <cit.>, multilabel classification <cit.>, n-gram language learning <cit.>, and captioning <cit.>. Traditionally, hand-crafted descriptors <cit.> were the mode of capturing image representations. With the introduction of deep neural nets, researchers used convolutional neural network based architectures <cit.> to learn image representations, whereas the texts were encoded by simply taking TF-IDF features of the words. With the introduction of pre-trained word vectors, the representations improved by capturing the semantic relatedness of the words <cit.>.
Recent works have focused on learning joint representations of both modalities using deep learning architectures <cit.>. As mentioned in <cit.>, there have been significant developments in the areas of transformers <cit.> and contrastive representation learning <cit.>. Li et al. <cit.> proposed VISUALBERT, where texts and images are jointly encoded in a single transformer architecture. The core idea was to align both modalities inside the transformer architecture by leveraging the cross-attention mapping. The text embeddings are obtained from the BERT language model <cit.> and the image embeddings are obtained from a carefully designed image encoder.
One of the biggest releases in the multimodal space is the CLIP model <cit.>. It is a dual encoder based model, and during training it matches pairs of images and texts. It is trained on a dataset of 400 million image-text pairs and achieved state-of-the-art performance on different zero-shot downstream tasks. One of the main components of CLIP is its carefully designed prompts, which are often very hard to formulate. To overcome this, Zhou et al. <cit.> designed CoOp, which trains dynamic soft-prompts during back-propagation. They keep the underlying multimodal model fixed and only train the prompt tokens. CoOp has produced impressive results on different image recognition tasks.
Prompting The intuition behind prompt learning is to capture user intention and instructions to perform certain downstream tasks. In the natural language processing literature, capturing instructions to perform various downstream tasks is well established <cit.>. With the introduction of GPT <cit.>, prompt engineering has been shown to perform efficiently in few-shot knowledge adaptation. Recently, Liu et al. <cit.> designed a template-based question answering model to perform downstream named entity recognition tasks. In their work, they referred to the classical template prompting setup where the templates consist of questions as prompts, such as What is the [E]?. In a probabilistic setup, the model predicts the [E] token from the distribution of classes which completes the sentence. For example, if the class is location, then the model predicts the token and fills in [E] to complete the sentence as What is the location?. The key problem behind building prompt templates is to identify the format and the linguistic relations with the classes within the dataset. Thus, it requires considerable skill and knowledge to come up with good prompt templates <cit.>. Jiang et al. <cit.> proposed that candidate prompts can be generated using text mining, relying on the knowledge of the language model. Deng et al. <cit.> came up with a solution to optimize the discrete prompts using reinforcement learning. The architecture consists of a parameter-efficient policy network which generates the discrete prompts after optimization. They implemented an effective reward stabilization process to tune the rewards efficiently using language models.
Recently, researchers have proposed soft-prompting, which overcomes the shortcomings of template-based prompting, including careful prompt design and engineering. The main intuition is to learn dynamic continuous prompt tokens during back-propagation <cit.>. During optimization, the weights of the language models are fixed. During training, the models predict the classes, which are carefully mapped from the vocabulary through so-called verbalizers. Recently, Goswami et al. <cit.> highlighted that soft prompts can be further tuned with the semantic knowledge of the language models for domain adaptation without an explicit verbalizer setup.
Fine-tuning Before the adoption of prompt-based learning, the standard process was to fine-tune existing pretrained models on the downstream vision tasks <cit.>. Gao et al. proposed CLIP-Adapter <cit.> to improve the efficiency of CLIP by applying a residual transformation layer over the feature embedding or classifier weight. Wortsman et al. <cit.> proposed WiSE-FT, which fine-tunes the layers as a post-ensemble method to improve the efficacy of CLIP.
§ METHODOLOGY
Here, we introduce and discuss details of our proposed method we call CoPL: Contextualized Prompt Learning. Since CoPL uses the same architectural backbone as CLIP <cit.>, we first begin with a brief review followed by a discussion on how our closest baseline, CoCoOp <cit.>, is trained. This will help set the stage for motivating and discussing our method which we do subsequently. While we use the CLIP backbone for simplicity, we are in no way limited by this design choice. In fact, our method is very much applicable to be used in conjunction with a variety of other architectures, e.g., VisualBERT <cit.>, MDETR <cit.>, GLIP <cit.> etc.
§.§ Review of CLIP
CLIP <cit.> is trained using the standard contrastive learning setup where there is a text encoder and an image encoder and the overall objective function is to get their outputs as close as possible in the joint space. The text encoder is implemented with the Transformer model <cit.> which takes word sequences as input and produces both individual sequence-level as well as overall sentence-level representations. The image encoder is implemented with the ViT architecture <cit.> which produces local (at the patch level) as well as global image features. The contrastive loss function is designed to capture the similarity between the relevant text and images, that is, the cosine similarity between the related text and image will be maximized whereas the cosine similarity with all other unrelated pairs will be minimized. To be precise, for a K-class classification problem, the discrete prompts are designed to have one class token. The training objective is then to fill the token with the i-th class name where w_i is the weight vector generated by the text-encoder for the same.
p(y|x) = exp(sim(x,w_y)/γ) / ∑_i=1^K exp(sim(x,w_i)/γ)
where x is the image feature vector produced by the image encoder and γ is a temperature parameter.
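As a rough illustration (not the authors' code), the class probabilities above can be computed from normalized embeddings as follows; the tensor shapes and the temperature value are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def clip_class_probs(image_feat, text_feats, gamma=0.01):
    """Zero-shot class probabilities from cosine similarities (cf. the equation above).
    image_feat: (d,) image embedding; text_feats: (K, d) per-class text embeddings."""
    x = F.normalize(image_feat, dim=-1)
    w = F.normalize(text_feats, dim=-1)
    sims = w @ x                       # sim(x, w_i) for each class i
    return F.softmax(sims / gamma, dim=-1)

# example with random embeddings for K = 5 classes and d = 512
probs = clip_class_probs(torch.randn(512), torch.randn(5, 512))
```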
§.§ Review of CoCoOp
CoCoOp: Conditional Context Optimization is built on top of a previously published Context Optimization (CoOp) <cit.> algorithm. It is observed that the choice of prompts plays a major role in vision-language understanding <cit.>. However, finding the right match between prompts and description of the image often takes a significant amount of time <cit.> and sometimes can change the model results drastically. For instance, adding task-specific manual prompts in the Flowers102 dataset <cit.> would include the word flowers, whereas for the DTD dataset <cit.> the word should be texture. In both cases, adding these words in prompts requires manual intervention and careful observation. To overcome such a need for prompt engineering, the CoOp model <cit.> trains a set of continuous vectors {𝐯_1,𝐯_2,…,𝐯_M} as context tokens during back-propagation. These vectors are represented as word vectors instead of discrete predefined prompts. Specifically, the prompts for the i-th class, denoted by t_i can be represented as [𝐯_1,𝐯_2,…,𝐯_M,𝐜_i], where M is the number of learnable prompt vectors and 𝐜_i is the class embeddings. The CoCoOp <cit.> algorithm improves CoOp's <cit.> performance learning to generate prompts conditioned on each image instance, i.e., these now change for each image unlike in CoOp where they are fixed. CoCoOp does this by training an additional two-layer neural network called meta-net that takes an image feature vector as input and produces a conditional vector that is combined with the prompt vectors to generate the final image-dependent prompts.
This is done as follows:
𝐯_m(𝐱) = 𝐯_m + π
π = h_θ(𝐱)
where h_θ refers to the meta-net and 𝐱 the image feature vector.
§.§ CoPL: Contextual Prompt Learning
While CoCoOp generalizes better than CoOp for unseen-class classification, many issues remain. First, since CoCoOp uses global feature vectors for learning the updated prompts, it focuses less on the discriminative regions in images (which tend to be more local). Next, the addition operation performed in Equation <ref> does not capture the individual importance of each prompt token. This is particularly important since certain discriminative regions in images may weight certain prompts more when compared to others and this is not captured in the CoCoOp model. To address these issues, we propose CoPL, a simple and intuitive algorithm that operates at the local feature granularity while also aligning them with prompts to learn prompt importance weights.
Given an image 𝐈, we first compute a set of local feature vectors. For instance, this can be the output of a vision transformer model that generates patch embeddings. Let 𝐬∈ℝ^P× B × d denote this set, where P is the number of patches from the image, B is the training batch size, and d is the feature dimensionality. Conditioned on these local features, we determine the semantically most meaningful prompts. To do this, we take inspiration from the attention work of Luong et al. <cit.> and generate context representations that explicitly consider both the learnable prompt tokens and the patch representations. We first learn a lightweight neural network to generate a conditional token for the representation of each patch in 𝐬 as:
𝐬_p = h_θ(𝐬_p)
where p∈{1, 2, …, P}. This makes the architecture parameter-efficient and easily differentiable during back-propagation. To generate the context representations that can be used to update the prompt tokens, we first learn a variable-length alignment vector 𝐚_p (𝐚∈ℝ^B× M × d), one for every patch p, that attends to each prompt token 𝐯_i and compares it to the corresponding image patch representation 𝐬_p as:
𝐚_p = align (𝐬_p,𝐯_i)
= exp(score(𝐬_p,𝐯_i)) / ∑_j=1^M exp(score(𝐬_p,𝐯_j))
where i ∈{1,2,…,M} and M is the number of prompt tokens. In our setup, score refers to the content-based scoring function and is implemented as:
score(𝐬_p,𝐯_i) = tanh(𝐖_a[𝐬_p;𝐯_i])
where W_a is the weight vector.
Finally, the per-patch context representation 𝐜_p is calculated as the weighted sum over all prompt tokens as:
𝐜_p = ∑_i=1^M𝐚_pi𝐯_i
The final prompt tokens are now obtained by conditioning on the context vectors above as:
𝐯_m(𝐱) = ∑_i=1^P𝐯_m+𝐜_i
In a nutshell CoPL calculates the prompts for the i-th class as t_i =[v_1(x),v_2(x),...,v_M(x),cl_i], where cl_i is the i-th class. The prediction probability is calculated as
p(y|x) = exp(sim(x,g(t_y(x)))/γ) / ∑_i=1^K exp(sim(x,g(t_i(x)))/γ)
where g(·) denotes the text encoder that maps a prompt to its feature vector. Throughout our entire pipeline, the pre-trained CLIP model is kept fixed.
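To make the preceding equations concrete, the following PyTorch-style sketch implements the conditioning step as we read it from the equations above; the module names, hidden sizes, and meta-net architecture are our assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

class CoPLPromptConditioner(nn.Module):
    """Minimal sketch of the CoPL conditioning step (shapes/names are assumptions).
    prompts: (M, d) learnable tokens; patch_feats: (P, d) local image features."""
    def __init__(self, dim, n_prompts):
        super().__init__()
        self.meta_net = nn.Sequential(nn.Linear(dim, dim // 16), nn.ReLU(),
                                      nn.Linear(dim // 16, dim))   # lightweight h_theta
        self.W_a = nn.Linear(2 * dim, 1)                           # score function weights
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim))   # v_1 .. v_M

    def forward(self, patch_feats):
        P, d = patch_feats.shape
        M = self.prompts.shape[0]
        s = self.meta_net(patch_feats)                              # conditional tokens (P, d)
        # score(s_p, v_i) = tanh(W_a [s_p ; v_i]) for every patch/prompt pair
        pairs = torch.cat([s.unsqueeze(1).expand(P, M, d),
                           self.prompts.unsqueeze(0).expand(P, M, d)], dim=-1)
        scores = torch.tanh(self.W_a(pairs)).squeeze(-1)            # (P, M)
        attn = torch.softmax(scores, dim=-1)                        # alignment a_p over prompts
        context = attn @ self.prompts                               # context vectors c_p, (P, d)
        # v_m(x) = sum_p (v_m + c_p), following the equations above literally
        return P * self.prompts + context.sum(dim=0, keepdim=True)  # (M, d)

# usage: condition 4 prompt tokens on 196 ViT patch features of width 512
conditioner = CoPLPromptConditioner(dim=512, n_prompts=4)
prompts_for_image = conditioner(torch.randn(196, 512))
```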
§ EXPERIMENTAL EVALUATION
Our key insight in CoPL is that we are able to learn generalizable representations by focusing more on the local representations of the image, along with differentially weighting the learned prompt embeddings. To bring out the efficacy of this, we evaluate our method in multiple settings:
(i) knowledge transfer to unseen classes from seen classes within the same dataset, (ii) zero-shot performance across multiple datasets, (iii) efficiency on unseen classes while trained with 1-shot training set within the same dataset.
Datasets We follow Zhou et al. <cit.> to evaluate our model on 11 image classification dataset of varying complexity. The datasets include: generic classification datasets like ImageNet <cit.> and Caltech-101 <cit.>; curated fine-grained datasets like OxfordPets <cit.>, StanfordCars <cit.>, Flowers102 <cit.>, Food101 <cit.> and FGVCAircraft <cit.>; scene, action, texture and satellite image recognition datasets from SUN397 <cit.>, UCF101 <cit.>, DTD <cit.> and EuroSat <cit.> respectively.
For the few-shot experiments, we follow Zhou et al. <cit.> to randomly sample datapoints for training. The models are evaluated on the entire test set to report the accuracy. For settings (i) and (ii) (inter-dataset and intra-dataset experiments), we use 16 shots, consistent with the baseline methods <cit.>. We use the standard classification accuracy as the evaluation metric across all experiments.
Training Details
We build our method on top of the publicly available implementation of CoCoOp <cit.>.
For the CLIP architecture, we use ViT-B/16 as the image encoder. For all our experiments, we use a prompt token length of 4. All our models are trained with a batch size of 1 for 10 epochs on a single GPU system (a 15 GB Tesla T4). We use standard stochastic gradient descent (SGD) <cit.> as our optimizer, with a starting learning rate of 0.002 and a cosine learning rate scheduler. The warm-up phase uses a constant learning rate of 0.00001.
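For reference, a minimal PyTorch configuration matching these training details might look as follows; the stand-in parameter tensor is an assumption made only so the snippet runs on its own.

```python
import torch
import torch.nn as nn

# stand-in for the learnable prompt vectors (4 tokens of width 512)
prompt_tokens = nn.Parameter(torch.randn(4, 512))
optimizer = torch.optim.SGD([prompt_tokens], lr=0.002)          # SGD, initial lr 0.002
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)
warmup_lr = 1e-5   # constant learning rate used during the warm-up phase
```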
Baseline Models We compare our approach with the current state-of-the-art method CoCoOp <cit.>, with CoOp <cit.>, and also with the large-scale zero-shot methodology CLIP <cit.>.
When comparing with CLIP, we are in effect comparing our learned prompt embeddings with manually designed prompts. Our experimental settings closely follow those of Zhou et al. <cit.>.
§.§ Knowledge Transfer to Unseen Classes
One of the main issues of CoOp is that it is unable to generalize to unseen classes while being trained on the base classes. In CoCoOp, the authors claim to bridge the gap by learning prompts conditioned on the global features of the image. However, they miss out on capturing the local features of the image and do not give weight to the semantically aligned learnable prompts, which is the key focus area of our work. Following the implementation of Zhou et al. <cit.>, we conducted our experiments on the above-mentioned 11 datasets, both for seen and unseen classes. While training is only conducted on the base classes, during testing we transfer the learnt knowledge to classify unseen classes as well as seen classes.
From Table <ref>, we can see that across datasets, there is a decrease in performance for the CoOp methodology compared to the zero-shot large-scale method CLIP. Though there is an improvement for the seen classes, the failure across unseen classes makes the model less effective compared to CLIP. Though the introduction of conditional prompting improved the performance of CoCoOp, in most cases it fails to generalize to unseen classes with the dynamically learnt prompts (on 7 out of 11 datasets, CLIP outperforms the CoCoOp methodology). This raises the question of how to effectively learn prompts that are more generalizable and aligned with the input image features.
Interestingly, our proposed approach CoPL improves the accuracy on unseen classes over CoCoOp and CoOp for 10 different tasks. Moreover, the harmonic mean for all these 10 tasks is greater than that of the current state-of-the-art model, which indicates that the learned prompts are more generalizable across domains and tasks. It is important to note that CoPL comfortably outperforms the manual prompt based methodology CLIP on 5 different tasks. On a relatively complex dataset like EuroSAT, where local features are hard to align due to the nature of satellite imagery, CoPL outperformed CoCoOp by 1.8% and CLIP by 32.8% in accuracy on the seen classes.
In Figure <ref>, we further analyse the absolute performance gain on the unseen classes over the CoOp and CoCoOp methodologies. Interestingly, when compared with CoOp, for the UCF101 dataset, CoPL improves the accuracy by 20.6%, highlighting the capability of the dynamic prompt vectors to learn semantic relatedness conditioned on the local image features. For the FGVCAircraft dataset, the absolute performance gains of 9.0% and 7.6% over CoOp and CoCoOp respectively highlight the efficacy of the model. The performance boost of 8.9% on the DTD dataset confirms the capability of CoPL in identifying different textures, opening up the possibility of deploying the model for design understanding. The average performance gain of 2.7% on the FGVCAircraft dataset over CLIP demonstrates the ability of CoPL to transfer the learnt knowledge to relatively complex unseen classes.
In general, while considering both seen and unseen dataset, CoPL outperforms CoCoOp by 2.7% in accuracy. At the same time, it achieves a gain of 6.8% on accuracy over manual prompt base methodology CLIP, suggesting that conditional alignment of prompts to local features of the image can generalize the model better to diverse recognition tasks. Ideally, the manually designed prompts should have been well aligned with the images, thus CLIP can be considered as the best performing model. It is interesting to note that CoPL, on an average across all 11 datasets, achieves state-of-the-art performance by outperforming CLIP by 1.4% in accuracy. This justifies our argument of making dynamic prompt vectors more contextualized like human annotated prompts by observing the different local features of the image.
§.§ Inter Dataset Zero-shot Performance
Here, we test the mettle of our approach on an even more challenging setting. We learn the model on a dataset, and evaluate the model on a different dataset to see how transferable the learned representations are.
Concretely, we train the model on
the Caltech101 dataset and evaluate its performance on the rest of the datasets.
Observe in Table <ref> that CoPL outperforms CoCoOp across most of the datasets. For a complex dataset like FGVCAircraft, CoPL gains 2.9% more accuracy without being explicitly trained on this dataset. As Caltech101 is a general-purpose object classification dataset, we observe impressive performance on the OxfordPets and Food101 datasets. Interestingly, though trained on an object classification dataset, CoPL is able to transfer the knowledge to the texture recognition task on the DTD dataset, outperforming CoCoOp by 2.5% in accuracy. On the most challenging EuroSAT dataset, which has a very different image set (consisting of images taken from satellites), CoPL achieves 58.1% accuracy, which is comparable to the performance of CoCoOp. This highlights that CoPL is able to learn the semantics of the images by identifying the localized features and aligning them with the learnable textual prompts.
§.§ One-shot Training
Next, we evaluate our approach in an extremely low data regime.
Here, we train CoPL and CoCoOp with 1 training instance per class for each of the image recognition tasks and test the accuracy on both seen and unseen classes within the dataset. In our experiments, we evaluate across 7 datasets, and discuss the performance on the seen and unseen classes next.
Seen Class In Figure <ref>, we observe that for most of the datasets, CoPL outperforms CoCoOp, indicating its capacity to adapt to the in-training datasets. It is interesting to see that, for the FGVCAircraft dataset, CoPL outperforms CoCoOp by a large margin (25.9% in accuracy).
Unseen Class The performance of CoPL on unseen classes, while only trained on 1-shot seen classes, is showcased in Figure <ref>. CoPL outperforms CoCoOp on 6 datasets. It is interesting to observe that, on the FGVCAircraft dataset, CoPL improves the accuracy from 4.9% to 28.5%.
These results indicate that the alignment of local image features to prompts helps to capture the semantic meaning of the images even when trained with only one training instance per class. This makes the model easily adaptable to a diverse set of image recognition tasks, even in low-resource scenarios.
§ DISCUSSIONS AND ANALYSIS
Local vs Global Image Features A key contribution of our research work is to utilize the localized image features for prompt learning.
Further, we give weightage to the prompts based on the semantic relatedness.
In this section, we critically analyse the contribution of using local features as opposed to global features.
To evaluate this, we conducted experiments on the Caltech101 and DTD datasets.
We selected these two dataset as they are primarily targeted for two different
recognition tasks: object recognition and texture recognition.
Similar to our earlier evaluation protocol, we evaluate the performance on the seen classes and unseen classes separately.
From Table <ref>, we can clearly see that for both seen and unseen classes, aligning local contextual features with prompts helps CoPL to generalize better.
Incremental Test Following the experimental analysis presented in CoCoOp <cit.>, we evaluate the model efficacy when the test dataset consists of both seen and unseen classes. In this case, during training the model weights are updated based on the seen classes only, without considering the unseen classes. Thus, during testing the model performs zero-shot classification on the unseen classes.
We observe in Table <ref> that CoPL comfortably outperforms all the baseline models, suggesting that aligning local image features with prompts helps the model generalize across all the classes within a dataset.
Run-time Analysis
We experimentally analyse and quantify the extra wall-clock time that is required for CoPL when compared to the closest baseline CoCoOp. We use Caltech101 for this experiment.
For training on 800 data points (corresponding to 50 classes), CoPL took 28 minutes, whereas evaluation on 1549 data points took 2 minutes 5 seconds. In comparison, CoCoOp takes 21 minutes 35 seconds for training and 1 minute 34 seconds for inference.
From this, we see that CoPL significantly improves generalization without adding much computational overhead.
§ LIMITATION
Our exhaustive experimental analysis across 11 datasets greatly helped us to test the mettle of our method. Our performance on the EuroSat dataset in the unseen-class category is lower than CoCoOp. While analysing the reasons for the drop in performance, we found that images from EuroSat do not contain salient objects.
We visualise some such examples in Fig. <ref>; it can be seen that there are no “local” regions in these images, which can contribute towards modelling the prompt better.
This results in lower performance of CoPL on such datasets.
§ CONCLUSION
In this paper, we present CoPL: Contextual Prompt Learning, which can align prompts to the corresponding contextual local image features. During alignment, we also produce a set of attention weights for the prompt vectors that are semantically related to local image regions. Extensive experimental evaluation on 11 image recognition datasets showcases the efficacy of the methodology in understanding the semantic relationship between the images and the prompts. Moreover, the state-of-the-art zero-shot and few-shot results justify our claim that CoPL generalizes better by aligning the local features of the images to prompts. The capability of outperforming CLIP in different settings justifies our argument for making dynamic prompt vectors comparable to human-annotated prompts, which opens up the floor to adapt the methodology to various downstream tasks. In future work, we plan to make CoPL capable of understanding user intents to make local edits on images.
ieee_fullname
|
http://arxiv.org/abs/2307.01074v1
|
20230703145441
|
Spectral convergence of the Dirac operator on typical hyperbolic surfaces of high genus
|
[
"Laura Monk",
"Rares Stan"
] |
math.SP
|
[
"math.SP"
] |
Spectral convergence of the Dirac operator
on typical hyperbolic surfaces of high genus
[1]School of Mathematics, University of Bristol, Bristol BS8 1UG, U.K.
[2]Institute of Mathematics of the Romanian Academy, Bucharest, Romania
[email protected]
[email protected]
2020 Mathematics Subject Classification: 58J50, 32G15
In this article, we study the Dirac spectrum of typical hyperbolic surfaces of finite area,
equipped with a nontrivial spin structure (so that the Dirac spectrum is discrete). For random
Weil–Petersson surfaces of large genus g with o(√(g)) cusps, we prove convergence of the
spectral density to the spectral density of the hyperbolic plane, with quantitative error
estimates. This result implies upper bounds on spectral counting functions and multiplicities, as
well as a uniform Weyl law, true for typical hyperbolic surfaces equipped with any nontrivial spin
structure.
Laura Monk1 Rareş Stan2
August 1, 2023
===========================
§ INTRODUCTION
§.§ Setting and motivation
The objective of this article is to provide information on the Dirac spectrum of typical hyperbolic
surfaces of genus g with k cusps, where g, k are non-negative integers such that 2g-2+k>0.
In order to do so, we equip the moduli space of hyperbolic surfaces of signature (g, k) with the
Weil–Petersson probability measure ℙ_g,k. This is a natural model to study typical
hyperbolic surfaces, as illustrated by the rich literature that has developed in the last few years
<cit.>. By
typical, we mean that we wish to prove properties true with probability going to one in a
certain asymptotic regime.
An example of interesting regime is the large-scale regime, i.e. the situation when g
and/or k go to infinity. Indeed, by the Gauss–Bonnet formula, the area of any hyperbolic surface
of signature (g,k) is 2π(2g-2+k). Since g and k are two independent parameters, we can a
priori expect typical surfaces of large genus or large number of cusps to exhibit different
geometric and spectral properties. This has been confirmed by the recent complementary works of Hide
<cit.> and Shen–Wu <cit.>, which prove very different behaviours for the Laplacian
spectrum depending on whether k ≪√(g) or k ≫√(g).
In this article, we fix a sequence of non-negative integers (k(g))_g ≥ 2 such that
k(g) = o(√(g)), i.e. k(g)/√(g)→ 0 as g → + ∞. This setting
is the genus-dominated regime, and we leave the cusp-dominated regime k(g) ≫√(g) to further
work. Under this hypothesis, the second author <cit.> and Le Masson–Sahlsten
<cit.> have proven that there exists a set 𝒜_g,k(g) of “good
hyperbolic surfaces” of signature (g,k(g)) such that:
* the Weil–Petersson probability of 𝒜_g,k(g) goes to one as
g → + ∞;
* the systole of any X ∈𝒜_g,k(g) is bounded below by g^- 1/24√(log(g));
* elements X ∈𝒜_g,k(g) are close to the hyperbolic plane in the sense of
Benjamini–Schramm, and more precisely, the proportion of points on X of injectivity radius
smaller than 1/6log(g) is at most g^-1/3.
We prove upper bounds and asymptotics for the Dirac spectrum of hyperbolic surfaces in
𝒜_g,k(g), which we shall now present.
§.§ Spectral convergence of the Dirac operator
For a hyperbolic surface X of signature (g,k) and a spin structure ε on X, we denote by D the Dirac operator on (X, ε).
While the Laplacian spectrum of a hyperbolic surface with cusps always contains essential spectrum,
equal to [1/4, + ∞), for any X, one can pick ε so that the spectrum of the
Dirac operator is discrete <cit.>. We call such spin structures nontrivial. In
that case, for 0 ≤ a ≤ b, we denote by N^|D|_(X,ε)(a,b) the number of eigenvalues in (a,b) of the absolute value |D| of the Dirac operator, once all rigid multiplicities are removed (see Section <ref>).
Our main result is the spectral convergence of the Dirac operator on (X,ε) to the Dirac operator on the hyperbolic plane, true for any typical hyperbolic surface X of high genus g with o(√(g)) cusps, and any nontrivial spin structure ε on X.
Let (k(g))_g ≥ 2 be a sequence of non-negative integers such that
k(g) = o(√(g)) as g → + ∞. There exists a constant
C>0 such that, for any 0≤ a ≤ b, any g ≥ 2, any
X ∈𝒜_g,k(g), and any nontrivial spin structure ε
on X, we have
N^|D|_(X,ε)(a,b) / vol(X) = 1/4π∫_a^b ξcoth(πξ) dξ + R(X,ε,a,b),
where the remainder satisfies:
- C (b+1)/√(log g) ≤ R(X,ε,a,b) ≤ C (b+1)/√(log g) ( 1 + √(log( 2 + (b - a) √(log g)))).
A similar statement holds for the Laplacian spectrum, by work of the first author <cit.> in the
compact case and Le Masson–Sahlsten <cit.> when k(g) ≪√(g). These results
have further been extended to twisted Laplacians by Gong <cit.> very recently. The proof is
similar to the proof in <cit.>, replacing the Selberg trace formula by a Dirac version from the
second author <cit.>.
Note that the support of the limiting measure is [0, + ∞), because the spectrum of |D| on the hyperbolic plane is [0, + ∞). This contrasts with the (twisted) Laplacian setting, where the limiting
spectral density is
λ↦ 1/4π tanh( π√(λ - 1/4)) 1_[1/4, + ∞)(λ), supported on [1/4, + ∞).
A remarkable aspect of Theorem <ref> is that the limit we obtain is
independent of the nontrivial spin structure ε. The reason for that is that the
probabilistic assumption we make, and more precisely the Benjamini–Schramm hypothesis, makes the
geometric term of trace formulae subdominant, i.e. the spectra of X converge to the spectra of
the hyperbolic plane, regardless of the precise geometry and spin structure on X.
§.§ Upper bounds and pathological surfaces
In the process of proving Theorem <ref>, we prove the following upper
bound on the Dirac spectrum of typical hyperbolic surfaces. Throughout this article, when we write
A = 𝒪(B), we mean that there exists a constant C>0 such that, for any choice of
parameters, |A| ≤ C B. We emphasize that the constant is allowed to depend on our choice of a
fixed sequence (k(g))_g ≥ 2. If the constant depends on a parameter p, e.g. the genus g,
we rather write A = 𝒪_p(B).
With the notations of Theorem <ref>,
N^|D|_(X,ε)(a,b) / vol(X) = 𝒪( (b+1)(b-a + 1/√(log g))).
Building on results of Bär on pinched surfaces <cit.>, we prove that such a bound cannot be
obtained for every spin hyperbolic surface, because there exist “pathological” examples
for which the Dirac operator is discrete with arbitrarily many eigenvalues close to 0.
Let (g,k) be integers such that 2g-2+k>0 and g ≥ 1. For any N, any η >0, there
exists a hyperbolic surface X of signature (g,k) and a nontrivial spin structure ε
on X such that N^|D|_(X,ε)(0,η) ≥ N.
This is another interesting difference between Laplacian and Dirac spectra. Indeed, the Laplacian
spectrum restricted to [0,1/4] is discrete, and the number of eigenvalues under 1/4 is at most
2g-2+k by work of Otal–Rosas <cit.>. This is a topological bound, in the sense
that it only depends on the topology of the hyperbolic surface. Proposition <ref>
proves that such a bound cannot exist in the Dirac setting, while Proposition
<ref> provides one true for any typical hyperbolic surface.
§.§ Applications
We deduce from Theorem <ref> a uniform version of the Weyl law for Dirac
operators on typical hyperbolic surfaces (uniform in the sense that the rate of convergence is
independent of the surface X ∈𝒜_g,k(g) and the nontrivial spin structure
ε).
For any g ≥ 2, any X ∈𝒜_g,k(g), any nontrivial spin structure ε
on X,
∀λ≥ 1, N^D_(X,ε) (0,λ)/Vol(X)
=
λ^2/8π
+ 𝒪_g ( λ√(log λ)).
The implied constant above only depends on the genus g, in a way that can be made explicit using
Theorem <ref>.
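Let us also record in passing where the main term comes from (a remark of ours, not needed elsewhere): since coth(y) = 1 + 2/(e^{2y}-1) for y>0,
1/4π∫_0^λ x coth(π x) dx = 1/4π∫_0^λ ( x + 2x/(e^{2π x}-1) ) dx = λ^2/8π + 𝒪(1),
because ∫_0^∞ 2x/(e^{2π x}-1) dx = 1/12 < ∞; the remaining error term is then controlled by Theorem <ref>.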
Taking a shrinking interval of size 1/√(log(g)) above λ, we deduce from Proposition
<ref> the following topological bound on the multiplicity
mult_(X,ε)(λ) of any Dirac eigenvalue λ.
For any g ≥ 2, any X ∈𝒜_g,k(g), any λ≥ 0,
mult_(X,ε)(λ)/Vol(X)
= 𝒪((λ+1)/√(log(g))).
§.§ Acknowledgements
The authors would like to thank Sergiu Moroianu for valuable discussion and comments. This research
was funded by the EPSRC grant EP/W007010/1 “Spectral statistics for random hyperbolic
surfaces”. The second author was partially supported from the project PN-III-P4-ID-PCE-2020-0794,
financed by UEFISCDI.
§ PRELIMINARIES
§.§ Spin structures and the Dirac operator
In this subsection we briefly describe spin structures and introduce the Dirac
operator. For more details, we refer the reader to <cit.> and
<cit.>.
Let n be an even integer, and let us denote by Cl_n the Clifford algebra associated to ℝ^n
with the standard scalar product. The subgroup Spin(n) ⊂ Cl_n consists of all even
products of unit vectors. It can be shown that Spin(n) is a connected, two-sheeted covering of
SO(n), the group of orthogonal n× n matrices of determinant 1. Consider {e_1,...,e_n} the
standard basis in ℝ^n, and denote by J the standard almost complex structure. The
representation:
cl (v) : = 1/√(2)(v-iJv)∧ (·) - 1/√(2)(v+iJv)⌟ (·),
where ⌟ is obtained from the ℂ-bilinear extension of the standard
scalar product, acts on Σ_n := ∧^*W, where W ⊂ℂ^n is
generated by
{1/√(2)(e_1-ie_2),...,1/√(2)(e_n-1-ie_n)}.
One can check that this representation extends to the complexified Clifford
algebra
cl: Cl_n⊗_ℝℂ⟶ End_ℂ(Σ_n).
Consider X an n-dimensional oriented manifold with a Riemannian metric. A
spin structure on X is a principal Spin(n) bundle P_Spin(n)X
covering P_SO(n)X, the principal bundle of oriented orthonormal frames,
with two sheets. Moreover, this covering must be compatible with the group
covering Spin(n) ⟶ SO(n). Once we have a fixed spin structure,
we can define the spinor bundle as the associated vector bundle
S:=P_Spin(n) X×_clΣ_n. The Dirac operator is defined as
follows:
D: C^∞(S) ⟶ C^∞ (S), D := cl ∘∇,
where ∇ is the connection on S induced by the Levi-Civita connection on
X. One can see that D is an elliptic, self-adjoint differential
operator of order 1.
It is known that orientable, complete hyperbolic surfaces of finite area always admit spin
structures. From now on, we restrict our attention to this type of surface. Let
X = Γ∖ℍ, where Γ is a subgroup of PSL_2(ℝ) without elliptic elements for which the
associated surface is of finite area. We denote as π the standard projection
π : SL_2(ℝ)⟶ PSL_2(ℝ). We can reinterpret a spin structure on X as a left
splitting in the following short exact sequence:
1 ⟶{± 1 }⟶Γ̃ := π^-1(Γ) ⟶Γ⟶ 1,
i.e. a morphism χ:Γ̃⟶{± 1} for which
χ∘ι = 𝕀_{± 1 }, where ι is the natural inclusion.
We define ε:Γ⟶{± 1 } by setting
ε(γ) := χ(γ̃), where γ̃∈ SL_2(ℝ) is the unique lift
with positive trace of γ∈Γ. This function is a class function, i.e. constant along
conjugacy classes. More details about the class function associated with a spin structure can be
found in <cit.>. Further detail is also provided on the identifications between the
frame bundle P_SO(2) and the group of isometries PSL_2(ℝ), and between the spin bundle
P_Spin(2) and the group SL_2(ℝ). If we consider ρ, a right splitting in the
above short exact sequence (which is uniquely determined by the left splitting χ), we can
define the action of a γ∈Γ on P_Spin(2) by left multiplication with ρ(γ). It
can easily be seen that this action descends to the spinor bundle S.
§.§ The spectrum of Dirac operators
The aim of this article is to study the spectrum of the Dirac operator acting on
a typical hyperbolic surface X of finite area.
§.§.§ Cusps and nontrivial spin structures
The spectrum of the Dirac operator is always discrete when X is
compact. Remarkably, when X admits some cusps, the spectrum can either be
discrete or the real line ℝ, depending on the spin structure. More
precisely, Bär showed in <cit.> that the spectrum of the Dirac
operator is discrete if and only if the spin structure is nontrivial along each
cusp of X. It is shown in <cit.> that this is equivalent to
assuming that ε(γ) = -1 for any primitive parabolic element
γ∈Γ.
Note that Bär further proved in <cit.> that any finite area
hyperbolic surface admits at least one spin structure such that the spectrum is
discrete.
§.§.§ Multiplicities and counting functions
As explained in <cit.>, the spectrum of the Dirac operator admits two rigid sources
of multiplicity: the chiral symmetry and the time-reversal symmetry. It follows that
the spectrum of D is symmetric about 0 (i.e. if λ∈ℝ is an eigenvalue then
-λ is an eigenvalue), and every eigenvalue has even multiplicity. We conclude that the
multiplicity of every eigenvalue of D is a multiple of 4.
In order to avoid counting every eigenvalue exactly four times, we shall study the reduced
spectrum (λ_j)_j ≥ 0, where we let λ_j := Λ_4j for
(Λ_j)_j ≥ 0 the ordered non-negative spectrum of D (with multiplicities). We then define, for
0 ≤ a ≤ b, the counting function
N^D_(X,ε)(a,b)
:= #{j ≥ 0 : a ≤λ_j ≤ b}
= 1/4 #{eigenvalues of D in [a,b] }.
§.§.§ Pathological examples of Dirac spectra
Let us now prove Proposition <ref>, which claims that there is no topological
bound on the number of eigenvalues in [0, η], provided g >0. The proof relies on the two
following results, proven by Bär in <cit.>.
Let X be a finite area hyperbolic surface. For any simple non-separating closed geodesic
γ on X, there exist two nontrivial spin structures ε_± on X such that
ε_±(γ) = ± 1.
We precise that, in the previous statement, the geodesic γ is simple if it has no
self-intersection, and non-separating if the surface X ∖γ obtained by cutting
X along γ is connected. In other words, this lemma tells us that, if γ is
non-separating, then we can pick the value of a nontrivial spin structure at γ freely.
This is a direct consequence of the discussion in <cit.>.
Let X be a finite area hyperbolic surface equipped with a nontrivial spin structure
ε. Let γ be a simple non-separating geodesic on X such that
ε(γ) = +1. Let (X_n)_n ≥ 1 be a sequence of finite area hyperbolic
surfaces obtained from X by pinching the geodesic γ so that its length goes to 0 as
n →∞. Then, for any η > 0,
N^_(X_n,ε)(0,η)
= - η/πlog (ℓ_X_n(γ)) + 𝒪_η(1).
The pinching procedure mentioned above is a classic way to construct pathological examples in
hyperbolic geometry, and described in more detail in <cit.>. Lemma <ref>
quantifies how the Dirac spectrum of X_n converges to the Dirac spectrum of the limit X_∞
of (X_n)_n as n →∞. There is an accumulation process because, on top of the
nontrivial cusps of X, X_∞ has two new cusps with a trivial spin structure (coming from
the pinched geodesic), and hence the Dirac spectrum of (X_∞,ε) is .
This is a trivial adaptation of <cit.> when X is a surface of finite area
equipped with a nontrivial spin structure, rather than a closed surface. No changes are required
in the proof.
We are now ready to prove Proposition <ref>.
Let X be an arbitrary hyperbolic surface of signature (g,k). The genus of X is nonzero,
and hence there exists a simple non-separating geodesic γ on X. By
Lemma <ref>, since γ is non-separating, there exists a nontrivial spin
structure ε on X such that ε(γ) = +1. Then, we define a sequence of
metrics (X_n)_n ≥ 1 as in Lemma <ref> by pinching the geodesic γ so that
its length goes to 0 as n → + ∞. By Lemma <ref>, since
ε(γ)=+1,
N^D_(X_n,ε)(0,η)
= - η/π log (ℓ_X_n(γ)) + 𝒪_η(1)
⟶ + ∞ as n → + ∞
for any fixed η >0. In particular we can pick an n such that
N^D_(X_n,ε)(0,η) ≥ N. Then, X_n satisfies our claim.
§.§.§ The Selberg trace formula for Dirac operators
Our main tool to study the counting function N^_(X,ε)(a,b)
is the Selberg trace formula for the Dirac operator on compact hyperbolic
surfaces, developed by Bolte and Stiepan in <cit.> and generalised to
hyperbolic surfaces of finite area by the second author <cit.>. This formula relates the Dirac spectrum of a finite area hyperbolic
surface X to its length spectrum, i.e. the list of the lengths of all closed
geodesics on X, under the condition that the spin structure is nontrivial.
In this article, following the line of <cit.>, we will use the
following pretrace formula, adapted from <cit.>.
Consider X=Γ∖ℍ a hyperbolic surface with k cusps,
equipped with a nontrivial spin structure ε. Let
(λ_j)_j∈ℕ denote the reduced spectrum of D.
Then, for any admissible test function h,
∑_j=0^∞ h(λ_j) = Vol(X)/8π∫_ℝ
h(λ) λ coth(πλ) dλ - log(2)/2 k ȟ(0)
+ 1/2∑_γ≠ 1∫_𝒟 ε(γ) K(d(z, γz)) τ_z↦γ^-1z dz
where the sum is taken over all hyperbolic elements γ in Γ and:
* the set 𝒟 is a fundamental domain of X = Γ∖ℍ;
* τ_z↦ w = - i (z-w)/|z-w|
is the parallel transport of spinors from z to w with respect to ∇;
* the kernel K can be expressed as:
K(r)
= -cosh( r/2)/(π√(2)) ∫_r^∞ [ ȟ'(ρ) - (1/2) ȟ(ρ) tanh( ρ/2) ] / [ cosh( ρ/2) √(cosh (ρ) - cosh (r)) ] dρ
where ȟ is the inverse Fourier transform of h, i.e.
ȟ(ρ) = 1/2π∫_ℝ h(λ) e^{iλρ} dλ.
In the above statement, by admissible, we mean that there exists
η > 0 such that h is an even holomorphic function defined
on the strip {z = x+iy : |y| ≤ 1/2 + η}
which satisfies |h(z)| ≤ C (|z|^2+1)^-1-η for a
constant C>0, as in <cit.>.
In <cit.>, the second author proved that,
under the hypotheses of the theorem, for any
ϕ∈ C^∞_c(), if we set
w(x):=ϕ(x)/√(x+4),
and ȟ(x):=4cosh(x2)
∫_0^∞
w( 4 sinh^2 (x2)+y^2 )ỵ,
then (<ref>) holds with the kernel
K(r) := ϕ(4 sinh^2 (r/2)). As a consequence, in order to
conclude, all that we have to do is to associate a function ϕ
to our test function h, and hence express the kernel K in
terms of h. To do so, we shall consider the following operators
, acting on the set 𝒮([0,∞]) of Schwartz
functions on [0,∞]:
φ (x) :=∫_0^∞φ(x+y^2) ỵ,
ψ (x) :=-4/π∫_0^∞ψ '(x+y^2) ỵ.
By direct computations, one can easily see that ∘=1. Indeed:
(∘)ψ (x)
=-4/π∫_0^∞∫_0^∞ψ'(x+y^2+z^2)
ẓỵ
= -4/π∫_[0,π2]×[0,∞)ψ'(x+r^2)r θ̣ṛ
=-2∫_0^∞1/2∂ψ(x+r^2)/∂ rṛ = ψ(x).
Writing the identity in such a way allows us to compute the kernel
in terms of ȟ. On the one hand, by the expression of
ȟ in equation (<ref>), we get that:
w ∘(4 sinh^2( ·2))
= ȟ/4cosh( ·2)
thus, it follows that:
4sinh( ρ2)
cosh( ρ2)
( w)' (4 sinh^2( ρ2))
=
ȟ'(ρ) cosh( ρ/2)-1/2ȟ(ρ) sinh( ρ/2)/4cosh^2 ( ρ/2).
On the other hand, by definition of K and w, since √(4
sinh^2( r 2)+4)=2 cosh ( r 2),
K(r) = ϕ(4 sinh^2( r2 ))
= 2 cosh(r2)
w(4 sinh^2( r2 ))
= - 8/πcosh(r2)
∫_0^∞( w)' (
4 sinh^2( r2 )+y^2 )ỵ
because ∘ = 1. We then perform the change of variable
4 sinh^2 (ρ 2) = 4 sinh^2( r 2)+y^2 and obtain
the claimed expression thanks to (<ref>) and the fact
that
y = √(4sinh^2 (ρ 2) - 4sinh^2 ( r 2))
= √(2 (cosh(ρ) - cosh(r))).
§.§ Random hyperbolic surfaces
In this article, we study the properties of random hyperbolic surfaces sampled with the
Weil–Petersson probability measure. Let us provide the key elements that are necessary for the
reading of this article – thorough presentations of this probabilistic model are provided in
<cit.>.
Let g, k be integers such that 2g-2+k>0. Our sample space is the
moduli space
ℳ_g,k := {hyperbolic surfaces of signature (g,k) }╱isometries.
This space is an orbifold of dimension 6g-6+2k. Weil introduced in <cit.> a natural
symplectic structure on ℳ_g,k, called the Weil–Petersson form. It induces a
volume form of finite volume, which can be renormalised to obtain a probability measure
ℙ_g,k on the moduli space ℳ_g,k.
Our objective is to describe “typical behaviour”, i.e. we will focus on
proving properties true with probability going to one in a certain asymptotic
regime. More precisely, for our fixed sequence (k(g))_g ≥ 2, we will say a
property is true with high probability in the large genus limit if
lim_g → + ∞ℙ_g,k(g)(X ∈ℳ_g,k(g) satisfies the property) = 1.
The following result states two key geometric properties true with high
probability which we will use in this article.
Let (k(g))_g ≥ 2 be a sequence of non-negative integers such that
k(g) = o(√(g)) as g → + ∞. Then, for all g≥ 2,
there exists a subset 𝒜_g,k(g) of the moduli space
ℳ_g,k(g) of probability 1 - 𝒪(log(g) g^-1/12) such that
any surface X ∈𝒜_g, k(g) satisfies the following.
* If X^- (L) is the L-thin part of X, i.e. the set of points in X
with radius of injectivity shorter than L, then
( X^- ( 1/6log g) )/ (X)≤ g^-1/3.
* The systole of X (i.e. its shortest closed geodesic) is longer than
g^-1/24√(log g).
The first point was proven by the first author in <cit.>, and the second by Le Masson and Sahlsten in <cit.>.
§ PLAN OF THE PROOF AND FIRST ESTIMATES
In this section, we set up some notations in order to prove
Theorem <ref>, following the lines of <cit.>,
and prove first easy estimates.
§.§ The family of test functions
A key step of the proof is to construct a family of test functions such that the
spectral side of the Selberg trace formula is a good approximation of the
counting number N^_(X,ε) (a,b), for 0 ≤ a ≤ b. Our choice of test
function is a straightforward adaptation to the choice made in <cit.>.
For t>0, a parameter which will grow like √(log g), consider the family
of test functions h_t:ℝ⟶ℝ defined by the convolution
h_t(λ)
:= (1_[a,b]⋆ v_t)(λ)
= t/√(π)∫_a^b exp(-t^2(λ-ρ)^2) dρ,
where 1_[a,b] is the indicator of the segment [a, b]
and v_t(x):=t/√(π) exp(-t^2x^2) is the centred Gaussian density of
variance 1/(2t^2). One can easily see that h_t is
holomorphic. Since it is not even, we will rather apply Proposition
<ref> to the function H_t(λ) := h_t(λ)+h_t(-λ).
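Concretely, the convolution has the closed form h_t(λ) = ( erf(t(b-λ)) - erf(t(a-λ)) )/2. The following short Python sketch (our own illustration, with arbitrary sample values, playing no role in the proofs) checks numerically that h_t approaches the indicator of [a,b] as t grows.

import math

def h_t(lam, a, b, t):
    # h_t = indicator of [a, b] convolved with the Gaussian v_t:
    # h_t(lam) = (t / sqrt(pi)) * integral_a^b exp(-t^2 (lam - rho)^2) d rho
    #          = ( erf(t*(b - lam)) - erf(t*(a - lam)) ) / 2
    return 0.5 * (math.erf(t * (b - lam)) - math.erf(t * (a - lam)))

a, b = 1.0, 3.0                      # an arbitrary spectral window
for t in (1.0, 5.0, 25.0):           # t plays the role of sqrt(log g)/(4 sqrt(3))
    print(t, [round(h_t(x, a, b, t), 4) for x in (0.5, 1.0, 2.0, 3.0, 3.5)])
# As t grows, the printed values approach 0, 1/2, 1, 1/2, 0, i.e. the modified
# indicator function described in point (3) of the lemma below.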
Let us present elementary properties of the functions h_t and g_t := Ȟ_t proven in
<cit.>, which will be useful to the proof of Theorem <ref>.
Let 0 ≤ a ≤ b and t > 0.
* The function H_t is admissible.
*
For any t>0, and for any u>0 we have:
|g_t(u)|
≤2/π uexp( -u^2/4t^2)
|g_t'(u)|
≤( 1/π t^2
+ 4b/π u)
exp( -u^2/4t^2).
* As t → + ∞, h_t converges to the function
λ↦1̃_[a,b](λ) which coincides with 1_[a,b]
except at λ = a and b where it is equal to 1/2. More precisely, for λ∈ℝ,
|h_t(λ)-1̃_[a,b](λ)|≤
s(t|λ-a|)
if λ∈ (-∞, a) ∪{ b }
s(t|λ-a|)+s(t|λ-b|)
if λ∈ (a,b)
s(t|λ-b|)
if λ∈{ a }∪ ( b, ∞)
where s:(0,∞)⟶ℝ is the non-increasing
function s(ρ):=exp(-ρ^2)/(2√(π)ρ).
These three points are respectively Lemma 10, 13 and 21 from
<cit.>.
§.§ Plan of the proof
In order to prove our main result, we apply Theorem <ref> to the family of
functions H_t, of kernels K_t, and obtain that for any hyperbolic surface X of signature
(g,k),
1/Vol(X) ∑_j=0^∞ H_t(λ_j)
=
1/8π∫_ℝ H_t(λ) λ coth(πλ) dλ -
k log(2)/(2 Vol(X)) g_t(0)
+
1/(2 Vol(X)) ∑_γ≠ 1∫_𝒟 ε(γ) K_t(d(z, γz)) τ_z↦γ^-1z dz.
The left hand side of this formula is an approximation of the ratio
N^_(X,ε)(a,b)/(X), which we wish to estimate. Thus, we
shall study the right hand side, term by term.
* In Section <ref>, we bound the difference between the integral term
I(t,a,b) := 1/8π∫_ℝ H_t(λ) λ coth(πλ) dλ
and the integral that appears in Theorem <ref>.
* Then, in Section <ref>, we prove an easy bound on the cuspidal term
C(X,t) := - k log(2)/(2 Vol(X)) g_t(0).
* Section <ref> is dedicated to bounding the kernel term,
R_K(X,ε,t,a,b)
:= 1/(2 Vol(X)) ∑_γ≠ 1∫_𝒟 ε(γ) K_t(d(z, γz)) τ_z↦γ^-1z dz
which is the most difficult part of the analysis of the trace formula, where the probabilistic
assumption on X ∈𝒜_g,k(g) is necessary.
We then conclude to the proof of Theorem
<ref> in Section <ref>, where
we compare the left hand side of (<ref>) with the
rescaled number of -eigenvalues between a and b.
§.§ Asymptotic of the integral term
Let us prove the following result, which bounds the difference between the
integral I(t,a,b) and integral appearing in our claim.
For any t>0,
| I(t,a,b) - 1/4π∫_a^b λ coth(πλ) dλ |
≤ (b + 1)/(2t) + 1/(2t^2).
We start by rewriting I(t,a,b) in a more convenient form. First we use the
parity of H_t, then we write H_t(λ) = h_t(λ)+h_t(-λ), to obtain
I(t,a,b)
=
1/8π∫_ℝ H_t(λ) λ coth(πλ) dλ = 1/4π∫_ℝ h_t(λ) λ coth(πλ) dλ
= 1/4π∫_ℝ ( h_t(λ) - 1̃_[a,b](λ) ) λ coth(πλ) dλ + 1/4π∫_a^b λ coth(πλ) dλ.
We shall use Lemma <ref>.(<ref>) to bound the difference
|h_t() - 1̃_[a,b]()| appearing in the equation above. Since the
function s from this bound has a pole at 0, we shall use different estimates near =a and
=b. Thus we write the real line as a union of (up to) five intervals:
=
( -∞,a-1/t]
∪[ a-1/t,a+1/t]
∪[ a+1/t,b-1/t]
∪[ b-1/t,b+1/t]
∪[ b+1/t,∞).
If a + 1/t > b -1/t then the interval in the middle
is omitted from the union. The integral splits accordingly into five parts,
denoted I_j, for 1≤ j ≤ 5:
|I(t,a,b) - 1/4π∫_a^b λ coth(πλ) dλ|
≤ 1/4π∫_ℝ | h_t(λ) -1̃_[a,b] (λ) | λ coth(πλ) dλ =: 1/4π∑_j=1^5 I_j.
We start with I_1, I_3 and I_5. Throughout the computations, we will use the
fact that:
∀λ∈ℝ, 0 ≤λ coth(πλ) ≤ |λ|+1/π,
which follows from coth(y) ≤ 1 + 1/y for y>0. Equation (<ref>) and Lemma <ref>.(<ref>) for λ < a
together imply
I_1
≤∫_-∞^a-1/texp( -t^2(-a)^2 )/2√(π) t (a-)( ||+1/π)= ∫_1^∞e^-x^2/2√(π)x( | a - x/t| +1/π)x̣/t
by the change of variable x = t (a - ) ∈ (1, + ∞). Then,
I_1
≤1/2√(π)∫_1^∞ e^-x^2( a +1 + x/t)x̣/t≤a + 1 /4t + 1/4 t^2.
With similar approximations one can also prove that:
I_3
≤a+b +2/4t + 1/2t^2 and
I_5
≤b + 1/4t + 1/4t^2.
For I_2 and I_4 we rather use the loose bound
| h_t() -1̃_[a,b] () | ≤ 1, which
yields
I_2
≤∫_a -1/t^a +1/t( ||+1/π)≤∫_a -1/t^a +1/t( a +1 + 1/t) ≤2a +2/t + 2/t^2.
In the same way we also get I_4 ≤2b +2/t + 2/t^2.
Combining those bounds and using a ≤ b, we obtain
I_1+I_3+I_5
≤b + 1/t + 1/t^2 and
I_2+I_4
≤4b + 4/t + 4/t^2
which allows us to conclude using equation (<ref>) and 5/(4π) < 1/2.
§.§ Bound on the cusp contribution
Let us now prove the following.
For any X of signature (g,k) and any t > 0,
|C(X,t)| ≤k/2g-2+k (b-a).
We note that, in Theorem <ref>, we are placed in the
regime k = k(g) = o(√(g)). Then, Proposition <ref> implies
C(X,t)
= o ( b/√(g)).
By definition, the cusp term is
C(X,t) = - k log(2)/2 (X) g_t(0)
= - k log(2)/8π (2g-2+k) g_t(0)
by the Gauss–Bonnet theorem. We therefore simply have to estimate
g_t(0) = 1/2 π∫_ H_t(). We observe that
∫_h_t()=
t/√(π)∫_∫_a^bexp(-t^2(-ρ)^2 )ρ̣
=
t/√(π)∫_a^b∫_exp(-t^2x^2 )x̣ρ̣
=b - a.
Similarly we have that ∫_h_t(-)= b - a. Hence,
g_t(0)= (b- a) / π, which leads to the claim.
§ BOUND OF THE KERNEL TERM
In what follows we show a bound on the kernel term of the trace formula,
R_K(X,ε,t,a,b)
:= 1/(2 Vol(X)) ∑_γ≠ 1∫_𝒟 ε(γ) K_t(d(z, γz)) τ_z↦γ^-1z dz
where the summation runs over hyperbolic elements γ of the group Γ which are
not the identity, and 𝒟 is a fundamental domain of X = Γ∖ℍ.
The steps of the kernel bound are as follows.
* First, in Section <ref>, we prove an upper bound on
the values K_t of the kernel appearing in R_K.
* We prove a classic counting bound on hyperbolic elements of in
Section <ref>, in order to deal with the summation.
* We then cut the fundamental domain in a thick and thin part,
^±(L), in Section <ref>. We bound the integrals over these two sets
separately.
* Finally, in Section <ref>, we conclude to a
quantitative probabilistic bound on R_K, using the probabilistic assumption
from Theorem <ref>, and in particular the Benjamini–Schramm hypothesis.
§.§ Kernel estimate
Let us prove the following bound on K_t.
For any ρ, t >0 we have:
|K_t(ρ)| ≤( 1/t^2 + (b + 1)/ρ)
(1 + t/ρ)
exp(-ρ^2/(4t^2)).
We start from the kernel formula, equation (<ref>), and make use of the inequalities
on g_t and g_t' obtained in Lemma
<ref>.(<ref>). More precisely,
|K_t(ρ)|
= cosh( ρ2)/2π√(2)|∫_ρ^∞2 g_t'(u)-
g_t(u)
tanh( u2)/cosh( u2)
√(cosh (u) - cosh (ρ))ụ|
≤1/2π√(2)∫_ρ^∞2 |g_t'(u)| + |g_t(u)|/√(cosh (u) - cosh (ρ))ụ
by the triangle inequality. By Lemma <ref>.(<ref>),
2 |g_t'(u)| + |g_t(u)|
≤2/π( 1/t^2 +4b+1/u)
exp(- u^2/4t^2)
and hence
|K_t(ρ)|
≤1/π^2 √(2)( 1/ t^2 + 4b + 1/ρ)
∫_ρ^∞exp(-u^24t^2) /√(cosh u - coshρ)ụ.
We then proceed with the splitting of the integral at u=2ρ. If
u∈[ρ, 2ρ], then
cosh u - coshρ≥ (u-ρ)sinhρ≥ (u-ρ)ρ. Hence:
∫_ρ^2ρexp( -u^2/4t^2) /√(cosh u - coshρ)ụ≤exp(-ρ^2/4t^2)∫_ρ^2ρụ/√((u-ρ)ρ)
= 2 exp(-ρ^2/4t^2).
In the other case, if u∈ [2ρ, ∞), we can deduce that
cosh u - coshρ≥1/2(u^2-ρ ^2)≥3/2ρ^2. It
follows that:
∫_2ρ^∞exp( -u^24t^2) /√(cosh u - coshρ)ụ ≤√(2)/√(3) ρ∫_2ρ^∞exp(-u^2/4t^2) ụ
≤√(2)/√(3) ρ∫_0^∞exp(-u^2/4t^2 - ρ^2/t^2) ụ
= √(2 π) t/√(3)ρexp(- ρ^2/t^2).
Finally, putting everything together, we obtain:
|K_t(ρ)| ≤1/π^2 √(2)( 1/ t^2 + 4b + 1/ρ)
( 2 + √(2 π) t/√(3)ρ)
exp(-ρ^2/4t^2)
which implies our claim.
§.§ Bound on the number of hyperbolic elements
We shall use the following classic bound in order to control the summations over
hyperbolic elements of .
Let r ≤ 2 be a positive number and let X = Γ∖ℍ
be a hyperbolic surface whose systole is larger than 2r. Then,
for any j>0, any z ∈ℍ,
#{γ∈Γ∖{1} : γ hyperbolic,
d(z, γz) ≤ j }≤ 4e^{1+j}/r^2.
Choose a point z∈ℍ. The family of disks centred at the points γz
and of radius r/2, for hyperbolic elements γ∈Γ, are
disjoint. Since the area of a hyperbolic disk of radius R is 2π(cosh(R)-1), comparing areas shows that the number of hyperbolic elements γ
for which d(z, γz)≤ j must be smaller than:
(cosh( j+ r/2)-1)/(cosh(r/2)-1) ≤ e^{j+r/2}/(r^2/4) ≤ 4e^{1+j}/r^2.
§.§ Thin-thick decomposition of the fundamental domain
For a positive real number L, we decompose the fundamental domain 𝒟 of
X = Γ∖ℍ as a disjoint union of two sets 𝒟^-(L) and 𝒟^+(L), the points of 𝒟
with injectivity radius smaller than L and larger than L respectively. Splitting the
integral in the sum R_K into two integrals, on those two sets, we can rewrite our sum as:
R_K(X,ε,t,a,b)=R^-_K(X,ε,t,a,b,L)+R^+_K(X,ε,t,a,b,L).
We shall start by bounding the contribution given by integration on ^+(L),
using the fact that all points on ^+(L) have an injectivity radius larger
than L.
Let t > 0 and 0 < r ≤ 2. Suppose that X=∖ is a
hyperbolic surface whose systole is larger than 2r. If L is a real number
such that L ≥ 8t^2, then:
|R^+_K(X,ε,t,a,b,L)|
≤4e/r^2( 1/ t^2 + b + 1/r)
(1 + t/r)
exp( - L ).
By definition of ^+(L), for z∈^+(L), the sum defining
R^+_K contains no elements such that d(z, z) < L. Thus,
we can write:
R^+_K(X,ε,t,a,b,L)
=
1/2(X)∫_^+(L)∑_j= ⌊ L ⌋^∞∑_≠ 1
j≤ d(z, z)<j+1ε() K_t(z, z) τ_z↦^-1zẓ.
When bounding the quantity above by the triangular inequality, we note that
|ε()|=|τ_z↦^-1z|=1, which allows to
safely ignore these terms. Moreover, notice that the distance between z and
z is always larger than r, the injectivity radius. Thus,
Proposition <ref> together with Lemma <ref>
imply:
|R^+_K(X,ε,t,a,b,L)|
≤2e/r^2( 1/ t^2 + b + 1/r)
(1 + t/r)
(^+(L))/ (X)∑_j= ⌊ L ⌋^∞exp( j - j^2/4t^2).
First, we note that (^+(L)) ≤(X). We then observe that,
provided L≥ 8t^2, we have j≤j^2/8t^2 and hence by
comparison of the sum with an integral,
∑_j= ⌊ L ⌋^∞exp( j - j^2/4t^2)
≤∑_j= ⌊ L ⌋^∞exp( -j^2/8t^2)
≤exp(- L^2/8t^2)
+ ∫_L^∞exp( - x^2/8t^2)x̣
≤exp(- L^2/8t^2)
+ 1/L∫_L^∞xexp( - x^2/8t^2)x̣
= ( 1+ 4t^2/L)exp( - L^2/8t^2)
which is bounded by 2exp(-L) as soon as L ≥ 8 t^2, thus implying our claim.
We now bound the contribution of ^-(L).
With the notations of Lemma <ref>,
|R^-_K(X,ε,t,a,b,L)|
≤4 e/r^2( 1/ t^2 + b + 1/r)
(1 + t/r)
(^-(L))/ (X)(1+ L e^L ).
We observe that, due to the fact that the injectivity radius on ^-(L) is not
bounded below by L ≫ 1, we do not obtain an exponential decay like e^-L
in Lemma <ref>. However, the ratio (^-(L)) / (X)
will decay under the Benjamini–Schramm hypothesis.
As before, we combine Proposition <ref> together with Lemma
<ref> to obtain:
|R^-_K(X,ε,t,a,b,L)|
≤2e/r^2( 1/ t^2 + b + 1/r)
(1 + t/r)
(^-(L))/ (X)∑_j≥ 0exp( j - j^2/4t^2).
To deal with the last sum, we split it at ⌊ 8t^2 ⌋
+1. Proceeding as in Lemma <ref>, we deduce:
∑_j≥⌊ 8t^2 ⌋ +1exp( j - j^2/4t^2)
≤( 1+ 4t^2/⌊ 8t^2 ⌋ +1)
exp( - (⌊ 8t^2 ⌋ +1)^2/8t^2) ≤ 2.
For remaining indices we just bound naively:
∑_j=0^⌊ 8t^2 ⌋exp( j - j^2/4t^2)
≤∑_j=0^⌊ 8t^2 ⌋exp( j )
≤
8t^2exp(8t^2)
≤
L exp(L)
which leads to the claim.
§.§ Probabilistic kernel estimate
The last step of this section is to use our probabilistic hypotheses, presented
in Section <ref>, to bound the kernel term. We prove the following.
Let 0≤ a ≤ b, g ≥ 2, and set
t := 1/4√(3)√(log g). Then for any
X∈𝒜_g,k(g),
R_K(X,ε,t,a,b) = 𝒪(b+1/√(log g )).
Let us apply Lemmas <ref> and <ref> with the
parameters
t := 1/4√(3)√(log g)
L := 8 t^2 = 1/6log(g)
r := √(log g)/2 g^1/24
to a surface X ∈𝒜_g,k(g). By definition of the set
𝒜_g,k(g), the systole of X is bounded below by 2r.
We observe that, for our choices of parameters,
1/r^2( 1/ t^2 + b + 1/r)
(1 + t/r)
= 𝒪((b+1)t/r^4)
= 𝒪(b+1/g^- 1/6 (log
g)^3/2).
As a consequence, Lemma <ref> implies
R^+_K(X,ε,t,a,b,L)
= 𝒪(
b+1/g^- 1/6(log g)^3/2 e^-L)
= 𝒪(
b+1/(log g)^3/2).
Furthermore, by definition of 𝒜_g,k(g),
(^-(L))/(X)≤ g^- 1/3.
Lemma <ref> then implies
R^-_K(X,ε,t,a,b,L)
= 𝒪(
b+1/g^- 1/6 (log g)^3/2 g^- 1/3
L e^L)
= 𝒪( b+1/√(log g))
which allows us to conclude because R_K = R_K^+ + R_K^-.
§ ESTIMATES FOR THE NUMBER OF EIGENVALUES
We are finally able to prove the main theorem. Throughout this section, we will
take the parameter t to be equal to √(log g)/4√(3), for a
large genus g. Combining results from the last three subsections we obtain the
following:
Let g ≥ 2, t = √(log g)/4√(3), 0≤ a ≤ b. For any
X∈𝒜_g,k(g), any nontrivial spin structure ε on X,
1/Vol(X) ∑_j=0^∞ (h_t(λ_j)+ h_t(-λ_j))
=
1/4π∫_a^b λ coth(πλ) dλ + 𝒪(b+1/√(log g)).
Rewrite formula (<ref>) using Proposition
<ref>, <ref> and Remark
<ref>.
From this we easily deduce Proposition <ref>, the upper bound on the
counting function N^_(X,ε) (a,b) defined in Section <ref>.
First, we use the bound 0 ≤λ coth(πλ) ≤λ + 1/π for λ>0 to obtain
∫_a^b λ coth(πλ) dλ ≤∫_a^b ( λ + 1/π) dλ = 𝒪( b^2-a^2 + b-a)
= 𝒪( (b+1) (b-a))
because b^2-a^2 = (b-a)(b+a) and a ≤ b.
Then, Lemma <ref> implies
1/ (X)∑_j=0^∞ (h_t(_j)+ h_t(-_j))
=
𝒪( (b+1) ( b-a + 1/√(log g)) ).
We then observe that
N^_(X,ε) (a,b) inf_∈ [a,b] h_t()
≤∑_j=0^∞ (h_t(_j)+ h_t(-_j))
by positivity of h_t, and because N^_(X,ε) counts the
number of indices j such that a ≤_j ≤ b by definition.
Moreover, the restriction of the function h_t to [a,b] attains its infimum
at both endpoints a and b. Then, we consider the two following regimes.
* If t(b - a)≥ 1, by Lemma <ref>.(<ref>) for either one
of these two values,
inf_∈ [a,b] h_t()
≥1/2-exp(-t^2(b-a)^2)/2√(π) t(b-a)≥1/2 - e^-1/2√(π)
>
1/3.
Then equations (<ref>), (<ref>) and
(<ref>) together imply our claim.
* If t(b-a)≤ 1, then we note that b ≤ a + 1/t and hence
N^_(X,ε) (a,b)
≤
N^_(X,ε)( a, a + 1/t).
We apply the first case to the parameters a and b':= a+1/t, and obtain
N^_(X,ε) (a,a + 1/t)/ (X)
= 𝒪( (a + 1/t+1 )
(1/t + 1/√(log g)) )
= 𝒪( a+1/√(log g)),
which is enough to conclude because a ≤ b.
We are now ready to conclude to the proof of our main result, Theorem <ref>.
We prove the upper and lower bounds separately, because they rely on
a different method.
First, we note that if t(b - a)< √(2e) then
∫_a^b (π)= 𝒪((b+1)(b - a))
= 𝒪( b +1 /√(log g)),
and hence the upper bound is then a trivial consequence of
Proposition <ref>. Thus, for the rest of the proof, we shall assume that
t(b - a) ≥√(2e). The control of the function h_t given by Lemma
<ref>.(<ref>) is not optimal near a and b. Therefore we
decompose the counting function as:
N^_(X,ε) (a,b)
=
N^_(X,ε)(a , a + η)
+
N^_(X,ε)(a + η, b - η)
+
N^_(X,ε)(b - η, b)
for a number η∈ [1/t, b-a/2] that will be
picked later (note that the hypothesis t(b - a) ≥√(2e)
implies that this interval is not empty).
On the one hand, we observe that the first and the third term on
the right hand side of (<ref>) can easily be
bounded above using Proposition <ref>:
N^_(X,ε)(a , a + η)
+ N^_(X,ε)(b - η, b)
/ (X)
=
𝒪( (b+1) (η + 1/√(log g)) )
=
𝒪( (b +1)η).
On the other hand, we can proceed as in the proof of Proposition
<ref> to bound the second term of the
right hand side, except more finely this time. More precisely, we
write again
N^_(X, ε)(a + η, b - η)/(X)inf_∈ [a + η , b - η] h_t() ≤∑_j=0^∞( h_t(_j)+h_t(-_j))
and then use Lemma <ref> to obtain a constant
C>0 such that
N^_(X, ε)(a + η, b - η)/(X)inf_∈ [a + η , b - η] h_t()
≤1/4π∫_a^b (π)+ C b +
1/√(log g).
Let us estimate the infimum of h_t on [a+η, b-η]. This infimum is attained at both
endpoints a+η and b-η. Using Lemma <ref>.(<ref>) we
obtain:
inf_∈ [a + η , b - η] h_t()
≥
1 - e^-t^2η^2/2√(π) t η
- e^-t^2 (b - a -η)^2/2 √(π) t (b - a -η)≥
1 - e^-t^2η^2/√(π)tη
because b - a - η≤η. We now observe that, for all
0 ≤ x ≤ 1/2, (1-x)^-1≤ 1+2x. We apply this inequality to
x = e^-t^2η^2/(√(π)tη) ≤ 1/2 (thanks to
the fact that tη≥ 1) and get:
( inf_∈ [a + η , b - η] h_t() )^-1≤ 1 + 2 e^-t^2η^2/√(π)tη≤ 1 + 2 e^-t^2η^2.
We now use the bound (<ref>) into equation
(<ref>), which yields
N^_(X, ε)(a + η, b - η)/(X) ≤ (1 + 2 e^-t^2η^2)
( 1/4π∫_a^b (π)+ C b +
1/√(log g))
≤1/4π∫_a^b (π)+ e^-t^2η^2∫_a^b (π)+ 3 C b + 1/√(log g).
By direct computations, one can check that the number
η := 1/t√(log( √(e/2) t(b - a) ))
satisfies the hypothesis 1/t≤η≤b-a/2 thanks to the
assumption t(b - a)≥√(2e). Then,
e^-t^2η^2∫_a^b (π)= 𝒪( e^-t^2η^2 (b+1)(b-a) )
= 𝒪( b+1/t)
= 𝒪( b+1/√(log(g)))
and therefore
N^_(X, ε)(a + η, b - η)/(X)≤1/4π∫_a^b (π)+ C' b+1/√(log g)
for a constant C'. We can then conclude using the
decomposition (<ref>), the bounds
(<ref>) and (<ref>) with
our value of η specified in (<ref>).
Since 0≤ h_t≤ 1 everywhere one has:
N^_(X,ε)(a,b)
≥∑_a ≤_j ≤ b h_t(_j)
which we can rewrite as
N^_(X,ε)(a,b)
≥∑_j=0^∞( h_t(_j) +h_t(-_j))
- ( ∑__j < a h_t(_j)
+ ∑__j> b h_t(_j)
+ ∑_j=0^∞ h_t(-_j)).
By Lemma <ref>, there exists C”>0 such that:
1/ (X)∑_j=0^∞( h_t(_j) +h_t(-_j))
≥1/4π∫_a^b (π)-
C”b + 1/√(log g).
It therefore suffices to prove that the three sums we subtract are
𝒪( b+1/√(log g)) to conclude.
We shall only present the proof for the sum after eigenvalues
larger than b, because one can treat the other cases
similarly. For a non-negative integer k denote
b_k := b + k/t. Then,
1/ (X)∑__j> b h_t(_j)
= 1/ (X)∑_k=0^∞∑_b_k ≤_j < b_k+1 h_t(_j)
≤∑_k=0^∞N^_(X,ε)(b_k,b_k+1)/ (X)sup_∈ [b_k,b_k+1] h_t().
For k ≥ 0, since b_k+1-b_k = 1/t = 𝒪(1/√(log(g))),
Proposition <ref> implies that
N^_(X,ε)(b_k,b_k+1)/ (X)
= 𝒪(
b_k+1+1/√(log(g)))
= 𝒪(
b+1/√(log(g))
(k+1)).
We then apply Lemma <ref>.(<ref>) to bound the supremum of h_t
on [b_k, b_k+1] for the terms k ≥ 1, and obtain that
∑_k=0^∞
(k+1) sup_∈ [b_k,b_k+1] h_t()
≤ 1 + ∑_k=1^∞
(k+1) exp(-t^2(b_k - b)^2)/2√(π) t (b_k - b)
≤ 1 + ∑_k=1^∞
(k+1) exp(-k^2)/2√(π) k = 𝒪(1)
which is what we need to conclude.
|
http://arxiv.org/abs/2307.02645v2
|
20230705202518
|
Cocharge and skewing formulas for $Δ$-Springer modules and the Delta Conjecture
|
[
"Maria Gillespie",
"Sean T. Griffin"
] |
math.CO
|
[
"math.CO",
"math.AG"
] |
Cocharge and skewing formulas for Δ-Springer modules and the Delta Conjecture
Maria Gillespie and Sean T. Griffin
August 1, 2023
========================================================
We prove that ωΔ'_e_ke_n|_t=0, the symmetric function in the Delta Conjecture at t=0, is a skewing operator applied to a Hall-Littlewood polynomial, and generalize this formula to the Frobenius series of all Δ-Springer modules. We use this to give an explicit Schur expansion in terms of the Lascoux-Schützenberger cocharge statistic on a new combinatorial object that we call a battery-powered tableau. Our proof is geometric, and shows that the Δ-Springer varieties of Levinson, Woo, and the second author are generalized Springer fibers coming from the partial resolutions of the nilpotent cone due to Borho and MacPherson.
We also give alternative combinatorial proofs of our Schur expansion for several special cases, and give conjectural skewing formulas for the t and t^2 coefficients of ωΔ'_e_ke_n.
§ INTRODUCTION AND MAIN RESULTS
Schur positivity is a central focus of algebraic combinatorics. One famous example is the Macdonald Positivity Conjecture, proven by Haiman <cit.>, which states that the symmetric Macdonald polynomials H_μ(x;q,t) expand in the Schur basis with positive coefficients in ℤ_+[q,t].
The proof uses the geometry of the Hilbert scheme Hilb_n(^2) of arrangements of n points in the plane ^2, and no direct combinatorial proof or explicit formula is yet known.
The Delta Conjecture <cit.>, which generalizes the recently-proven Shuffle Theorem <cit.>, motivates a major current area of research in symmetric function theory (e.g. <cit.>, <cit.>,<cit.>, <cit.>). It states two combinatorial formulas, in terms of parking functions, for Δ'_e_k-1e_n where Δ'_f is a particular eigenoperator of the Macdonald polynomials defined for any symmetric function f. One of the two Delta Conjecture formulas has been proven in <cit.>.
The Shuffle Theorem concerns the special case when k=n, in which Δ'_e_n-1 e_n is the bi-graded Frobenius series (in q,t) of the diagonal coinvariant ring
DR_n=ℚ[x_1,…,x_n,y_1,…,y_n]/I_n
where I_n is generated by the S_n-invariants with no constant term under the diagonal action of S_n permuting the x's and y's simultaneously. While the Shuffle Theorem gives a monomial expansion for Δ'_e_n-1e_n, an explicit formula for the Schur expansion is not known (and similarly for the Delta Conjecture).
In particular, the decomposition of a graded S_n-module R=⊕_d R_d into irreducibles can be described by its graded Frobenius character
Frob(R):=∑_d Frob(R_d) q^d
where R_d is the d-th graded piece and Frob is the additive map on representations that sends the irreducible S_n-module V_ν to the Schur function s_ν. For a bi-graded module, we use two parameters q,t and obtain a bi-variate generating series. This means that determining the Schur expansion for Macdonald polynomials, the Shuffle theorem polynomials, or those of the Delta Conjecture would lead to a deeper understanding of the S_n-representation theory of the associated (bi-)graded modules.
In the one-parameter case, setting t=0 often leads to more tractable problems. For instance, a famous result of Lascoux and Schützenberger was their discovery of the cocharge statistic on Young tableaux to give a combinatorial formula for the Schur expansion of the (modified) Hall-Littlewood polynomials H_μ(x;q), which are the t=0 specialization of the Macdonald polynomials. The polynomials H_μ(x;q) are the graded Frobenius character of the Garsia-Procesi modules R_μ. These S_n-modules in turn are the cohomology rings of Springer fibers _μ. The cocharge statistic therefore resolved the natural question of how R_μ decomposes into irreducible S_n-modules.
In particular, for a partition μ, define (μ) to be the set of all (straight shape) semistandard Young tableaux of content μ, meaning that the tableau entries consist of μ_i copies of i for each i, and the entries are weakly increasing across rows and strictly increasing up columns in French notation (as in the “device” part of the tableau at left in Figure <ref>). Lascoux and Schützenberger showed that
(R_μ)=H_μ(x;q)=∑_T∈(μ) q^(T) s_(T)=∑_νK_ν,μ(q)s_ν
where (T) is the shape of the tableau T, that is, the partition whose i-th part is the length of the i-th row of T from the bottom, and s_(T) is the corresponding Schur function. Above, K_ν,μ(q) is the q-Kostka polynomial, and is the cocharge statistic as defined in Section <ref>.
One of the main results of this article generalizes the Lascoux–Schützenberger formula to the cohomology rings of the Δ-Springer varieties, which were recently introduced by Levinson, Woo and the second author <cit.>. These graded S_n-modules are denoted by R_n,λ,s and simultaneously generalize both the Garsia-Procesi modules R_μ and the generalized coinvariant rings R_n,k that were defined by Haglund, Rhoades, and Shimozono <cit.> to give an algebraic realization of the Delta Conjecture polynomial Δ'_e_k-1e_n at t=0. We obtain this result by connecting the Δ-Springer varieties to the theory of partial resolutions of nilpotent varieties due to Borho and MacPherson <cit.>.
The rings R_n,λ,s, first introduced in <cit.>, are defined for integers n,s and a partition λ with |λ|=k≤ n and s≥ℓ(λ). In the special case when n=|μ|, the ring R_n,μ,s coincides with R_μ. When λ = (1^k) and s=k, the ring R_n,λ,s coincides with R_n,k. Because the common generalization R_n,λ,s has a geometric interpretation as the cohomology rings of the Δ-Springer varieties
Y_n,,s <cit.>, we refer to them here as the Δ-Springer modules.
§.§ New skewing, charge and cocharge formulas for
We prove that the graded Frobenius character H_n,,s:=(R_n,,s) has the following skewing formula.
Let Λ = ((n-k)^s) + λ, where addition is computed coordinate-wise. We have
H_n,λ,s(x;q) = s_((n-k)^{s-1})^⊥ H_Λ(x;q)/q^{\binom{s-1}{2}(n-k)}.
In the above statement, s_ν^⊥ denotes the adjoint operator to multiplication by s_ν with respect to the Hall inner product on symmetric functions.
The proof of Theorem <ref> relies heavily on the work of Borho and MacPherson on partial resolutions of the nilpotent cone. We show that the Δ-Springer varieties Y_n,,s are instances of the family of varieties studied in their work <cit.>. We prove a rational smoothness condition that enables us to use a result in <cit.> derived using the theory of perverse sheaves to obtain the Frobenius character.
As an immediate corollary, we have the following simple formula for the symmetric function in the Delta Conjecture at t=0. We write rev_q for the operation of reversing the coefficients of the q polynomial, by setting q→ q^-1 and multiplying by q^d where d is the degree. We also write H_μ(x;q) = rev_q(H̃_μ(x;q)) for the (transformed) Hall-Littlewood symmetric functions, where H̃_μ(x;q) denotes the modified Hall-Littlewood polynomial.
In the R_n,k case, we have
Frob(R_n,k) = ω∘ rev_q(Δ'_e_k-1e_n|_t=0) = s_((n-k)^{k-1})^⊥ H_((n-k+1)^k)(x;q)/q^{\binom{k-1}{2}(n-k)}.
Equivalently,
ωΔ'_e_k-1e_n|_t=0 = s_((n-k)^k-1)^⊥ H_((n-k+1)^k)(x;q).
We now provide a combinatorial Schur expansion for (R_n,,s) that generalizes Equation (<ref>). We first make more rigorous the definition of the partition Λ mentioned above.
For a fixed n,λ,s with k=|λ|≤ n and ℓ(λ)≤ s, define Λ_n,λ,s to be the partition formed by adding an s× (n-k) rectangle at the left of the diagram of λ. In other words Λ_n,λ,s=(n-k+λ_1,n-k+λ_2,…,n-k+λ_r,n-k,…,n-k) where there are s parts in total. As an example, for n=6, λ=(2,1), s=4, we have Λ_n,λ,s=(5,4,3,3).
A battery-powered tableau of parameters n,λ,s consists of a pair T=(D,B) of semistandard Young tableaux, where B is rectangular of shape (s-1)× (n-k), and the total content of D and B is Λ_n,λ,s. We call D the device of T and B the battery. We define the shape of T to be the shape of its device, that is, sh^+(T)=sh(D).
We write 𝒯^+(n,λ,s) to denote the set of all battery-powered tableaux of parameters n,λ,s. For T∈𝒯^+(n,λ,s), we write cc(T) and ch(T), respectively, to denote the cocharge and charge of the word formed by concatenating
the reading words of D and B in that order (see Section <ref>).
We prove the following formula for the graded Frobenius character of R_n,λ,s, which was originally conjectured in <cit.>.
We have
H_n,λ,s(x;q):=Frob(R_n,λ,s)=1/q^{\binom{s-1}{2}(n-k)}∑_T∈𝒯^+(n,λ,s) q^{cc(T)} s_{sh^+(T)}(x).
We think of the battery as storing extra charge for the device. The q-exponent \binom{s-1}{2}(n-k) is the largest amount of cocharge that may be stored in the battery.
Suppose n=9, λ=(3,2,1,1), and s=4. Then Λ_n,λ,s=(5,4,3,3) and an example of a battery-powered tableau is shown in Figure <ref>. Its cocharge is 12 and shape is (6,2,1), and the normalization factor in Theorem <ref> is q^{-\binom{3}{2}· 2}=q^-6, so one of the terms of the summation above is q^-6· q^12 s_(6,2,1)=q^6 s_(6,2,1).
In order to prove Theorem <ref> from Theorem <ref>, we apply the operator s_((n-k)^s-1)^⊥ directly to Equation (<ref>), and in the process, we also obtain the following formula (in the Delta Conjecture case) in terms of Littlewood-Richardson coefficients and q-Kostka polynomials.
We have
⟨ s_μ, ωΔ'_e_k-1e_n|_t=0⟩ = ∑_ν⊢ k(n-k+1) c_μ,((n-k)^k-1)^ν K_ν,((n-k+1)^k)(q).
By applying to Theorem <ref>, we can obtain the following alternative simpler expansion in terms of the generalized charge statistic.
We have
(H_n,,s)=((R_n,,s))=∑_T∈𝒯^+(n,,s) q^(T)s_^+(T)(x).
Specializing to the case relevant to the Delta Conjecture, =(1^k) and s=k, we have a new Schur expansion for the expression in the Delta Conjecture at t=0.
We have
Δ'_e_k-1e_n|_t=0 = ∑_T∈𝒯^+(n,(1^k),k) q^(T)s_^+(T)^*(x),
where ^+(T)^* is the transpose of the partition ^+(T).
Since the proof of Theorems <ref> and <ref> that we present here is essentially geometric in nature, it is also of interest to find a more direct combinatorial proof, using the existing expansions of (R_n,,s) in terms of monomials or sums of Hall-Littlewood polynomials. The following theorem summarizes some of our progress towards a combinatorial proof.
There is a direct combinatorial proof of Theorem <ref> for:
* s=2 and any n, (see Section <ref>),
* The coefficient of s_(n) in the t=0 Delta conjecture case (see Section <ref>).
This proposition was stated without full proof details in the conference proceedings article <cit.>, and we provide the complete proofs in this paper. In the companion paper <cit.> to this work, the authors will provide combinatorial proofs of two additional special cases using a new formula in terms of creation operators and the Loehr-Warrington algorithms on abaci.
§.§ Outline
After establishing background definitions and notation in Section <ref>, we prove Theorem <ref> in Section <ref>. We then prove Theorem <ref> and Theorem <ref> in Section <ref> and check that the highest degree terms agree with what we would expect. In Section <ref>, we give a combinatorial proof of Theorem <ref> at s=2, and in Section <ref>, we prove it for the s_(n) coefficient in the Delta conjecture case. In Section <ref>, we give conjectural formulas for the Delta Conjecture symmetric function for t degree at most 2 in terms of skewing sums of Hall-Littlewood polynomials. Finally, in Section <ref>, we outline potential future research directions.
§.§ Acknowledgments
We thank Brendon Rhoades for inspiring conversations at the start of this work, and Jim Haglund for helpful feedback after a talk on this material. We also thank William Graham and Amber Russell for helpful conversations on partial resolutions.
§ BACKGROUND
We now recall some background and definitions on tableaux operations, cocharge and charge, and geometry related to the Δ-Springer varieties. We refer to <cit.> for the definition of the basic operation of jeu de taquin rectification on skew semistandard Young tableaux.
§.§ Tableaux and insertion
We write partitions λ=(λ_1,…,λ_r) with their parts nonincreasing: λ_1≥λ_2≥⋯≥λ_r and write r=ℓ(λ) for the length of λ. We draw them in French notation, with λ_i boxes in the i-th row from the bottom, and use the shorthand (a^b)=(a,a,a,…,a) to denote the b× a rectangular partition with b parts of size a. A semistandard Young tableau (SSYT) of shape λ is a filling of the boxes of λ that weakly increases across rows and strictly increases up columns. As stated in the introduction, we write (μ) for the set of semistandard Young tableaux of content μ (and any shape).
The reading word of a tableau is the word formed by concatenating the rows from top to bottom. For instance, the reading word of the battery-powered tableau in Figure <ref> is
433111222442311.
The RSK insertion or row bumping of a letter i into a tableau T is the tableau T' formed by inserting i into the bottom row R_1 of T, where it is placed at the end if i is greater than or equal to every element of R_1 and otherwise it replaces the leftmost entry m of R_1 that is greater than i. Then m is inserted into the second row R_2 in the same manner, and so on until the process is complete and a new entry is added. RSK insertion is reversible given the final bumped entry <cit.>, and we call the reverse process unbumping.
We also say the RSK insertion of a tableau B into a tableau D (such as in the case of a battery B and device D) is the tableau T' formed by inserting the letters of the reading word of B one at a time into D. We write T'=D· B. It is well-known (see <cit.>) that D· B is equal to the jeu de taquin rectification of the skew tableau formed by placing B down-and-right of D. We use this equivalence implicitly in this paper.
Two words are Knuth equivalent if their RSK insertions (one letter at a time inserted into the empty tableau from left to right) are equal.
A horizontal strip is a skew shape in which no two boxes appear in the same column. It is known that RSK inserting a nondecreasing sequence into a tableau T extends the shape of T by a horizontal strip.
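To make the bumping procedure concrete, here is a minimal Python sketch (our own illustrative code, not notation from the references); tableaux are stored as lists of rows in French notation, bottom row first.

def row_insert(tableau, x):
    """RSK row bumping of a letter x into a semistandard tableau."""
    rows = [row[:] for row in tableau]          # work on a copy
    for row in rows:
        # leftmost entry strictly greater than x, if any
        i = next((j for j, y in enumerate(row) if y > x), None)
        if i is None:
            row.append(x)                       # x sits at the end of this row
            return rows
        row[i], x = x, row[i]                   # replace it and bump it to the next row
    rows.append([x])                            # start a new row on top
    return rows

def insert_word(tableau, word):
    """Insert the letters of a word one at a time, left to right."""
    for x in word:
        tableau = row_insert(tableau, x)
    return tableau

print(insert_word([], [2, 1, 2]))               # [[1, 2], [2]]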
§.§ Tableau switching
Given partitions μ and λ, we write μ⊆λ and say μ is contained in λ if μ_i≤λ_i for all i, or equivalently if the partition diagram of μ is contained in the partition diagram of λ.
If we have containment of partition diagrams μ⊆λ⊆ν, we say the skew shape ν/λ extends the skew shape λ/μ.
We similarly say an SSYT T of shape ν/λ extends an SSYT S of shape λ/μ. If T extends S, we can switch T and S past each other using either of the following algorithms:
* Perform inner jeu de taquin slides on T into the cells of S as ordered from largest to smallest (with ties broken in reading order), and labeling each vacated cell after each slide with the corresponding element of S.
* Perform outer jeu de taquin slides on S into the cells of T as ordered from smallest to largest (with ties broken in reading order), labeling each vacated cell after each slide with the corresponding element of T.
The output of switching, using either algorithm, is a pair (T',S') of tableaux such that S' extends T', the content of T (resp., S) equals that of T' (resp., S'), and the total shape of T'∪ S' equals that of S∪ T. The equivalence of the two algorithms, and other facts on tableaux switching, can be found in Haiman's foundational paper on dual equivalence <cit.>.
§.§ Symmetric functions
We work in the ring of symmetric functions over ℚ in the countably infinite set of variables x_1,x_2,x_3,…, which we often simply abbreviate as x. We refer to <cit.> for the definitions of the Schur functions s_λ(x) and the elementary symmetric functions e_λ(x).
We recall that the Hall inner product is the symmetric inner product ⟨,⟩ on the space of symmetric functions for which ⟨ s_,s_μ⟩ =δ_λμ. We write f^⊥ for the adjoint operator to multiplication by f with respect to the Hall inner product; that is,
⟨ f^⊥( g),h⟩ =⟨ g,f· h ⟩.
It is known that s_μ^⊥ s_ν=s_ν/μ. We now observe a representation theoretic meaning of the operator s_μ^⊥ (our statement can essentially be found in different language in <cit.>, and we include details and proof here for completeness). In the below statement, the V_μ-isotypic component of an S_n-module W is the sum of all copies of the irreducible Specht module V_μ in the decomposition of W into irreducibles.
Given W an S_n-module, S_n-m× S_m a Young subgroup, and a partition μ⊢ m, then
s_μ^⊥(W) = 1/(V_μ)(W^V_μ)
where W^V_μ is the V_μ-isotypic component of the restriction of W to an S_m-module, whose Frobenius character is taken as an S_n-m-module.
By linearity, it suffices to check the lemma for W = V_ν where ν⊢ n. In this case,
Res^S_n_S_n-m× S_m(V_ν) = ⊕_λ⊢ m V_ν/λ⊗ V_λ
where V_ν/λ is the skew Specht module corresponding to ν/λ. Then the V_μ-isotypic component of V_ν is (V_ν/μ)^⊕(V_μ).
The formula follows since s_μ^⊥(V_ν) = s_μ^⊥ s_ν = s_ν/μ.
Also recall the omega involution on symmetric functions which may be defined as the unique linear operator ω such that ω(s_λ) = s_λ^*, where λ^* is the conjugate partition of λ.
Given a symmetric function f(x;q) with coefficients in ℚ[q], we have the q-reversal operator which reverses the coefficients of f as a polynomial in q. Precisely, if f(x;q) has q degree d as a polynomial in q with symmetric function coefficients, then (f(x;q)) = q^d f(x;1/q).
§.§ Charge and cocharge
We first define cocharge on words, using the reading word of the tableau T in Figure <ref> as a running example:
433111222442311.
The first cocharge subword is formed by searching right to left in the reading word for a 1, then continuing from that position to search for a 2 (wrapping around the end cyclically if necessary), and so on until we have reached the largest letter of the word; its letters are shown in bold below:
𝟒 3 𝟑 1 1 1 2 2 2 4 4 𝟐 3 1 𝟏.
The cocharge labeling of a permutation is computed by searching right to left cyclically as before, labeling the entries 1,2,3,… in order, and starting by labeling the 1 with a 0 and incrementing the label if and only if the next entry is to the left of the previous:
4_3 3 3_2 1 1 1 2 2 2 4 4 2_1 3 1 1_0.
We then similarly find and label the second cocharge subword among the unlabeled letters:
4_3 3_2 3_2 1 1 1 2 2 2_1 4 4_2 2_1 3 1_0 1_0.
We iterate this process again on the unlabeled letters:
4_3 3_2 3_2 1 1 1_0 2 2_0 2_1 4_1 4_2 2_1 3_0 1_0 1_0.
We have now used all the 3's and 4's; the final two cocharge subwords have lengths 2 and 1, and labeling them completes the labeling of the word:
4_3 3_2 3_2 1_0 1_0 1_0 2_0 2_0 2_1 4_1 4_2 2_1 3_0 1_0 1_0.
In Figure <ref>, the cocharge labels on the reading word elements are shown in the corresponding squares at right. The charge labels are placed in the same order as cocharge labels except we increment when the next element is to the right of the previous.
The cocharge (resp. charge) of T, written (T) and (T) respectively, is the sum of the cocharge (resp. charge) labels of its reading word.
Therefore, the cocharge of the word above is 3+2+2+1+1+2+1=12.
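The extraction procedure just described is easy to implement; the following Python sketch (our own code, purely illustrative, assuming the content of the word is a partition) returns 12 on the running example.

def cocharge(word):
    """Cocharge via cocharge subwords, for a word with partition content."""
    n = len(word)
    label = [None] * n                     # cocharge label of each position
    total = 0
    while any(l is None for l in label):
        # the rightmost unlabelled 1 starts a new cocharge subword
        pos = max(i for i in range(n) if label[i] is None and word[i] == 1)
        label[pos], cur, value = 0, 0, 1
        while True:
            # search right to left, cyclically, for the next value
            order = list(range(pos - 1, -1, -1)) + list(range(n - 1, pos, -1))
            nxt = next((i for i in order
                        if label[i] is None and word[i] == value + 1), None)
            if nxt is None:
                break
            if nxt < pos:                  # moved strictly to the left: increment
                cur += 1
            label[nxt] = cur
            total += cur
            pos, value = nxt, value + 1
    return total

print(cocharge([4, 3, 3, 1, 1, 1, 2, 2, 2, 4, 4, 2, 3, 1, 1]))   # 12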
Cocharge and charge are invariant under bumping: we have (D· B)=(T') and (D· B)=(T') where T' is the insertion of B into D. This is because RSK insertion preserves the Knuth equivalence class of the reading word <cit.>, and cocharge and charge are invariant under Knuth equivalence <cit.>.
The maximum possible cocharge of a semistandard Young tableau of a given content ν occurs in the unique such tableau that has shape ν as well. In this case, the cocharge label of each of the ν_i entries in the i-th row is i-1. This leads to the following definition, which we use frequently throughout.
We define the partition statistic
𝐧(λ)=∑_i (i-1)λ_i.
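For example, for the partition λ=(3,2,1,1) appearing in the running example above,
𝐧(λ) = 0· 3 + 1· 2 + 2· 1 + 3· 1 = 7.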
§.§ Hall-Littlewood polynomials
We recall the Hall-Littlewood polynomials, which are symmetric functions with coefficients in a parameter q.
Given a partition μ of n, the transformed Hall-Littlewood polynomial H_μ(x;q) is the symmetric function with Schur expansion given by the charge statistic,
H_μ(x;q) = ∑_T∈SSYT(μ) q^{ch(T)} s_{sh(T)}.
Alternatively, applying the operator rev_q we get the modified Hall-Littlewood polynomial, with Schur expansion given by the cocharge statistic,
H̃_μ(x;q)= rev_q(H_μ(x;q)) = ∑_T∈SSYT(μ) q^{cc(T)} s_{sh(T)}.
As mentioned in the introduction, the modified Hall-Littlewood polynomial H_μ(x;q) is the graded Frobenius character of R_μ, the cohomology ring of the Springer fiber _μ, which we define in the next subsection.
§.§ Springer fibers and -Springer varieties
Let G = GL_K(), let B be the Borel subgroup of invertible upper triangular matrices, and let (K) = G/B be the complete flag variety, which may be identified with the space of complete flags (K) = {F_∙ = (F_1⊂ F_2⊂⋯⊂ F_K)|(F_i) = i, F_K = ^K}. Let be the nilpotent cone of K× K nilpotent matrices.
The group G acts on via the adjoint action, Ad(g)x gxg^-1.
For x∈ nilpotent, we write (x) for the Jordan type of x, which is the partition of K recording the Jordan block sizes of x in Jordan canonical form. The set of all x∈ with a fixed Jordan type μ is an orbit of under the adjoint action of G, which we denote by _μ.
Given x∈, the Springer fiber associated to x is
_x = {F_∙∈(K)| xF_i ⊆ F_i for all i}.
The isomorphism type of _x only depends on (x), and thus we may write _μ for any x∈_μ.
Springer discovered that these varieties have the remarkable property that the symmetric group S_K acts on the cohomology ring H^*(_μ;) and (in Lie type A) the top nonzero cohomology group is an irreducible Specht module,
H^top(_μ;) ≅ V_μ.
More generally, Hotta and Springer <cit.> proved that
(H^*(_μ;)) = H_μ(x;q).
In <cit.>, Levinson, Woo, and the second author introduced the Δ-Springer varieties that generalize the Springer fibers and give a geometric realization of the symmetric function in the Delta Conjecture at t=0.
Let n,λ,s be as in Definition <ref>, and let K = |Λ| = k + (n-k)s = n+(n-k)(s-1). Let x be a nilpotent K× K matrix with Jordan type Λ, and let P be a parabolic subgroup of G = GL_K with block sizes α = (1^n,(n-k)(s-1)), so that = G/P corresponds to partial flags (F_1⊂ F_2⊂⋯⊂ F_n ⊂ F_n+1) with (F_i) = i for i≤ n and F_n+1 = ^K. The Δ-Springer varieties are defined to be
Y_n,,s{F_∙∈| xF_i⊆ F_i for all i and F_n ⊇im(x^n-k)}.
Recall that we write k=|λ|. When k=n (and s is arbitrary), Y_n,,s≅_λ, so these varieties generalize the Springer fibers.
Levinson, Woo, and the second author proved that the Δ-Springer varieties Y_n,λ,s have several geometric and combinatorial properties that generalize those of Springer fibers:
* Y_n,λ,s is equidimensional of dimension 𝐧() + (n-k)(s-1).
* There is an S_n action on H^*(Y_n,λ,s;).
* The top cohomology group is a skew Specht module H^top(Y_n,λ,s;) ≅ V_Λ/((n-k)^s-1).
* H^*(Y_n,λ,s) has a presentation as a quotient of the polynomial ring [x_1,…, x_n] which coincides with the ring R_n,,s introduced in <cit.>. In the special case = (1^k) and s=k, the cohomology ring coincides with the generalized coinvariant rings of Haglund, Rhoades, and Shimozono, H^*(Y_n,(1^k),k;) = R_n,k.
Notably, in the special case when = (1^k) and s=k, then
(H^*(Y_n,(1^k),k;)) = (R_n,k)=ω∘(Δ'_e_k-1e_n|_t=0),
so Y_n,(1^k),k gives a geometric realization of the symmetric function in the Delta Conjecture at t=0 (up to a minor twist).
§.§ Rational smoothness and intersection cohomology
A complex variety X of complex dimension n is rationally smooth if either of the following equivalent conditions is satisfied:
* For all x∈ X, H^i(X,X-x;) is for i=2n and 0 for i≠ 2n.
* For all x∈ X, the local intersection cohomology is trivial, meaning IH^i_x(X;) = for i=0 and 0 for i≠ 0.
Here IH^*_x is the middle local intersection cohomology, see <cit.>. See <cit.> for a proof of the fact that (1) and (2) above are equivalent. We do not define intersection cohomology here, but the essential property of local intersection cohomology that we need is that for x∈_μ,
∑_k q^k (IH_x^2k(_ν;)) = q^-𝐧(ν)K_ν,μ(q),
which is a result due to Lusztig <cit.>. See also <cit.> for more details and related results.
In particular, (<ref>) reflects the fact that
_ν = ⋃_μ≼ν_μ,
where ≼ is dominance order on partitions of the same size, defined by μ≼ν if μ_1 + ⋯ +μ_i ≤ν_1 + ⋯ + ν_i for all i <cit.>.
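For instance, on partitions of 4 dominance order is the total order
(1,1,1,1) ≼ (2,1,1) ≼ (2,2) ≼ (3,1) ≼ (4);
in general it is only a partial order (e.g. (3,1,1,1) and (2,2,2) are incomparable).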
We will need the next fact, which follows easily from the Relative Künneth Formula for the local cohomology of a product space.
Suppose f:X→ Y is a fiber bundle with fiber F such that both F and Y are rationally smooth. Then X is also rationally smooth.
§.§ Borho and MacPherson's partial resolutions
Let P be a parabolic subgroup, and let = G/P be the corresponding partial flag variety. Let L be the Levi subgroup associated to P, let _L be the nilpotent cone of L, and finally let = ⊕ be the Levi decomposition of = (P), where = (L) and is the nilradical of .
Explicitly, P is the set of invertible block upper triangular
matrices with block sizes given by some composition α of K, G/P is the variety of partial flags (V_1⊆ V_2⊆⋯⊆ V_ℓ) of ^K with (V_i/V_i-1) = α_i for all i, and L is the subgroup of invertible block diagonal matrices with block sizes given by α. The Lie algebra _L is the set of nilpotent block diagonal matrices, is the set of block upper triangular matrices, and is the set of block diagonal matrices, with block sizes given by the parts of α.
For K=7 and P the parabolic subgroup with block sizes α = (3,1,1,2), the Levi decomposition = ⊕ has the form
[ ∗ ∗ ∗ ∗ ∗ ∗ ∗; ∗ ∗ ∗ ∗ ∗ ∗ ∗; ∗ ∗ ∗ ∗ ∗ ∗ ∗; 0 0 0 ∗ ∗ ∗ ∗; 0 0 0 0 ∗ ∗ ∗; 0 0 0 0 0 ∗ ∗; 0 0 0 0 0 ∗ ∗ ]
=
[ ∗ ∗ ∗ 0 0 0 0; ∗ ∗ ∗ 0 0 0 0; ∗ ∗ ∗ 0 0 0 0; 0 0 0 ∗ 0 0 0; 0 0 0 0 ∗ 0 0; 0 0 0 0 0 ∗ ∗; 0 0 0 0 0 ∗ ∗ ]⊕[ 0 0 0 ∗ ∗ ∗ ∗; 0 0 0 ∗ ∗ ∗ ∗; 0 0 0 ∗ ∗ ∗ ∗; 0 0 0 0 ∗ ∗ ∗; 0 0 0 0 0 ∗ ∗; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 ]
Borho and MacPherson introduced the partial resolutions of the nilpotent cone, defined by
ξ:𝒩^P := G×_P (𝒩_L + 𝔫) →𝒩,
where ξ(g,x) = Ad(g)x=gxg^-1. Here, the ×_P notation denotes that we are taking the quotient of the product space by the P action p· (g,x) = (gp^-1,Ad(p)x).
In type A, 𝒩^P has the following alternative description in terms of partial flags,
𝒩^P ≅{(F_∙,x)∈ G/P×𝒩 | xF_i⊆ F_i for all i},
where ξ is the projection onto the second factor. In particular, when P=B then 𝒩_L = 0 and 𝔫 consists of the strictly upper-triangular matrices, and hence we recover the usual Springer resolution, which we denote by π : 𝒩̃ = 𝒩^B→𝒩.
Given t∈𝒩_L, let 𝒪_t = Ad(L)t. Let y = (1,t + u)∈𝒩^P for arbitrary u∈𝔫.
The subspaces
𝒩̃_y := G×_P (𝒪_t + 𝔫)
partition 𝒩^P as t varies over 𝒩_L. Since 𝒩̃_y is a fiber bundle over G/P with fiber 𝒪_t+𝔫, taking the closure we have
\overline{𝒩̃_y} = G×_P(\overline{𝒪_t} + 𝔫),
which can be seen by taking the closure on each trivializing open subset of G/P.
The usual Springer resolution π:→ factors through ξ. Letting ξ_y be the restriction of ξ to _y, we have the following commutative diagram.
[Commutative diagram: η maps the Springer resolution to _y, which includes into ^P; the vertical maps ξ_y and ξ send both spaces to 𝒩, and the composite around the square is π.]
Given x∈, the generalized Springer fiber is _x^yξ_y^-1(x) = _y∩ξ^-1(x). Note that the ordinary Springer fibers _x are recovered when P=B is the full Borel subgroup (and y=0).
In type A, the variety _x^y can alternatively be described in terms of partial flags as follows. Given (g,x')∈_x^y, let F_∙∈ be the partial flag corresponding to gP. That is, letting e_1,…, e_K be the standard basis vectors of ^K (not to be confused with the elementary symmetric polynomials), then F_i = span{ge_1,…,ge_α_1+⋯ + α_i}. Since (g,x')∈_x^y, then by definition x = (g)x', and it can be checked that xF_i⊆ F_i for all i. Thus, x induces a nilpotent endomorphism of F_i/F_i-1 for all i, which we denote by x|_F_i/F_i-1. Letting t = t_1+⋯ +t_ℓ be the block decomposition of t, it then follows from (<ref>) that
_x^y ≅{F_∙∈| xF_i ⊆ F_i for all i and (x|_F_i/F_i-1) ≼(t_i) for all i}.
Let (L)_t be the Springer fiber of t in the flag variety (L)≅(α_1)×⋯×(α_ℓ) for the group L. In other words,
(L)_t ≅ ((α_1))_t_1×⋯× ((α_ℓ))_t_ℓ.
Borho and MacPherson showed that η^-1(y)≅(L)_t. We write d_y = _(η^-1(y)) = _((L)_t).
Let ρ(t,1) be the irreducible representation of W_L = S_α_1×⋯× S_α_ℓ on H^top((L)_t;). In other words, ρ(t,1) ≅ V_(t_1)⊗⋯⊗ V_(t_ℓ) as a W_L-module. Given a W-module V, recall that V^ρ(t,1) is the isotypic component corresponding to ρ(t,1) of the restriction of V to a W_L-module. Observe that the “partial Weyl group” W^P = N_G(L)/L of permutations of the blocks of L of equal size acts on V^ρ(t,1).
The partial Weyl group W^P = N_G(L)/L acts on H^*(_x^y;), and the Springer action of W = S_K restricts to an action of W^P on H^*(_x;)^ρ(t,1).
Furthermore, if _y is rationally smooth at all points of _x^y, then there is a graded isomorphism of W^P-modules
H^i(_x^y;) ⊗ H^2d_y((L)_t;) ≅ H^i+2d_y(_x;)^ρ(t,1)
for all i, where W^P acts trivially on the second factor of the tensor product.
§ PROOF OF THE MAIN THEOREM
In this section, we prove Theorem <ref> using the geometry of Borho–MacPherson partial resolutions. Readers interested in the combinatorial applications of the formula may skip to Section <ref>.
We begin with a technical lemma that will help us apply Theorem <ref> to our setting of Δ-Springer varieties.
Let 𝒪_μ be the Ad(G)-orbit of elements of the nilpotent cone 𝒩 with Jordan type μ. For μ a rectangular partition μ = (a^b) (so n=ab), we have
𝒪̄_μ = ⋃_{ν⊢ n, ν_1≤ a} 𝒪_ν.
By (<ref>), the statement of the lemma is equivalent to: ν≼ (a^b) if and only if ν_1≤ a. In the forward direction, if ν≼ (a^b), then ν_1≤ a follows by definition of dominance order. For the converse, suppose that ν_1≤ a, so that ν_i≤ a for all i, since ν is a partition. Then ν_1+⋯ + ν_i ≤ a· i, which is the sum of the first i parts of (a^b), so the lemma follows.
Let x be a nilpotent K× K matrix such that (x) = Λ_n,,s, and let F_∙∈ Y_n,,s. Letting x|_^K/F_n be the nilpotent endomorphism of ^K/F_n induced by x, we have (x|_^K/F_n) ⊆ ((n-k)^s).
The statement of the lemma is independent of conjugating x by an invertible matrix. We choose x to be of the following form: Label the Young diagram of Λ_n,,s with the standard basis vectors e_1,…,e_K in order from right to left along each row, bottom to top.
For example, when n=5, =(2,1), and s=3, then we have the labeling
e_9 e_8
e_7 e_6 e_5
e_4 e_3 e_2 e_1
.
Define x to be the K× K matrix such that xe_i = e_j if e_i is in the cell immediately to the left of e_j, and xe_i = 0 if e_i is in the right-most cell in its row. Then im(x^n-k) is the span of the k vectors in the cells of Λ that are in columns >n-k from the left in the Young diagram (in this case, e_1,e_2,e_5). Since F_∙∈ Y_n,,s, then F_n⊇im(x^n-k). Thus, the Jordan type of x|_^K/F_n is contained in (x|_^K/im(x^n-k))= ((n-k)^s).
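The explicit matrix x and the image im(x^{n-k}) in this proof are easy to reproduce by machine. The short Python sketch below is illustrative only; the function names are ours, and the convention Λ_{n,λ,s} = (λ_1+n-k,…,λ_s+n-k) is taken from the examples in this paper. It rebuilds the labelled diagram for n=5, λ=(2,1), s=3 and confirms that im(x^2) is spanned by e_1, e_2, e_5.

```python
import numpy as np

def Lambda(n, lam, s):
    """The partition Lambda_{n,lambda,s}: pad lambda to s parts and add n-k to each part,
    where k = |lambda| (this is how Lambda is used in the examples of this paper)."""
    k = sum(lam)
    lam = list(lam) + [0] * (s - len(lam))
    return [p + (n - k) for p in lam]

def nilpotent_x(Lam):
    """Nilpotent matrix of Jordan type Lam, with the basis labelling of the text:
    cells are labelled e_1, e_2, ... right-to-left along rows, bottom (longest) row first,
    and x sends each cell's vector to the cell immediately to its right (0 if rightmost)."""
    K = sum(Lam)
    label, idx = {}, 0
    for r, length in enumerate(Lam):          # r = 0 is the bottom row
        for c in reversed(range(length)):     # right to left
            label[(r, c)] = idx
            idx += 1
    x = np.zeros((K, K), dtype=int)
    for (r, c), i in label.items():
        if (r, c + 1) in label:
            x[label[(r, c + 1)], i] = 1       # x e_i = e_j with e_j the cell to the right
    return x

# Example from the text: n=5, lam=(2,1), s=3, so Lambda = (4,3,2), K=9, n-k=2.
n, lam, s = 5, (2, 1), 3
k = sum(lam)
x = nilpotent_x(Lambda(n, lam, s))
xp = np.linalg.matrix_power(x, n - k)
# im(x^{n-k}) should be spanned by e_1, e_2, e_5 (0-based indices 0, 1, 4)
image_support = {int(np.flatnonzero(col)[0]) for col in xp.T if col.any()}
assert image_support == {0, 1, 4}
```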
Let α = (1^n,K-n), (x) = Λ_n,,s, and (t_i) = (1) for i≤ n and (t_n+1) = ((n-k)^s-1). Then _x^y ≅ Y_n,λ,s.
Given F_∙∈, then by (<ref>), F_∙∈_x^y if and only if (x|_F_i/F_i-1) ≼(t_i) for all i. For α=(1^n,K-n), this is equivalent to (x|_^K/F_n) ≼ ((n-k)^s-1), which by Lemma <ref> is equivalent to (x|_^K/F_n)_1 ≤ n-k.
We claim that (x|_^K/F_n)_1≤ n-k if and only if F_∙∈ Y_n,,s. Indeed, the reverse direction follows by Lemma <ref>. We prove the forward direction by proving the contrapositive: Suppose F_∙∉ Y_n,,s, meaning im(x^n-k)⊈F_n. Then there exists some nonzero v∈im(x^n-k)∖ F_n. The transpose operator x^t is the linear operator defined by x^te_i = e_j if and only if e_i is in the cell immediately to the right of e_j, and x^t e_i = 0 if e_i is in the first column. Then v,x^tv,(x^t)^2v,…, (x^t)^n-kv∉ F_n since xF_n ⊆ F_n. Furthermore, it can be checked that they are linearly independent vectors. Choosing a basis of ^K/F_n that includes these n-k+1 vectors shows that (x|_^K/F_n)_1 ≥ n-k+1. Thus, the claim is proved, and it follows that F_∙∈_x^y if and only if F_∙∈ Y_n,,s.
Let α = (1^n,K-n), (x) = Λ_n,,s, and (t_i) = (1) for i≤ n and (t_n+1) = ((n-k)^s-1). In this case, the hypotheses of Theorem <ref> hold: _y is rationally smooth at all points of _x^y.
Given (g,x')∈_x^y, then x' = (g^-1)x ∈_t +. Let F_∙ be the partial flag corresponding to gP, meaning F_i = span{ge_1,…, ge_i} for i≤ n. By Lemma <ref>, we have (x|_^K/F_n)⊆ ((n-k)^s).
Since x' = (g^-1) x, we have a commutative diagram
^K/F_n [r,"x"] ^K/F_n
^K/span{e_1,…, e_n}[r,"x'"][u,"g"] ^K/span{e_1,…, e_n}[u,"g"]
which implies that (x|_^K/F_n) = (x'|_^K/span{e_1,…,e_n}).
Thus, the Jordan type of the last diagonal block of x' has length at most s. Thus,
x' ∈_t∖ Z + where
Z ⋃_t' ∈_t,
ℓ((t'_n+1)) > s_t'.
Note that Z is a closed subvariety of _t.
We thus have _x^y ⊆ G×_P(_t∖ Z + ). We claim that, since G×_P(_t∖ Z + ) is an open subset of _y, it suffices to show that _t∖ Z is rationally smooth.
Indeed, the space G×_P(_t∖ Z + ) is homeomorphic to the fiber product (G×_P _t∖ Z)×_ (G×_P) (of fiber bundles over ). Since G×_P is smooth, then it suffices to check that _t∖ Z is rationally smooth by Lemma <ref>.
Equivalently, we must show that _((n-k)^s-1)∖ Z' is rationally smooth, where
Z' ⋃_ν⊢ (n-k)(s-1),
ℓ(ν) > s_ν.
Since local intersection cohomology only depends on a neighborhood of u and Z' is a closed subvariety, it suffices to show that for all u∈_((n-k)^s-1)∖ Z',
IH_u^i(_((n-k)^s-1);) =
if i=0
0 if i≠ 0.
Now, by Lemma <ref>, u∈_((n-k)^s-1)∖ Z' if and only if u∈_μ for some μ such that μ_1≤ n-k and ℓ(μ) ≤ s, which is equivalent to μ⊆ ((n-k)^s). By (<ref>), for u∈_μ we have
∑_k q^k (IH_u^2k(_((n-k)^s-1);)) = q^-𝐧((n-k)^s-1)K_((n-k)^s-1),μ(q).
But for μ⊢ (n-k)(s-1) such that μ⊆ ((n-k)^s), K_((n-k)^s-1),μ(q) = q^𝐧((n-k)^s-1) by Lemma <ref> below. Thus, the right-hand side of (<ref>) is 1. Thus, _((n-k)^s-1)∖ Z is rationally smooth, and _y is rationally smooth at all points of _x^y.
Suppose μ⊢ ab for two positive integers a and b such that μ⊆ (a^{b+1}). Then K_{(a^b),μ}(q) = q^{𝐧((a^b))}.
There is a unique semistandard Young tableau T with content μ and shape (a^b). Indeed, if T is such a semistandard Young tableau, then since its entries lie in {1,…,b+1} (as ℓ(μ) ≤ b+1) and each column of T has b cells, exactly one letter from 1,…, b+1 is missing from each column of T. Since T has content μ, it has a-μ_i columns that do not contain the letter i, and there is only one way of arranging these columns into a semistandard Young tableau T (the missing letters of the columns must weakly decrease from left to right).
We now compute the cocharge of T. We claim that the cocharge subscript of each letter equals one less than its row index. Each letter i is either in row i or in row i-1 by construction; call the entries that lie in their own row the left entries of T and the others the right entries, and notice that the left entries are separated from the right entries by a down-and-right path. It follows that each cocharge subword consists of left entries 1,2,…,i-1 in their respective rows for some i, followed by a sequence of right entries i,i+1,…,b+1 in rows i-1,…,b respectively. Because the cocharge subword only wraps around at the jump from left to right entries, each subscript equals one less than the row of the entry at every step.
Finally, it follows that the cocharge of T is a·\binom{b}{2} = 𝐧((a^b)), and the result follows.
For a=5, b=3, and μ=(4,4,4,3), the tableau T in the proof of Lemma <ref> is

  3 3 4 4 4
  2 2 2 3 3
  1 1 1 1 2

(rows listed top to bottom are rows 3, 2, 1). The left entries are the letters i lying in row i, namely the four 1's, the first three 2's, and the first two 3's; the remaining entries are right entries. The cocharge subscript of each letter is one less than its row, and the cocharge is 5·\binom{3}{2}=15.
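As a small numerical check of the lemma (illustrative only), the statistic 𝐧(ν) = ∑_i (i-1)ν_i recalled in the next section gives exactly the cocharge value a·\binom{b}{2} claimed above:

```python
from math import comb

def n_stat(nu):
    """The statistic n(nu) = sum_i (i-1) * nu_i, with rows indexed from 1."""
    return sum(i * part for i, part in enumerate(nu))

# Example above: a = 5, b = 3, so the unique tableau has cocharge 5*C(3,2) = n((5,5,5)) = 15.
a, b = 5, 3
assert a * comb(b, 2) == n_stat([a] * b) == 15
```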
We now can prove Theorem <ref>, which we restate here.
We have
H_{n,λ,s}(x;q) = q^{-\binom{s-1}{2}(n-k)}\, s_{((n-k)^{s-1})}^{⊥} H_Λ(x;q)
Observe that for P the parabolic of type (1^n,K-n), then W^P≅ S_n. Combining Theorem <ref>, Lemma <ref>, and Lemma <ref>, we have an isomorphism of graded S_n-modules (where S_n acts trivially on the second tensor factor)
H^i(Y_n,λ,s;) ⊗ H^2d_y((L)_t;) ≅ H^i+2d_y(_x)^ρ(y,1).
Recall that d_y = _ℂ(η^-1(y)) = _ℂ((L)_t). Note that in this case, (L)_t ≅_t_n+1.
Since (t_n+1) = ((n-k)^s-1), then (H^2d_y((L)_t;)) = (V_((n-k)^s-1)).
We have ρ(y,1) ≅ V_(1)⊗⋯⊗ V_(1)⊗ V_(s-1)^n-k as W_L = S_1×⋯× S_1× S_(s-1)(n-k)-modules. Thus, since d_y=𝐧((n-k)^s-1)=s-12(n-k), we have
(V_((n-k)^s-1)) (H^*(Y_n,,s;)) = q^s-12(n-k)(V_Λ^V_((n-k)^s-1)).
Theorem <ref> then follows by rearranging and applying Lemma <ref>.
In the proof of Theorem <ref> above, we have implicitly used the fact that the S_n action on H^*(Y_n,,s;) here is the same as the one in <cit.>. The action defined in <cit.> was by permutations of the first Chern classes of the tautological quotient line bundles F_i/F_i-1 for i≤ n. The fact that this matches the action of W^P on H^*(Y_n,,s;) follows from the fact that it is compatible with the W=S_K action defined by Borho and MacPherson on the Springer fiber H^*(_x;), which in type A is well known to be the same as the action of S_K by permutations of the first Chern classes of the tautological line bundles, see <cit.>.
As an immediate corollary of Theorem <ref>, we see how the formula in <cit.> for the top cohomology of Y_n,,s follows immediately from the skewing formula.
We have an isomorphism of S_n modules,
H^top(Y_n,,s;) ≅ S^Λ/((n-k)^s-1),
where S^Λ/((n-k)^s-1) is the skew Specht module corresponding to the skew partition Λ/((n-k)^s-1).
By Theorem <ref>, the top degree of H_n,,s(x;q) = (H^*(Y_n,,s;)) is
s_((n-k)^s-1)^⊥ s_Λ = s_Λ/((n-k)^s-1),
which is the graded Frobenius character of S^Λ/((n-k)^s-1).
§ PROOFS OF THE COCHARGE AND CHARGE FORMULAS
In this section, we use Theorem <ref> to prove Theorems <ref> and <ref>.
§.§ Proof of Theorem <ref>
We now deduce the cocharge formula (Theorem <ref>) from Theorem <ref>.
In particular, we wish to show that
s_((n-k)^s-1)^⊥H_Λ(x;q)=∑_T∈𝒯^+(n,,s) q^(T)s_^+(T).
Recall also the following skewing formula for applying an adjoint Schur operator to another Schur function:
s_λ^⊥ s_μ=s_μ/λ
We will prove the following more general lemma, from which Equation (<ref>) immediately follows. Define a generalized battery-powered tableau with (not necessarily rectangular) battery shape ρ and content μ to be a pair (D,B) of semistandard Young tableaux such that (B)=ρ and the total content of D∪ B is μ. Write 𝒯^+(ρ,μ) to be the set of all such pairs T=(D,B), and write ^+(T)=(D).
We have
s_ρ^⊥H_μ(x;q)=∑_T∈𝒯^+(ρ,μ)q^(T)s_^+(T)
where (T)=(D· B)=(D∪ B) where D∪ B is formed by placing B down-and-right of D.
Let (μ) be the set of semistandard Young tableaux of content μ (of any shape), and let (ν,μ) be the set of semistandard Young tableaux of shape ν and content μ. From the Lascoux-Schützenberger formula (<ref>) for Hall-Littlewood polynomials, the left hand side above expands as
s_ρ^⊥∑_T∈(μ)q^(T)s_(T) = ∑_ν∑_T∈(ν,μ)q^(T)s_ν/ρ
= ∑_ν∑_T∈(ν,μ)∑_η q^(T)c^ν_η,ρ s_η
where c^ν_η,ρ is the Littlewood-Richardson coefficient. For any fixed SSYT T of shape ν, we may interpret c^ν_η,ρ as the number of pairs (D,B) of semistandard Young tableaux of shapes η and ρ respectively such that D· B=T (see <cit.>). Since cocharge is invariant under jeu de taquin and RSK insertion, we have (D∪ B)=(D· B)=(T). Thus the sum above becomes
∑_ν∑_T∈(ν,μ)∑_D· B=T
(B)=ρq^(D∪ B) s_(D) =∑_(D,B)∈𝒯^+(ρ,μ)q^(D∪ B)s_(D)
=∑_T∈𝒯^+(ρ,μ)q^(T)s_^+(T)
as desired.
From line (<ref>) above, setting μ=Λ_n,(1^k),k and ρ=((n-k)^k-1), we can also deduce
⟨ s_μ, ω∘(Δ'_{e_{k-1}}e_n)|_{t=0}⟩ = \frac{1}{q^{\binom{k-1}{2}(n-k)}}∑_{ν⊢ k(n-k+1)} c_{μ,((n-k)^{k-1})}^{ν} K_{ν,((n-k+1)^k)}(q).
Corollary <ref> follows immediately by applying the operator.
§.§ Proof of Theorem <ref>
We now deduce the charge version of the main result, Theorem <ref>, from Theorem <ref>. For any partition ν, recall that 𝐧(ν)=∑_i (i-1)ν_i.
The maximum value of (T) for T∈𝒯^+(n,λ,s) is
𝐧(λ)+\binom{s}{2}(n-k).
Moreover, there is precisely one battery-powered tableau T with this value of for each device shape ν with ℓ(ν)≤ s and where ν/λ is a horizontal strip (and no tableaux with this value of for other device shapes).
The maximal cocharge among all words of a given content Λ occurs when each cocharge subword has its letters appearing in order from right to left, and in that case the cocharge is 𝐧(Λ). For this to occur, the battery columns must be filled with 1,2,…,s-1 from bottom to top, for otherwise some entry of the battery B would be to the right of the previous element in its cocharge subword. The subwords starting at the 1's in the bottom of B will then contain 1,2,…,s from right to left, with the s being in the device.
For the cocharge subwords starting at 1's in the device D to be in right to left order, D must contain the unique tableau D' of content λ and shape λ (with λ_i entries i in the i-th row for all i). So, D is formed by adding a horizontal strip of length n-k labeled by s to D' such that the result is semistandard. Thus there is one tableau of maximal cocharge for each shape of height ≤ s formed by adding a horizontal strip to λ.
For such pairs (D,B), we have (D,B)=𝐧(Λ)=𝐧(λ)+s2(n-k), as desired.
Dividing out by the factor q^s-12(n-k), we obtain the following corollary.
The top q-degree of the polynomial on the right hand side of Theorem <ref> is d:=𝐧(λ)+(s-1)(n-k), and the coefficient of q^d is ∑ s_ν where the sum ranges over all partitions ν of n with ℓ(ν)≤ s and ν/λ a horizontal strip.
The value d matches with the formula given for the top degree of _q(R_n,,s) in <cit.>. In <cit.>, it was shown that the coefficient of q^d is the skew Schur function s_Λ/((n-k)^s-1). A straightforward application of the Littlewood-Richardson rule shows that this agrees with our formula in Corollary <ref>, and we refer to <cit.> for details.
Finally, we show that Theorems <ref> and <ref> are equivalent. Taking the q-reversal of both sides of Theorem <ref>, we have
(H_{n,λ,s}) = ∑_{T∈𝒯^+(n,λ,s)} q^{𝐧(λ)+(n-k)(s-1)-(T)+\binom{s-1}{2}(n-k)} s_{^+(T)}.
Then the exponent 𝐧(λ)+(n-k)(s-1)-(T)+\binom{s-1}{2}(n-k) is equal to 𝐧(Λ)-(T), which is simply the charge of T by definition. This gives Theorem <ref>.
§ THE CASE
In this section, we give a second proof of Theorem <ref> in the case when s=2 using combinatorial bijections and previously known formulas for H_n,,s. We start by recalling the Hall-Littlewood expansion of H_n,,s.
§.§ Hall-Littlewood expansion
In <cit.>, it is shown that H_n,,s has the following expansion in terms of Hall-Littlewood polynomials.
H_n,λ,s(X;q)=(∑_μ⊢ n,
μ⊃λ,
ℓ(μ)≤ s q^𝐧(μ/λ)∑_α=(α_1,…,α_s) n,
α⊃λ, (α)=μ q^(α) H_μ(x;q)),
where α = (α_1,…,α_s) ⊨ n indicates that α is a weak composition of n with s parts, 𝐧(μ/λ)=∑_i \binom{μ_i'-λ_i'}{2}, and coinv(α) is the number of pairs i<j with α_i<α_j.
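For concreteness, the two statistics appearing in this expansion can be computed as follows (an illustrative Python sketch; conjugate, coinv and n_skew are our own helper names). The values match the running example of the s=2 subsection below.

```python
from math import comb

def coinv(alpha):
    """Number of pairs i<j with alpha_i < alpha_j."""
    return sum(1 for i in range(len(alpha)) for j in range(i + 1, len(alpha))
               if alpha[i] < alpha[j])

def conjugate(mu):
    """Conjugate (transpose) partition of mu."""
    mu = sorted(mu, reverse=True)
    return [sum(1 for part in mu if part > c) for c in range(mu[0])] if mu and mu[0] else []

def n_skew(mu, lam):
    """n(mu/lambda) = sum_i C(mu'_i - lambda'_i, 2), where mu = sort(alpha)."""
    mup, lamp = conjugate(mu), conjugate(lam)
    lamp += [0] * (len(mup) - len(lamp))
    return sum(comb(mup[i] - lamp[i], 2) for i in range(len(mup)))

# n = 11, lambda = (3,1), alpha = (5,6): coinv(alpha) = 1 and n(alpha/lambda) = 2
alpha, lam = (5, 6), (3, 1)
assert coinv(alpha) == 1
assert n_skew(sorted(alpha, reverse=True), lam) == 2
```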
Note that if α is a composition such that α⊃, then since λ is a partition we also have (α)⊃λ. Thus we can rearrange the summation above as
H_n,λ,s(X;q)=(∑_α=(α_1,…,α_s) n,
α⊃λ q^𝐧(α/λ)+(α) H_(α)(x;q))
where the quantity 𝐧(α/λ) above is defined to be 𝐧(μ/λ) where μ=(α).
Can be rewritten as
H_n,λ,s(X;q)=(∑_μ⊢ n,
μ⊃λ,
ℓ(μ)≤ s q^𝐧(μ/λ)∏_iμ_i'-λ_i+1'μ_i'-μ_i+1'_q H_μ(x;q)),
where μ'_0 s.
Substituting (<ref>) into (<ref>) yields
(H_n,λ,s(X;q))=∑_α=(α_1,…,α_s) n,
α⊃λ∑_T∈((α)) q^𝐧(α/λ)+(α)+(T) s_(T).
Thus, to prove Theorem <ref> it suffices to show that
∑_T∈𝒯^+(n,,s) q^(T)s_^+(T)=∑_α=(α_1,…,α_s) n,
α⊃λ∑_U∈((α)) q^𝐧(α/λ)+(α)+(U) s_(U).
In particular, it suffices to find a shape-preserving bijection from 𝒯^+(n,,s) to
𝒜(n,λ,s):={(α,U)| α=(α_1,…,α_s) n, α⊃λ, U∈((α))}
such that, if T∈𝒯^+(n,,s) maps to (α,U)∈𝒜(n,λ,s), then (T)=(U)+𝐧(α/λ)+(α). In the next subsection, we find such a bijection in the case s=2.
§.§ Combinatorial proof for
For the remainder of this section, let λ=(λ_1,λ_2) be a partition of size k with λ_1≥λ_2≥ 0, and let Alpha(n,λ,2) be the set of all (weak) compositions α = (α_1,α_2) of size n such that α⊃λ.
For α∈Alpha(n,λ,2), define φ(α) to be the composition formed by taking 𝐧(α/λ)+(α) boxes from the bottom row of (α) and moving them to the top row.
As a running example, let n=11, λ = (3,1), s=2, and α = (5,6). Then 𝐧(α/λ) + coinv(α) = 2+1=3. Since the sorted partition sort(α) = (6,5), we get φ(α) = (3,8).
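A minimal Python sketch of φ (illustrative only; it uses the two-row description of 𝐧(α/λ) from the proof below, namely that it counts the columns beyond column λ_1 containing two squares) reproduces this example:

```python
def phi(alpha, lam):
    """The map phi for s=2: move n(alpha/lam) + coinv(alpha) boxes from the bottom
    (longer) row of sort(alpha) to the top row; returns (bottom, top)."""
    a1, a2 = alpha
    big, small = max(alpha), min(alpha)
    coinv = 1 if a1 < a2 else 0
    # n(alpha/lam) = number of columns strictly to the right of column lam_1 with two squares
    n_skew = max(0, small - lam[0])
    m = n_skew + coinv
    return (big - m, small + m)

# Running example: lam = (3,1), alpha = (5,6): n(alpha/lam) + coinv(alpha) = 2 + 1 = 3
assert phi((5, 6), (3, 1)) == (3, 8)
```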
The map φ on compositions is a bijection from Alpha(n,λ,2) to itself.
We first show that if α∈Alpha(n,λ,2) then φ(α)∈Alpha(n,λ,2). Indeed, we have coinv(α)=0 or 1 according to whether α_1≥α_2 or α_1<α_2, and 𝐧(α/λ) is the number of columns of α to the right of column λ_1 containing two squares. Thus 𝐧(α/λ)+coinv(α) is at most max(α_1,α_2)-λ_1. Since φ(α) is formed by moving 𝐧(α/λ)+coinv(α) boxes from sort(α)_1=max(α_1,α_2) to sort(α)_2, we have φ(α)_1≥λ_1, and so φ(α) still contains λ.
We now show that φ:Alpha(n,λ,2)→Alpha(n,λ,2) is surjective (and hence bijective). Let β∈Alpha(n,λ,2). If β_2<λ_1, then φ(β)=β. Otherwise, let d=β_2-λ_1.
If d is even, say d=2r, then set α=(n-(λ_1+r),λ_1+r). Notice that the first λ_1 columns of β contain 2λ_1 squares, and there are at least 2r squares in the remaining columns, so n≥ 2λ_1+2r. Thus n-λ_1-r≥λ_1+r, and so α is a partition, with (α)=0. The same inequality also shows that α⊃λ. Thus 𝐧(α/λ)=r and it follows that φ(α)=β.
If d is odd, say d=2r+1, then set α=(λ_1+r,n-λ_1-r). The same calculation as above shows that α⊃λ and α is not a partition, so (α)=1. Furthermore, we again have 𝐧(α/λ)=r, so φ(α)=β.
We now construct a bijection from 𝒜(n,λ,2) to 𝒯^+(n,λ,s) as follows.
Let (α,U)∈𝒜(n,λ,2). Define ψ(α,U) to be the tableau formed by changing 1's to 2's in the bottom row of U, starting with the rightmost 1 and moving leftwards, until we obtain a tableau of content φ(α).
Continuing our running example with α = (5,6), let U be the following tableau, with (U)=2; then ψ(α, U) is as below:

  U:
  2 2 2
  1 1 1 1 1 1 2 2

  ψ(α,U):
  2 2 2
  1 1 1 2 2 2 2 2
The tableau ψ(α,U) is not necessarily semistandard; it may have columns containing two 2's.
Let (α,U)∈𝒜(n,λ,2). Define Φ(α,U) as follows. First, compute ψ(α,U), and append 1's to the left of the bottom row and 2's to the left of the top row until the resulting tableau S has content Λ (and then left-justifying). Then, unbump a horizontal strip of size n-k from S from right to left to form a tableau T of the same shape as U, and an unbumped row of length n-k that acts as the battery of T. We set Φ(α,U)=T.
For our running example, we have Λ_{n,λ,s} = (10,8), and Φ(α,U) is the battery-powered tableau whose device is

  2 2 2
  1 1 1 1 1 1 1 1

with battery row 1 1 2 2 2 2 2, so that (Φ(α,U)) = 5 = (U) + 𝐧(α/λ) + coinv(α).
The tableau T=Φ(α,U) is always well defined and in 𝒯^+(n,λ,2).
We first note that the intermediate tableau S in Definition <ref> is semistandard, even though ψ(α,U) does not have to be; since S has partition content Λ and all of the 1's are in the bottom row, this follows immediately.
Now, since the shape of S contains the shape of U, we can unbump the appropriate horizontal strip from right to left to form T. The resulting letters that were bumped out are in weakly decreasing order from right to left, and therefore form a valid 1× (n-k) battery for T. Finally, since S has content Λ by default, the conclusion follows.
If T=Φ(α,U) then (T)=(U)+𝐧(α/λ)+(α).
Note that (U) is the number of 2's on the bottom row of U. Therefore, the charge of the tableau S formed from U in Definition <ref> is equal to
(S)=(U)+𝐧(α/λ)+(α)
since this is the total number of 2's on the bottom row. When we unbump, the charge of the tableau T union with the battery is the same as (S) since charge is invariant under Knuth equivalence. Thus (T)=(S) and the conclusion follows.
The map Φ is a bijection from 𝒜(n,λ,2) to 𝒯^+(n,λ,2).
We reverse Φ as follows. Given a tableau T∈𝒯^+(n,λ,2), insert its battery to form a tableau S. Then remove 1's from the bottom row and 2's from the top row so that the remaining letters in each row, when left justified, forms a (not necessarily standard) tableau U' of shape ^+(T).
Now, if β is the content of U', we change 2's to 1's in the bottom row to form a tableau U of content α=φ^-1(β). The pair (α,U) is our output.
Once we show that this process is well defined, it is clear that it reverses each step of Φ. The insertion process to form S is known to be well defined. For the next step, to show there are enough 1's and 2's to remove from S to form a tableau U' of shape ^+(T), certainly the top row is long enough since it is at least as long as the top row of T. For the bottom row, since the battery that we inserted had length n-k, we have to remove at most n-k squares containing 1 from S, and since Λ=(n-k+λ_1,n-k+λ_2), there are at least n-k such squares.
For the last step, by Proposition <ref> it suffices to show that β∈Alpha(n,λ,2), that is, that the composition β contains λ. Since there are n-k+λ_1 squares labeled 1 in S and we remove at most n-k of them to form U', we have that β_1, the number of 1's in U', is at least λ_1. Similarly β_2≥λ_2, and we are done.
§ THE COEFFICIENT IN THE CASE
We now consider the setting in which λ=(1^k) and s=k, so that R_n,,s=R_n,k and H_n,λ,s = (R_n,k), and give a direct combinatorial proof of Theorem <ref> for the coefficient of s_(n) in this setting. We recall the positive Schur expansion of (R_n,k) given in <cit.>. An ordered set partition, or OSP, of n is a partition of {1,2,…,n} into a disjoint union of subsets called blocks, along with an ordering of the blocks from left to right. For instance, (45|367|28|19) denotes an OSP of 9.
A descent of a permutation π is an index d such that π_d>π_d+1, and the major index of π is the sum of its descents. The minimaj of an OSP, first introduced in the context of the Delta conjecture in <cit.>, is the major index of the minimaj word formed by ordering each block's entries from least to greatest and then reading the letters in the OSP from left to right. For instance, the associated word to (45|367|28|19) is 453672819, and it has descents in positions 2,5,7, so the minimaj is 2+5+7=14.
The reading word rw(P) of an OSP P (different from its minimaj word) is formed by reading the smallest entry of each block from right to left, and then the remaining entries from left to right. For instance, the reading word of (45|367|28|19) is 123456789.
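Both statistics are easy to compute directly. Here is an illustrative Python sketch (within each block we read the remaining entries in increasing order, a convention consistent with the example above) that reproduces minimaj = 14 and reading word 123456789 for (45|367|28|19):

```python
def minimaj(osp):
    """Major index of the minimaj word: sort each block, concatenate, sum descent positions."""
    word = [x for block in osp for x in sorted(block)]
    return sum(i + 1 for i in range(len(word) - 1) if word[i] > word[i + 1])

def reading_word(osp):
    """Smallest entry of each block read right-to-left, then the remaining entries left-to-right."""
    mins = [min(block) for block in osp]
    rest = [x for block in osp for x in sorted(block)[1:]]
    return list(reversed(mins)) + rest

osp = [{4, 5}, {3, 6, 7}, {2, 8}, {1, 9}]        # (45|367|28|19)
assert minimaj(osp) == 14                         # descents of 453672819 at positions 2, 5, 7
assert reading_word(osp) == [1, 2, 3, 4, 5, 6, 7, 8, 9]
```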
It was shown in <cit.> (using the work of <cit.>) that there is a more general set of ordered multiset partitions into k blocks, 𝒪𝒫_n,k, and a minimaj statistic on them such that
ω∘( R_n,k)=∑_π∈𝒪𝒫_n,kq^minimaj(π)x^wt(π)
where wt(π) is the tuple whose i-th term is the number of i's in π. In <cit.>, a crystal structure is given on ordered multiset partitions that is compatible with the minimaj statistic, thereby grouping the terms of the above monomial expansion into a Schur expansion:
( R_n,k)=ω∘∑_π∈𝒪𝒫_n,k
e_i(π)=0 ∀ iq^minimaj(π)s_wt(π)=∑_π∈𝒪𝒫_n,k
e_i(π)=0 ∀ iq^minimaj(π)s_wt(π)^∗,
where e_i are the raising operators of the crystal, which we define below.
In particular, the coefficient of s_(n) in the above expansion (taking into account the conjugation via ω) is equal to
∑_P∈𝒪𝒫_n,k, wt(P)=(1^n)
e_i(P)=0 ∀ i q^minimaj(P)=∑_P∈OSP(n,k)
e_i(P)=0 ∀ i q^minimaj(P)
where OSP(n,k) is the set of ordered set partitions with entries 1,2,…,n and k blocks.
The crystal raising operators e_i were defined in <cit.> via the reading word described above. In particular, e_i(P)=0 if and only if, in the reading word, the number of i's is always greater than or equal to the number of i+1's as we read the word from left to right. Thus if P has content (1^n), we have e_i(P)=0 for all i if and only if the reading word of P is 123⋯ n. Thus the coefficient of s_(n) in ((R_n,k)) is
∑_P∈OSP(n,k)
rw(P)=123⋯ n q^minimaj(P).
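This coefficient can be computed by brute force for small n and k. The sketch below (illustrative; it enumerates ordered set partitions as surjections onto block indices and re-implements the minimaj and reading-word conventions of the previous sketch) returns, for example, q^3 + q^4 + q^5 for n=4, k=3.

```python
from itertools import product
from collections import Counter

def minimaj(osp):
    word = [x for block in osp for x in sorted(block)]
    return sum(i + 1 for i in range(len(word) - 1) if word[i] > word[i + 1])

def reading_word(osp):
    mins = [min(block) for block in osp]
    rest = [x for block in osp for x in sorted(block)[1:]]
    return list(reversed(mins)) + rest

def ordered_set_partitions(n, k):
    """All placements of 1..n into k ordered, nonempty blocks."""
    for assign in product(range(k), repeat=n):
        if len(set(assign)) == k:
            yield [{i + 1 for i in range(n) if assign[i] == b} for b in range(k)]

def s_n_coefficient(n, k):
    """Multiplicities of q^d in the claimed s_(n) coefficient: OSPs with reading word 12...n."""
    identity = list(range(1, n + 1))
    return Counter(minimaj(P) for P in ordered_set_partitions(n, k)
                   if reading_word(P) == identity)

print(sorted(s_n_coefficient(4, 3).items()))   # [(3, 1), (4, 1), (5, 1)], i.e. q^3 + q^4 + q^5
```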
On the other hand, the coefficient of s_(n) in the charge formula of Theorem <ref> is
∑_T∈𝒯^+(n,(1^k),k)
^+(T)=(n) q^(T).
To prove that (<ref>) and (<ref>) are equal via combinatorial methods, we first prove a lemma about charge, and then we define a bijection f from the set of tableaux T appearing in the sum (<ref>) to the OSPs in (<ref>) as follows.
Given T∈𝒯^+(n,(1^k),k) such that ^+(T) = (n), the charge labels of the battery of T are always either 0 or 1, with the 1 labels being precisely on the entries of the battery that are larger than their row index. Furthermore, all charge labels in the device are 0 except in the final charge word which is 123⋯ k in order.
We proceed by induction on n-k. In the base case when n-k=0, the battery is empty, so T has content Λ = (1^n) in this case, so there is only one charge word which consists of the entire row labeled 12⋯ n in order (where n=k), so the base case holds.
Letting n-k>0 and T∈𝒯^+(n,(1^k),k) such that ^+(T) = (n), let i be minimal such that i does not appear in row i of the battery, or i=k if such an i does not exist. Then since ^+(T) = (n), the first charge word of T consists of the last j entry of row j of the battery for each j < i, together with the right-most i in the device, and the right-most j of the battery in row j-1 for i<j≤ k. Thus, the charge labels for j≤ i are 0 and for j>i they are all 1.
Deleting i from the device and left-justifying, and deleting the other entries of the first charge word from the battery and left justifying each row of the battery, we get a battery-powered tableau T'∈𝒯^+(n-1,(1^k),k) with ^+(T') = (n-1). The charge labels for the entries of T' are the same as the charge labels of the corresponding cells of T. By our inductive hypothesis, we are done.
Given T∈𝒯^+(n,(1^k),k) with shape (n), define f(T) to be the ordered set partition constructed as follows. Let f(T) have exactly k blocks B_1,…,B_k in that order, which initially contain k,k-1,k-2,…,1 respectively. Then let m_i be the number of i's in the device of T, and place the numbers k+1,k+2,…,n into the blocks from left to right in the unique way so that each block B_i has size m_i for all i. The resulting OSP is f(T).
An example of f(T) is depicted in Figure <ref>.
The assignment T↦ f(T) is a bijection from the set of all tableaux T∈𝒯^+(n,(1^k),k) such that ^+(T) = (n) to the set of P∈OSP(n) such that rw(P) = 123⋯ n. The map f is weight preserving, meaning that (T) = minimaj(f(T)).
To show f is well defined, observe that Λ_n,(1^k),k=((n-k+1)^k), and so T has exactly n-k+1 copies of each letter from 1 through k. Since the battery of T has n-k columns, then there must be at least one of each i≤ n in the device of T. In the notation of Definition <ref>, we thus have m_i≥ 1 for all i, so f(T) is a well-defined OSP. By its construction, the reading word of f(T) is 123⋯ n, and the process is reversible since there is a unique way to fill the one-row device and the battery for any sequence of block sizes m_i. Thus f is a bijection.
We now prove that f is weight-preserving, sending charge to minimaj. Indeed, by Lemma <ref>, the final charge subword, which is 123⋯ k in order, has charge \binom{k}{2}. This is the minimaj value formed by placing k,k-1,…,1 in the blocks from left to right. For each i in the device of T that is not in the final charge subword, the charge labels of the i+1,…, k in the charge subword of i are all 1, so i contributes k-i to charge. In terms of minimaj, adding an extra element to B_i increases the minimaj contributions corresponding to blocks B_{i+1},…, B_k, and thus results in an increase of k-i. Thus, placing the remaining letters in the blocks increases the minimaj by precisely the amount of charge stored in the battery.
§ SKEWING FORMULAS FOR THE DELTA CONJECTURE AT LOW DEGREES
It is natural to ask whether our skewing formula for the Delta Conjecture at t=0 extends to the full Delta Conjecture symmetric function. In this section, we give several conjectures of such expansions below. Each formula may be expanded in order to obtain a positive Schur expansion.
In the case n=4, k=3, the skewing formula generalizes to the following skewing formula for the full Delta Conjecture symmetric function.
ωΔ'_e_2 e_4 = s_(1,1)^⊥(H_(2,2,2)(x;q) + (t(1+q)+t^2)H_(3,2,1)(x;q)
+ (t^2(1+q) + 2t^3
+ t^4)H_(4,2)(x;q) + (t^3 + t^4 + t^5)H_(5,1)(x;q)),
which in turn gives a Schur-positive expansion for Δ'_e_2e_4 after expanding each Hall-Littlewood polynomial in terms of charge.
Similarly, for n=5 and k=3, we have
ωΔ'_e_2e_5 = s_(2,2)^⊥( H_(3,3,3) + t(1+q)H_(4,3,2) + (t^2(1+q) + t^3 + t^4)H_(5,3,1) + t^3H_(4,4,1)
+ (t^3+t^4+t^5)H_(5,4) + t^3H_(6,2,1) + (t^3+2t^4+2t^5+t^6)H_(6,3)
+(t^4+2t^5+t^6+t^7)H_(7,2)).
Alternatively, the terms t^3(H_(6,2,1) + H_6,3) may be replaced with t^3((q+2)H_(6,3) + H_(7,2)).
In general, ωΔ'_{e_{k-1}} e_n does not have an expansion as a single s_λ^⊥ applied to a positive sum of Hall-Littlewood polynomials; for instance, the t^4 coefficient in ωΔ'_{e_3}e_5 is not Hall-Littlewood positive (and it is known that s_λ^⊥ applied to a Hall-Littlewood polynomial is Hall-Littlewood positive). That being said, in the conjectures below we find some formulas of this form for the coefficients of low-degree powers of t.
Let [n]_q = 1+q + ⋯ + q^{n-1}, [n]_q! = [n]_q[n-1]_q⋯[1]_q, and \binom{n}{m}_q = \frac{[n]_q!}{[m]_q!\,[n-m]_q!} be the usual q-analogues.
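For reference, a short sympy sketch of these q-analogues (the helper names are ours):

```python
from sympy import symbols, cancel, expand

q = symbols('q')

def q_int(n):
    """[n]_q = 1 + q + ... + q^(n-1)."""
    return sum(q ** j for j in range(n))

def q_factorial(n):
    result = 1
    for j in range(1, n + 1):
        result *= q_int(j)
    return result

def q_binomial(n, m):
    """Gaussian binomial coefficient [n]_q! / ([m]_q! [n-m]_q!)."""
    return expand(cancel(q_factorial(n) / (q_factorial(m) * q_factorial(n - m))))

# e.g. [4 choose 2]_q = 1 + q + 2q^2 + q^3 + q^4
assert expand(q_binomial(4, 2) - (1 + q + 2 * q**2 + q**3 + q**4)) == 0
```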
The coefficient of t^1 in ωΔ'_e_k-1e_n (as a polynomial in t with coefficients in symmetric functions over [q]) is
[k-1]_q · s_((n-k)^k-1)^⊥ H_(n-k+2,(n-k+1)^k-2,n-k)(x;q)
where (n-k+2,(n-k+1)^k-2,n-k) is shorthand for the partition (n-k+2,n-k+1,n-k+1,… n-k+1,n-k) with k-2 copies of the part n-k+1.
The t^2 coefficient of ωΔ'_e_k-1e_n is
s_((n-k)^k-1)^⊥( [k-2]_q H_(n-k+2,(n-k+1)^k-2,n-k)(x;q)
+ k-22_q H_((n-k+2)^2,(n-k+1)^k-4,(n-k)^2)(x;q)
+ [k-1]_q H_(n-k+3,(n-k+1)^k-2,n-k-1)(x;q)).
We have checked both Conjectures <ref> and <ref> computationally up to n=8 for all k≤ n.
In the case of k=2, we have the following formula for the full Delta Conjecture symmetric function.
Before we state the formula, we recall the Littlewood-Richardson rule for skew Schur functions in the case of two-row partitions. Given = (_1,_2) and μ = (μ_1,μ_2) partitions,
s_μ^⊥ s_ = s_/μ = ∑_ν⊢ |/μ| c_μ,ν^ s_ν(x)
where c_μ,ν^ is the number of semistandard Young tableaux T of skew shape /μ with content ν whose reverse reading word is Yamanouchi, meaning that if one reads the labels of T in reverse reading order, there are never more 2s than 1s up to any given point.
For k=2, we have
ωΔ'_e_1e_n = h_n-2^⊥∑_i=0^n-1 H_(n-1+i,n-1-i)(x;q)t^i.
By <cit.>,
ωΔ'_e_1e_n = ωΔ_e_1e_n - ω e_n = -s_(n) + ∑_i=0^⌊ n/2⌋ s_(n-i,i)(x)∑_p=i^n-i [p]_q,t,
where [p]_{q,t} = ∑_{j=0}^{p-1} q^j t^{p-1-j}.
Notice that the Hall-Littlewood term H_(n-1+i,n-1-i)(x;q) on the right hand side of (<ref>) expands as ∑_j=0^n-1-i q^j s_n-1+i+j,n-1-i-j in the Schur basis, by examining the charge expansion version of (<ref>) in this two-row case. Starting from the right-hand side of (<ref>),
h_n-2^⊥∑_i=0^n-1 H_(n-1+i,n-1-i)(x;q)t^i = h_n-2^⊥(∑_i=0^n-1∑_j=0^n-1-i t^iq^j s_(n-1+i+j,n-1-i-j))
=s_n-2^⊥(∑_ℓ=0^n-1 s_(n-1+ℓ,n-1-ℓ)(q^ℓ+q^ℓ-1t+⋯+t^ℓ))
= s_n-2^⊥(∑_ℓ=0^n-1 s_(n-1+ℓ,n-1-ℓ)[ℓ+1]_q,t).
We now examine the coefficient of s_(n-i,i) in (<ref>). Applying the Littlewood-Richardson rule to compute s_n-2^⊥ s_(n-1+ℓ,n-1-ℓ) over all ℓ, we have that [ℓ+1]_q appears once in the coefficient of s_(n-i,i) if and only if there exists a Littlewood-Richardson tableau of skew shape (n-1+ℓ,n-1-ℓ)/(n-2) and content (n-i,i) (and note that since we are in the two-row case, there can only be one such tableau if it exists).
There are two inequalities that govern the existence of such a tableau in the case when i≥ 1 and hence there is at least one 2. First, the number of 2s cannot exceed the number of entries in the bottom row (which must all be 1) by the Yamanouchi condition, so we have i≤ (n-1+ℓ)-(n-2)=ℓ+1. Second, the number of 2s naturally cannot exceed the size of the top row, and so i≤ n-1-ℓ. Solving these two inequalities for ℓ, we find i-1≤ℓ≤ n-1-i. Finally, all such fillings are semistandard, since the only shape that has a vertical domino is when ℓ=0, and it has a unique vertical domino, so having at least one 2 (due to our assumption that i>1 in this case) guarantees the existence of the desired Littlewood-Richardson tableau. It follows that the coefficient of s_(n-i,i) is equal to ∑_ℓ=i-1^n-1-i[ℓ+1]_q,t=∑_p=i^n-i[p]_q,t.
Finally, we examine the coefficient of s_(n). The same analysis as above goes through, except in the case that ℓ=0, when the constructed tableau would not be semistandard. Thus the coefficient of s_(n) is -1+∑_p=0^n[p]_q,t, and we are done.
Alternatively, all of the formulas in this section may be written as formulas for Δ'_e_k-1e_n in terms of q-Whittaker polynomials ω H_μ(x;q) by applying ω to both sides and replacing the operator s_((n-k)^k-1)^⊥ with s_((k-1)^n-k)^⊥.
§ NEXT DIRECTIONS
The new results and connections to geometry in this paper open up several natural directions for further investigation.
Are the Δ-Springer varieties the only family of Borho–MacPherson _x^y varieties that have sufficient rational smoothness properties to obtain a simple Schur expansion for the graded Frobenius of their cohomology rings? If not, which others may lead to useful combinatorial formulas?
This paper rests in type A, but the Borho–MacPherson paper is type independent, so we also ask the following.
Is there a natural extension of Δ-Springer varieties to all Lie types that has combinatorial meaning?
On the combinatorics side, since Corollaries <ref> and <ref> give formulas for the t=0 specialization of the Delta Conjecture, and Section <ref> gives conjectures for other t degrees, we also ask whether we can extend these formulas to the full Delta Conjecture symmetric functions for all t degrees.
Can Δ'_e_k-1e_n be obtained by applying a t-analogue of a skewing operator to a Macdonald polynomial, generalizing Corollary <ref>? Does Corollary <ref> have a q,t-analog that gives a Schur expansion or other formula relevant to the Delta Conjecture?
Finally, the proofs in this paper rely heavily on the deep geometric, topological, and representation-theoretic machinery developed by Borho and MacPherson. We would like to see a combinatorial proof along the lines of the Lascoux–Schützenberger proof of the Hall-Littlewood cocharge formula (see <cit.> for a modern exposition of this proof).
Is there a more direct combinatorial or algebraic proof of Theorem <ref>?
In particular, in Section <ref>, we used the known Schur expansion of <cit.> for the R_n,k case in terms of minimaj to give a second proof that the formula of Theorem <ref> holds for the s_(n) coefficient. Is there a generalization of the minimaj Schur expansion to the setting of H_n,λ,s that would allow us to obtain a combinatorial proof for the s_(n) coefficient in the general case?
The companion paper <cit.> will also investigate combinatorial routes towards Theorem <ref> via a new formula in terms of Compositional Shuffle Theorem creation operators <cit.>.
Combining Theorem <ref> and (<ref>), our result gives a formula for the symmetric function s_((n-k)^s-1)^⊥H_Λ as a positive sum of Hall-Littlewood polynomials. Furthermore, by <cit.> there is also a formula for e_j^⊥H_ν for any j and ν as a sum of Hall-Littlewood polynomials.
Is there a combinatorial formula for s_μ^⊥H_ν in terms of Hall-Littlewood polynomials that generalizes the expansion (<ref>) to all μ and ν?
| http://arxiv.org/abs/2307.00722v1 | 20230703030111 | Implications for the Supermassive Black Hole Binaries from the NANOGrav 15-year Data Set | ["Yan-Chen Bi", "Yu-Mei Wu", "Zu-Cheng Chen", "Qing-Guo Huang"] | astro-ph.CO | ["astro-ph.CO", "gr-qc", "hep-ph"] |
[email protected] Key Laboratory of Theoretical Physics,
Institute of Theoretical Physics, Chinese Academy of Sciences,Beijing 100190, ChinaSchool of Physical Sciences, University of Chinese Academy of Sciences, No. 19A Yuquan Road, Beijing 100049, China
[email protected] of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, ChinaSchool of Physical Sciences,
University of Chinese Academy of Sciences,
No. 19A Yuquan Road, Beijing 100049, China
[email protected] of Astronomy, Beijing Normal University, Beijing 100875, ChinaAdvanced Institute of Natural Sciences, Beijing Normal University, Zhuhai 519087, ChinaDepartment of Physics and Synergistic Innovation Center for Quantum Effects and Applications, Hunan Normal University, Changsha, Hunan 410081, China
[email protected] of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, [email protected] Key Laboratory of Theoretical Physics,
Institute of Theoretical Physics, Chinese Academy of Sciences,Beijing 100190, ChinaSchool of Physical Sciences,
University of Chinese Academy of Sciences,
No. 19A Yuquan Road, Beijing 100049, China
NANOGrav, EPTA, PPTA, and CPTA have announced the evidence for a stochastic signal from their latest data sets. Supermassive black hole binaries (SMBHBs) are supposed to be the most promising gravitational-wave (GW) sources of pulsar timing arrays. Assuming an astro-informed formation model, we use the NANOGrav 15-year data set to constrain the gravitational wave background (GWB) from SMBHBs. Our results prefer a large turn-over eccentricity of the SMBHB orbit when GWs begin to dominate the SMBHB evolution.
Furthermore, the GWB spectrum is extrapolated to the space-borne GW detector frequency band by including the inspiral-merger-cutoff phases of SMBHBs and should be detected by LISA, Taiji and TianQin in the near future.
Implications for the Supermassive Black Hole Binaries from the NANOGrav 15-year Data Set
Yan-Chen Bi, Yu-Mei Wu, Zu-Cheng Chen, Qing-Guo Huang
August 1, 2023
========================================================================================
Introduction.
Supermassive black holes (SMBHs), with masses from 10^5 to 10^11 M_⊙, are thought to reside in the centers of nearly all galaxies <cit.>. The details of their formation, evolution and interaction with host galaxies still need to be clarified. In the scenario of galaxy coalescence, the SMBHs hosted in the galactic nuclei sink to the center of the merger remnant due to the interaction with the surrounding environment and eventually form bound supermassive black hole binaries (SMBHBs) <cit.>. The SMBHBs subsequently harden because of dynamical interaction with the dense background <cit.> until gravitational-wave (GW) emission takes over at sub-parsec separation, forming an abundant population of GW sources at around nHz frequencies <cit.>. Moreover, the interaction with the stellar environment and scattering of ambient stars potentially attenuate the GW spectrum at the lower end of the PTA range and tend to increase the binary eccentricity <cit.>, which leaves imprints on the GW emission.
Therefore, detecting or constraining such signals will help to reveal the nature and properties of SMBHBs (for a review see <cit.>). Since galaxies are observed to merge quite frequently <cit.> and the observable Universe encompasses several billions of them, a large cosmological population of SMBHBs is expected to produce a stochastic gravitational-wave background (SGWB) <cit.>. The SGWB is expected to be captured by pulsar timing arrays (PTAs) <cit.>. An SGWB affects the times of arrival (TOAs) of radio pulses from millisecond pulsars (MSPs) in a correlated way <cit.>, and PTAs search for the SGWB using the Hellings & Downs curve. Besides detecting the SGWB at nano-Hz frequencies, the final goal of PTAs is to extract useful astrophysical information from their data.
Recently, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav <cit.>), the European Pulsar Timing Array (EPTA <cit.>), the Parkes Pulsar Timing Array (PPTA <cit.>), and the Chinese Pulsar Timing Array (CPTA <cit.>) have
announced the evidence for a stochastic signal consistent with an SGWB, bringing us a great opportunity to probe the imprints of SMBHBs <cit.>.
SMBHBs are also one of the most promising GW sources for space-borne GW detectors, such as LISA <cit.>, Taiji <cit.> and TianQin <cit.>, in the frequency band around 10^-4–10^-1 Hz. However, the expected event rate of SMBHB sources in space-borne detectors is still unknown. During the SMBHB evolution, the inspiral phase falls into the PTA band while the merger phase falls into the sensitive band of space-borne GW observatories <cit.>. This indicates that PTA and space-borne GW detector observations are complementary <cit.>. With joint observations, the complete description of SMBHBs will be well constrained.
In this letter, we use the NANOGrav 15-year data set to constrain the GWB from SMBHBs, and find that the SMBHB orbits have a large eccentricity when GWs begin to dominate the SMBHB evolution. In addition, we extrapolate the GWB spectrum from 10^-9 Hz to 10^-1 Hz (see Fig. <ref>), providing a promising GW source for the space-borne GW detectors.
SGWB from SMBHB.
The SGWB from SMBHBs is the most promising target of PTAs. In general, an SMBHB is supposed to form following the merger of two galaxies. After the galaxies merge, their central SMBHs sink into the center of the merger remnant and form a bound binary. Initially the binary orbit shrinks due to the exchange of energy and angular momentum with surrounding stars and cold gas. Then GW radiation dominates the evolution beyond the turn-over frequency f_t (corresponding to the initial eccentricity e_0), bringing the binary to final coalescence <cit.>. The SGWB spectrum is composed of the sum of all SMBHBs emitting at a given observed frequency f. The present-day energy density of the SGWB, Ω_GW(f), is given by <cit.>
Ω_GW(f) = \frac{8 π G f}{3 H_0^2 c^2} ∫ dz\, dℳ\, \frac{d^2 n}{dz\, dℳ}\, \frac{dE_{GW}}{df_r},
where H_0 = 67.4 km s^{-1} Mpc^{-1} <cit.> is the Hubble constant and f_r = (1+z)f is the source-frame GW frequency. ℳ = M q^{3/5}/(1+q)^{1/5} is the chirp mass of the SMBHB, where M is the primary SMBH mass and q is the SMBHB mass ratio. Here, d^2n/(dz\, dℳ) is the SMBHB population
and dE_{GW}/df_r is the energy spectrum of a single SMBHB. The relevant ranges in the integrals used here are 0 ≤ z ≤ 5 and 10^5 M_⊙ ≤ ℳ ≤ 10^11 M_⊙.
The SMBHB population d^2 n/(dz\, dℳ) in Eq. (<ref>) is estimated from astrophysical observations. It is widely acknowledged that nearly every galaxy hosts an SMBH at its center. This relation can be used as an astrophysical model to describe the merger rate. In this astro-informed formation model, astronomical surveys naturally provide relatively strong constraints, and the SMBHB population is allowed to interact with its environment. The population can be expressed as <cit.>
\frac{d^2 n}{dz^{'}\, dℳ} = ∫ \frac{d^3 n_G}{dz^{'}\, dM_G\, dq_G}\, \frac{dM_G}{dM}\, \frac{dq_G}{dq}\, \frac{dM}{dℳ}\, dq ,
where M_G is the primary galaxy mass and q_G is the mass ratio between the two paired galaxy, q = q_G^α_* is the SMBHB mass ratio mentioned before. The relations used for transforming the galaxy mass M_G into the primary black hole mass M are given as <cit.>
M_{bulge}/M_G = \frac{√(6.9)}{(\log M_G - 10)^{1.5}}\, \exp\left(\frac{-3.45}{\log M_G - 10}\right) + 0.615   if \log M_G ≥ 10,
M_{bulge}/M_G = 0.615   if \log M_G ≤ 10,
M = 𝒩{ M_*M_ bulge/10^11, ϵ} ,
where 𝒩{x,y} is a log normal distribution with mean x and standard deviation y. {M_*, α_*, ϵ} are the model parameters for translation.
The galaxy differential merger rate per unit redshift, galaxy mass and mass ratio d^3n_G/dz^' dM_G dq_G can be written as <cit.>
\frac{d^3 n_G}{dz^{'}\, dM_G\, dq_G} = \frac{Φ(M_G,z)}{M_G \ln 10}\, \frac{ℱ(M_G,z,q_G)}{τ(M_G,z,q_G)}\, \frac{dt_r}{dz} ,
where z stands for the redshift of the galaxy pair and z^' is the redshift at which the paired galaxies merge. The quantitative relation between z and z^' is addressed in <cit.>. dt_r/dz is the relationship between time and redshift, assuming a flat ΛCDM Universe, as follows
\frac{dt_r}{dz} = \frac{1}{H_0 (1+z)\sqrt{Ω_M(1+z)^3 + Ω_k(1+z)^2 + Ω_Λ}} .
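As an illustrative numerical sketch (not from the paper), dt_r/dz can be evaluated directly; the matter and dark-energy density parameters below are placeholder Planck-like values, since the paper only quotes H_0:

```python
import numpy as np

# Placeholder Planck-like flat-LambdaCDM parameters; the paper quotes H0 = 67.4 km/s/Mpc.
H0 = 67.4 * 1.0e3 / 3.0857e22          # Hubble constant in s^-1
Omega_M, Omega_k, Omega_L = 0.315, 0.0, 0.685

def dtr_dz(z):
    """dt_r/dz for a flat LambdaCDM universe (in seconds per unit redshift)."""
    E = np.sqrt(Omega_M * (1 + z)**3 + Omega_k * (1 + z)**2 + Omega_L)
    return 1.0 / (H0 * (1 + z) * E)

print(dtr_dz(1.0) / (3.156e7 * 1e9), "Gyr per unit redshift at z = 1")
```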
Φ(M_G,z) = dn_G/d\log_{10} M_G is the galaxy stellar mass function (GSMF) measured at the redshift z of the galaxy pair; its explicit expression is <cit.>
Φ(M_G,z) = \ln(10)\, 10^{Φ(z)} \left(\frac{M_G}{M_{G0}}\right)^{1+α(z)} \exp\left(-\frac{M_G}{M_{G0}}\right) ,
where the parameters are Φ(z) = Φ_0 + z Φ_I, α(z) = α_0 + z α_I.
{Φ_0, Φ_I, α_0, α_I, M_G0} are the five model parameters for GSMF.
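A minimal sketch of evaluating the GSMF, assuming the Schechter-type form written above; the parameter values passed in the example call are placeholders, not the posteriors inferred in this work:

```python
import numpy as np

def gsmf(M_G, z, Phi0, PhiI, alpha0, alphaI, M_G0):
    """Schechter-type GSMF Phi(M_G, z) = dn_G / dlog10(M_G), with redshift-linear
    parameters Phi(z) = Phi0 + z*PhiI and alpha(z) = alpha0 + z*alphaI."""
    Phi_z = Phi0 + z * PhiI
    alpha_z = alpha0 + z * alphaI
    x = M_G / M_G0
    return np.log(10.0) * 10.0**Phi_z * x**(1.0 + alpha_z) * np.exp(-x)

# Illustrative call on a grid of galaxy stellar masses (in solar masses) at z = 0.5;
# the parameter values here are placeholders, not the posterior values of the paper.
masses = np.logspace(9, 12, 4)
print(gsmf(masses, 0.5, Phi0=-2.6, PhiI=-0.45, alpha0=-1.25, alphaI=-0.1, M_G0=10**11.25))
```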
ℱ(M_G,z,q_G) is the differential pair fraction with respect to the mass ratio q_G of the galaxy pair and is written as <cit.>
ℱ(M_G,z,q_G) = \frac{df_{pair}}{dq_G} = f_0^{'} \left(\frac{M_G}{a M_{G0}}\right)^{α_f} (1+z)^{β_f} q_G^{γ_f} .
where aM_G0 = 10^11 is an arbitrary reference mass <cit.>.
{ f_0^', α_f, β_f, γ_f } are the four model parameters for pair function.
τ(M_G,z,q_G) is the merger timescale of the galaxy pair and can be expressed as <cit.>
τ(M_G,z,q_G) = τ_0 \left(\frac{M_G}{b M_{G0}}\right)^{α_τ} (1+z)^{β_τ} q_G^{γ_τ} .
where bM_G0 = 0.4/h_0 × 10^11 is an arbitrary reference mass <cit.>.
{τ_0, α_τ, β_τ, γ_τ} are the four model parameters for merger timescale.
To sum up, the present-day energy density (f) of SGWB from SMBHB in galaxy model is therefore fully specified by a set of eighteen model parameters:
{Φ_0, Φ_I, M_G0, α_0, α_I} for the GSMF, {f_0^', α_f, β_f, γ_f} for the pair fraction, {τ_0, α_τ, β_τ, γ_τ} for the merger timescale, { M_*, α_*,ϵ} for galaxy-SMBH transforming relation and {e_0, ζ_0 } mentioned below for the SMBHB energy spectrum. The detailed descriptions of parameters are addressed in Table.<ref>.
The energy spectrum of a single SMBHB, dE_GW/df_r, is calculated using its self-similarity, so the spectrum in any configuration can be obtained from a reference spectrum by shifting and re-scaling.
The fiducial redshift and chirp mass of the reference spectrum are set as z_0 = 0.02 and ℳ_0 = 4.16 × 10^8 M_⊙, respectively. The basic idea of the so-called shift and re-scaling can be found in <cit.>.
Past calculations of the SMBHB SGWB spectrum do not consider the full spectrum; in most scenarios, only the circular inspiral of SMBHBs is taken into account. In this letter, the inspiral-merger-cutoff phases are joined smoothly following the methods proposed by <cit.>. We express the complete energy spectrum dE_GW/df_r as follows
\frac{dE_{GW}}{df_r}(f_r<ν_1) = \frac{π c^2 f}{4 G}\, h^2_{c,fit}\!\left(f\,\frac{f_{p,0}}{f_{p,t}}\right) \left(\frac{f_{p,0}}{f_{p,t}}\right)^{-4/3} \left(\frac{ℳ}{ℳ_0}\right)^{5/3} \left(\frac{1+z}{1+z_0}\right)^{-1/3} ,
\frac{dE_{GW}}{df_r}(f_r∈[ν_1,ν_2)) = \frac{(Gπ)^{2/3} ℳ^{5/3}}{3}\, ω_1\, f_r^{2/3} ,
\frac{dE_{GW}}{df_r}(f_r∈[ν_2,ν_3)) = \frac{(Gπ)^{2/3} ℳ^{5/3}}{3}\, ω_2 \left[\frac{f_r}{1+\left(\frac{f_r-ν_2}{σ/2}\right)^2}\right]^2 ,
where ω_1=ν_1^{-1} and ω_2=ν_1^{-1}ν_2^{-4/3}. The set of parameters (ν_1,ν_2,σ,ν_3) can be determined by two physical parameters, the total mass M_{total} = M(1+q) and the symmetric mass ratio η = q M^2/M_{total}^2, in terms of (a η^2+bη+c)/(π G M_{total}/c^3), with coefficients a, b, c given by Table <ref>. The ratio f_{p,0}/f_{p,t} used for the shift is given by <cit.>
\frac{f_{p,0}}{f_{p,t}} = \frac{f_0}{f_t} \left[\left(\frac{e_{ref}}{e_0}\right)^{12/19} \frac{1-e_0^2}{1-e_{ref}^2} \left(\frac{304+121 e_{ref}^2}{304+121 e_0^2}\right)^{870/2299}\right]^{3/2} ,
here f_0 = 10^{-10} Hz and e_{ref} = 0.9 are the reference frequency and eccentricity, respectively, and e_0 is the initial eccentricity we choose. Once e_0 is selected, the turn-over frequency f_t is obtained from
f_t = 0.356\, \mathrm{nHz}\, \left(\frac{1}{F(e)}\frac{ρ_{i,100}}{σ_{200}}\, ζ_0\right)^{3/10} ℳ_9^{-2/5} ,
where
F(e) = \frac{1 + (73/24)e^2 + (37/96)e^4}{(1-e^2)^{7/2}} ,
ℳ_9 = ℳ/(10^9 M_⊙) is the rescaled chirp mass, ρ_{i,100} = ρ_i / (100 M_⊙ pc^{-3}) is the rescaled density of stars at the binary's influence radius, σ_{200} is the stellar velocity dispersion in units of 200 km s^{-1}, and ζ_0 describes the density of the stellar environment.
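For illustration, F(e) and the turn-over frequency f_t can be evaluated as below (a sketch only; the grouping of ζ_0 inside the 3/10 power follows the expression as written above, and the default arguments are placeholder values of the rescaled quantities):

```python
def F(e):
    """Eccentricity enhancement function F(e) from the expression above."""
    return (1 + (73.0 / 24) * e**2 + (37.0 / 96) * e**4) / (1 - e**2)**3.5

def f_t_nHz(e0, rho_i_100=1.0, sigma_200=1.0, zeta_0=1.0, M9=1.0):
    """Turn-over frequency in nHz; all arguments are the dimensionless rescaled quantities."""
    return 0.356 * (rho_i_100 / (F(e0) * sigma_200) * zeta_0) ** 0.3 * M9 ** (-0.4)

print(F(0.9))        # roughly 1.2e3: eccentric binaries radiate GWs far more efficiently
print(f_t_nHz(0.9))  # turn-over frequency for fiducial (unit) rescaled parameters
```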
h^2_ c, fit is the analytical fitting function of reference spectrum and takes the form as <cit.>
h_ c, fit = a_0 f̅^a_1 e^-a_2 f̅ + b_0 f̅^b_1 e^-b_2 f̅ + c_0 f̅^- c_1 e^-c_2 / f̅ ,
where a_i, b_i, c_i, listed in Table.<ref>, are constants determined by fitting and f̅ = f/(10^-8) Hz.
Data and results.
The NANOGrav collaboration has performed an analysis on the 15-yr data set by employing a free spectrum that enables independent variations in the amplitude of the GW spectrum across different frequency bins. In the analyses, we use the posterior data from NANOGrav <cit.>, and the <cit.> package to perform the Markov Chain Monte Carlo sampling. All of the parameters and their prior distributions are listed in tab:galaxypara. We note that these constrained prior distributions are based on those presented in <cit.> and are derived from observational and theoretical works on the measurement of the GSMF, galaxy pair fraction, merger timescale and SMBH-host galaxy scaling relations.
The resulting posterior distribution is illustrated in post_18params.
Our analysis reveals that the detected stochastic signal provides some new insights into the differential pair fraction ℱ(M_G,z,q_G), merger timescale τ(M_G,z,q_G), galaxy-SMBH mass scaling M_G-M_bulge-M relation, and the initial states of the SMBHBs when GW emission takes over, such as the eccentricity and the transition frequency f_t, compared to other astrophysical observations.
Specifically, the preference for a higher value of the parameter f_0 and the positive-skewed parameters α_f and β_f suggest larger differential pair fractions in more massive galaxies, while the preference for a lower value of the parameter τ_0 and the negative-skewed parameters α_τ and β_τ indicate shorter merger timescales in more massive galaxies, and the preference for a higher value of M_* corresponds
to a higher normalization between the galaxy bulge mass M_bulge and SMBH mass M.
The above parameters together contribute to the observed relatively high amplitude of the SGWB spectrum (Ω_GW=0.93^+1.17_-0.41× 10^-8 at a frequency of 1 yr^-1). Note that the posterior distributions of these parameters are very similar to those reported in <cit.>, where the NANOGrav 12.5-yr data set was used. This is because the spectrum amplitudes in both data sets are statistically consistent. On the other hand, the parameters e_0 and ζ_0, which determine the shape of the SGWB spectrum, display sharp contrasts in the posteriors obtained from the two data sets. For the NANOGrav 15-yr data set, the distribution of e_0 indicates that SMBHs exhibit a large initial eccentricity when transitioning into the GW-emission dominated process, while the larger value of the parameter ζ_0 implies that massive galaxies have, on average, higher densities than what is suggested by a standard Dehnen profile <cit.>.
Implication for GWs in the space-borne detector frequency band.
In the PTA frequency band, SMBHBs are in the inspiral phase, and their radiation power can be calculated using the first of the equations above. After a prolonged period of mutual inspiral, these binaries gradually transition to more circular orbits and enter the merger and ringdown phase, characterized by GW radiation described by the second and third equations. Some of these black holes undergo merger and final coalescence at higher frequencies, entering the space-borne GW detector frequency band. Now we can deduce the properties of the supermassive black hole binary population from the PTA results, and further combine Eq. (<ref>) with Eqs. (<ref>)-(<ref>) to obtain the complete SGWB spectrum generated by SMBHBs spanning both the PTA and LISA/Taiji/TianQin frequency bands, as depicted in Fig. <ref>.
We need to emphasize that the general consensus for the GW detection of the cosmic history of SMBHBs is that PTAs primarily detect the SGWB from the ensemble of the SMBHB population, while LISA/Taiji/TianQin is expected to detect the final coalescence stage of individual systems. However, during the initial stages of detector operation, we cannot directly resolve individual sources, and it is reasonable to consider these sources as constituting an SGWB. In fact, as depicted in Fig. <ref>, the spectrum of the SGWB is sufficiently strong that it is very likely to be detected very soon once the detectors are in operation.
The relation between the merger rate and the SMBHB chirp mass is depicted in Fig. <ref>.
The merger rate of SMBHBs with chirp masses in the range 10^5–10^11 M_⊙ is 0.049 yr^{-1} ≲ ℛ ≲ 30.58 yr^{-1}.
Summary.
In this letter, we have used the NANOGrav 15-year data set to constrain the SGWB from SMBHBs, finding that SMBHBs tend to have a large initial eccentricity in the transition phase between interaction domination and GW domination. The SGWB spectrum from SMBHBs is extrapolated from the PTA frequency band to the space-borne GW detector frequency band. Our results indicate that such an SGWB from SMBHBs should also be detected by LISA/Taiji/TianQin in the near future.
Acknowledgements.
We acknowledge the use of HPC Cluster of ITP-CAS. QGH is supported by the grants from NSFC (Grant No. 12250010, 11975019, 11991052, 12047503), Key Research Program of Frontier Sciences, CAS, Grant No. ZDBS-LY-7009, CAS Project for Young Scientists in Basic Research YSBR-006, the Key Research Program of the Chinese Academy of Sciences (Grant No. XDPB15).
ZCC is supported by the National Natural Science Foundation of China (Grant No. 12247176 and No. 12247112) and the China Postdoctoral Science Foundation Fellowship No. 2022M710429.
| http://arxiv.org/abs/2307.02867v1 | 20230706090820 | Towards a safe MLOps Process for the Continuous Development and Safety Assurance of ML-based Systems in the Railway Domain | ["Marc Zeller", "Thomas Waschulzik", "Reiner Schmid", "Claus Bahlmann"] | cs.SE | ["cs.SE", "cs.AI", "cs.LG", "cs.SY", "eess.SY"] |
Towards a safe MLOps Process for the Continuous Development and Safety Assurance of ML-based Systems in the Railway Domain
Marc Zeller, Thomas Waschulzik, Reiner Schmid, Claus Bahlmann
=============================================================================================================================
Traditional automation technologies alone are not sufficient to enable driverless operation of trains (called Grade of Automation (GoA) 4) on non-restricted infrastructure. The required perception tasks are nowadays realized using Machine Learning (ML) and thus need to be developed and deployed reliably and efficiently.
One important aspect of achieving this is to use an MLOps process, which tackles improved reproducibility, traceability, collaboration, and continuous adaptation of a driverless operation to changing conditions.
MLOps mixes ML application development and operation (Ops) and enables high frequency software releases and continuous innovation based on the feedback from operations. In this paper, we outline a safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain. It integrates system engineering, safety assurance, and the ML life-cycle in a comprehensive workflow. We present the individual stages of the process and their interactions. Moreover, we describe relevant challenges to automate the different stages of the safe MLOps process.
§ INTRODUCTION
With the introduction of driverless train operation, the so-called Grade of Automation (GoA) 4 on non-restricted infrastructure, the attractiveness of railway systems can be increased significantly. This includes the densification of the timetable, e.g., by splitting vehicles that would otherwise run in multiple traction or increased flexibility in timetable design to achieve demand-oriented train services.
Traditional automation technologies alone are not sufficient to enable the driverless operation of trains. However, Artificial Intelligence (AI) and Machine Learning (ML) offer great potential to realize the mandatory novel functions to replace the tasks of a human train driver, such as obstacle detection on the tracks using ML techniques for computer vision <cit.>.
Assuring safety for a driverless system in complex scenarios on non-restricted railway infrastructure is still unsolved in general. Further, we see the dilemma that the most promising path to a solution is currently seen in using ML approaches; however, ML imposes additional challenges for formally assuring safety. In the following, we assume that ML components are part of the approach.
Under this assumption, we foresee that different tasks need to be addressed, such that driverless systems for rail can be approved for operation, including
* Systems view: Addressing systems aspects, including linking formal requirements originating from functional safety (e.g., originating from EN 50126<cit.>), to the Automated Driving System (ADS) and formalizing a sound safety case
* Safe AI principles and tools: Providing insight into the ML behavior and how it relates to data and further to the requirements
* Safe MLOps cycle: Addressing the challenge that ADS also in rail will operate in an open world, which is difficult to specify a-priori and prone to changes during its lifecycle, hence requires agile MLOps cycles including testing & validation in the field
The safe.trAIn[https://safetrain-projekt.de/en] project aims to lay the foundation for the safe use of AI/ML to achieve the driverless operation of rail vehicles.
The project goals are to develop guidelines and methods for the quality-assured development of ML components that are able to support the safety assurance of ML in driverless train operation. Based on the requirements for the homologation process in the railway domain, safe.trAIn creates a safety argumentation for an AI-based obstacle detection function of a driverless regional train. Therefore, the project investigates methods to develop trustworthy AI-based functions, taking data quality, robustness, uncertainty, and transparency/explainability aspects of the ML model into account. Moreover, the methods need to be integrated into a comprehensive and continuous development, verification, and assessment process for driverless trains.
In this contribution, we focus on the third aspect mentioned earliner. We have other workstreams in the safe.trAIn project along aspects 1 & 2, for the sake of focus we will address them not in this paper.
While there has been some work on the safety assurance of different ML techniques, including Deep Neural Networks (DNNs), it is often neglected that the system incorporating the ML model must be developed and deployed reliably, efficiently, and in a transparent and traceable way. This is especially important since an ML model in a safety-related context needs a rigorous development process like any other safety-relevant software. However, there are currently no safety standards available that describe a development or assurance process for safety-critical ML-based systems.
Traditionally, safety-critical systems incorporating software in the railway domain are developed, tested, and validated during design. When the software is updated after the product release, this is done only at intervals of several months or years, since the re-assessment process takes a lot of time. Therefore, the safety standards EN 50126-1 <cit.>, EN 50128 <cit.>, EN 50657 <cit.>, and EN 50129 <cit.> provide both a development process and guidelines for safety assurance.
In contrast to traditional software, whose correctness can be ensured for every corner case, ML models are created based on a data set, which cannot cover all possible corner cases.
Thus, it is very likely that the ML model will be updated continuously over the product lifetime. There are multiple reasons for this <cit.>: (1) changes necessary due to anomalies detected during operation; (2) changes due to changing demands on the ML-based system (e.g., a change of the operating context/environment such as domain drift); (3) changes in legislation (e.g., new test cases become mandatory, which leads to an update (re-training) of the ML model).
Therefore, an MLOps process is required which allows the continuous development, verification, and safety assessment of the ML-based functions of a driverless train.
MLOps is a close relative of DevOps. DevOps mixes the development (Dev) and operation (Ops) phases of a software product by promoting high-frequency software releases which enable continuous innovation based on feedback from operations <cit.>. MLOps is an ML engineering culture and practice that aims at unifying ML application development (Dev) and ML application operation (Ops).
In this paper, we outline a so-called safe MLOps process for the continuous development and safety assurance of ML-based systems in the railway domain. It integrates system engineering, safety assurance, and the ML life-cycle into a comprehensive and continuous process. Moreover, we describe relevant challenges in speeding up the development of ML-based systems by automating each stage of the safe MLOps process.
The rest of the paper is organized as follows: In the next section, we briefly summarize relevant related work which provides a foundation for our safe MLOps process. Afterwards, we introduce the safe MLOps process for ML-based systems in the railway domain as defined in the safe.trAIn project.
At the end, we summarize the main results of the paper and provide an outlook on future research work.
§ RELATED WORK
Promoted by the internet companies, MLOps[https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning] establishes a DevOps methodology for machine learning applications and is today the de-facto standard for unifying ML model development, deployment, and operation. However, existing MLOps approaches address neither system-level aspects of ML models <cit.> nor safety-assurance-related activities.
Although there is no standardized process for the development and assurance of ML-based systems in the railway domain, there are already approaches in other domains.
The UL 4600 (Evaluation of Autonomous Products) standard <cit.> focuses on autonomous driving. It covers the safety principles, tools, and techniques, to design and develop fully automated products. The UL 4600 defines topics that must be addressed to create a safety case for an autonomous product, but it does not recommend specific technologies or methods to be used. Moreover, it is based on the development and assurance lifecycle of the ISO 26262-2 standard <cit.> and does not define explicit ML development or assurance stages.
The whitepaper "Safety First for Autonomous Driving" <cit.> produced by 11 automotive companies and key technology providers illustrates a development and validation process for Deep Neural Networks (DNNs) consisting of the four stages "Define", "Specify", "Develop and Evaluate", and "Deploy and Monitor". In each of the stages, safety artifacts are created which support the safety case. In the whitepaper the challenges that need to be addressed in each of the activities are listed. However, the paper only addresses the development and assurance of an ML model and does not put this into the context of engineering the system. On the other hand, <cit.> presents a detailed process for the development and assurance of ML models, which can be incorporated in the development and assurance process specified in the ISO 26262-2 standard for automotive systems. But the approach focuses on a V-based development process and does not take aspects of MLOps into account to enable continuous or agile product development.
In the avionic domain, the EASA concept paper <cit.> outlines a W-shaped learning assurance process.
The aim of those systematic actions is to substantiate that errors in a data-driven learning process have been identified and corrected such that the system satisfies the applicable requirements at an adequate level of confidence. However, the paper does not provide any details on how to integrate the data-driven learning process into the system development and safety assessment process in the avionic domain (ARP 4754A <cit.> & ARP 4761 <cit.>).
With CRISP-ML(Q) <cit.> there is a proposal for an iterative process model for ML models which includes a quality assurance methodology. However, this approach was not designed for safety-relevant systems and solely focuses on quality assurance methods to mitigate risks in the development of the ML model.
A formal process model for machine learning in the context of the software development process has been captured in <cit.>. It considers the interaction of ML model development with the software development process and the required continuous evolution after initial training and testing.
Though safety assurance has not been taken into account, this provides a starting point for defining MLOps in the context of a system development, and the definition of automated pipeline concepts based on the identified interaction points.
Furthermore, there is a domain-independent approach for a development and assurance process of safety-relevant and ML-based autonomous systems by the University of York. The so-called Assurance of Machine Learning for use in Autonomous Systems (AMLAS) Process consists of a 6-stage ML lifecycle with the stages "ML safety assurance scoping", "ML safety requirements elicitation", "data management", "model training", "model verification", and "deployment". For each activity safety assurance requirements are listed and potential methods that support each requirement are discussed. Moreover, AMLAS establishes the fundamental link between system-level hazard and risk analysis and component-level safety requirements. Since AMLAS solely focuses on the safety assurance of the ML model, the integration into the system engineering process is not covered.
In <cit.> a process model for the engineering of DNNs with the scope of supporting trustworthiness assurance is presented. This process extends the VDE AR E 2842-61-2 standard <cit.> which outlines a generic framework for the development of trustworthy autonomous systems. The paper describes different development steps for DNNs and illustrates potential methods for trustworthiness assurance in each of the stages. It also outlines how the development and assurance of DNNs links to the development and assurance of the overall system. This approach follows the standard V-model and is not compatible with iterative or agile design practices.
In <cit.>, the QUEEN development method is defined for the quality-assured development of feed-forward neural networks. This process combines system engineering approaches to develop appropriate preprocessing stages that reduce the complexity of the task that has to be solved by the machine learning component. To assure that the preprocessing really reduces the complexity of the given task, complexity measures are introduced for supervised learning data sets. These complexity measures can only be applied if the number of dimensions of the input space is beyond about 10,000 and if the inputs are at least somehow related to the desired output. The measures give additional support for architectures that combine conventional preprocessing with machine learning. Reducing the complexity of the problem that has to be solved by the ML component reduces the challenge of explaining the function implemented by the ML model. Additionally, QUEEN introduces local quality indicators that support the quality assurance of data sets. These quality indicators may also be integrated into the MLOps process to support the quality assurance of supervised learning data sets after preprocessing. In <cit.>, an example is given of how quality assurance problems may be detected in data sets using QUEEN quality indicators.
Based on existing building blocks for the ML life-cycle and the safety assurance of ML models, we specified an MLOps framework for ML-based systems in the railway domain, which enables the continuous development, verification, and safety assessment of driverless trains.
§ SAFE MLOPS PROCESS
In this Section, we outline the safe MLOps process specified in the safe.trAIn project, which integrates system engineering of a train with the ML lifecycle and safety assurance activities to continuously develop and assess a driverless train in terms of safety.
The safe MLOps process allows continuous engineering and assurance of ML-based systems. Therefore, the individual stages of the process need to be automated and a continuous delivery pipeline from the development of the ML-based system and its components (Dev) to the safety assessment (safe) and afterwards to the operation of the ML-based system (Ops) must be built.
In the "Dev" phase, the engineering of a train is extended with a process to develop and verify ML models which implements functions of the driverless train, such as the obstacle detection. Thereby, existing MLOps approaches are augmented with additional process steps to asses the data quality and assure the quality and performance of the ML model in terms of safety, reliability, transparency, and robustness. Thereby, evidences (quantitative and qualitative) for the safety case are created in each of the stages of the data & ML lifecycle and incorporated into the overall safety argumentation of the train.
In order to create the safety case in an iterative manner as part of the safe MLOps process, a model-based assurance case based on techniques such as the Goal Structuring Notation (GSN) <cit.> is used to represent the safety argumentation and also the evidences created during the entire development and assurance process.
Our safe MLOps approach aims to adapt the ML model frequently and to re-assess the system which incorporates this ML model in a timely manner. Hence, the ML-based system can be adapted continuously to changing operating conditions in an iterative way.
For the verification & validation of the ML-based system, the ML model is integrated into the overall system and validated using operational test scenarios. In our approach, the ML-based system is tested in different environments. At first, the performance of the individual ML model is tested in isolation (similar to unit tests for non-ML software) using a set of test data during the ML lifecycle in the "Model Verification" stage.
After the integration of ML models and non-ML software, the resulting system is tested in software-in-the-loop (SiL) and hardware-in-the-loop (HiL) environments. The SiL tests are performed in the so-called "Virtual Test Field" stage in our safe MLOps process. After the tests in SiL and HiL environments, ML-based functions of the train are validated during field operation.
After a successful independent "Safety Assessment" based on the safety case with all evidences collected during verification & validation, the ML-based system (train) is released into operation (the "Ops" phase).
In order to close the loop in the safe MLOps process, relevant parameters of the system are monitored during operation. Thus, new operational conditions must be discovered (e.g., out-of-distribution detection or data/concept drift discovery) and respective data must be collected during the operation of the train. Moreover, in case the system discovers novel situations (not taken into account during the development), the system must handle such situations (e.g., by switching into a safe state) and provide adequate feedback to the next iteration of the development and assurance process with the aim to continuously release an updated and improved version of the ML model.
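Such monitoring can be automated with lightweight statistical checks. The following Python sketch, which is purely illustrative and not part of the safe.trAIn tooling, compares the distribution of a monitored feature against its training-time reference using the population stability index; the function names and the threshold of 0.2 are assumed values.

```python
import numpy as np

def drift_score(reference: np.ndarray, observed: np.ndarray, bins: int = 20) -> float:
    """Population stability index between a reference feature distribution
    (from training) and the distribution observed in the field."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_hist, _ = np.histogram(reference, bins=edges)
    obs_hist, _ = np.histogram(observed, bins=edges)
    # Convert counts to probabilities; a small floor avoids division by zero.
    ref_p = np.clip(ref_hist / ref_hist.sum(), 1e-6, None)
    obs_p = np.clip(obs_hist / obs_hist.sum(), 1e-6, None)
    return float(np.sum((obs_p - ref_p) * np.log(obs_p / ref_p)))

def drift_detected(reference: np.ndarray, observed: np.ndarray, threshold: float = 0.2) -> bool:
    """Return True if the monitored feature has drifted enough to trigger
    data collection and a new iteration of the safe MLOps loop."""
    return drift_score(reference, observed) > threshold
```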
As depicted in Fig. <ref>, the process integrates three parts: (1) the system engineering lifecycle (aligned with the development process of EN 50126-1), (2) the data & ML lifecycle (aligned with previous work as outlined in Section 2), and (3) the safety assurance lifecycle (also aligned with the EN 50126-1).
§.§ System Engineering
The system engineering process is based on the system lifecycle defined by the EN 50126-1 standard and Model-based System Engineering techniques. It is extended for the engineering of an ML-based system and integrated with the safety engineering lifecycle. The process consists of the following stages:
* Domain Analysis: Apart from the "normal" use case description and requirement elicitation process, also the Operational Design Domain (ODD) of the ML-based system is specified. The results of this stage are used as input for the "Hazard & Risk Analysis (HARA)" stage, which is part of the safety engineering lifecycle.
Challenge: How to specify the ODD in a (semi-formal) way to automatically derive test scenarios for the verification & validation?
* System Specification: The aim of this stage is the specification of the system architecture of the driverless train in the form of a SysML model. Based on the system architecture, a "Failure & Deficiency Analysis" is conducted.
* Functional & Technical Architecture Description: In this stage the system architecture is refined into a detailed SW/HW architecture model (in form of a SysML model). The refined architecture is the basis for a refinement of the "Failure & Deficiency Analysis".
* SW (non-ML) Design & Implementation: For the design and implementation of non-ML SW, the well established practices of the EN 50657 as well as DevOps principles are applied.
* ML/SW Integration: In this stage, the non-ML SW and the ML model(s) are integrated to create the System-Under-Test (SUT) which is used in the following verification & validation stages.
Challenge: How to extend existing continuous integration concepts for SW to incorporate also ML models?
* Virtual Test Field: Perform automated integration/system tests for the ML-based system in a SiL environment using the previously built SUT. The evaluation results provide evidences to the "Safety Case Development".
Challenge: Automate Software-in-the-Loop tests, incl. Scenario-based testing, data-driven ODD coverage, and evaluate safety metrics in a simulation environment with both real and synthetic test data.
* HW/SW Integration: Deployment of the non-ML SW and the ML model(s) on the target HW and incorporation of physical sensors to perform automated integration/system tests in a Hardware-in-the-Loop environment. The results of these tests provide evidences to the "Safety Case Development".
Challenge: Automation of Hardware-in-the-Loop tests, similar to Software-in-the-Loop tests in the virtual test field.
* Field Test: Finally, the ML-based system (train) is validated in specific test scenarios during operation on real test tracks. Again, the validation results provide evidences to the "Safety Case Development".
Challenge: Field testing must be performed in two stages: First, in a so-called shadow mode with the aim of collecting data for the validation of the system under failure conditions/scenarios. After successful validation in the shadow mode, system validation tests are performed in an operative mode, in which also the actuators of the rolling stock are controlled by the new SW or ML model.
* System Operation: After successful independent safety assessments, the train operates in the defined ODD. Feedback from operation is provided by collecting data, see the "Runtime Monitoring & Data Collection" stage.
Please note, the 9 stages of the system engineering process cover the 12 stages defined in the EN 50126-1 standard. Only the stages "Manufacture" and "Decommissioning" are not represented in our safe MLOps process, since these stages do not need to be adjusted for an ML-based system in the railway domain. Moreover, some of the stages of the system engineering lifecycle cannot be automated such as the definition of a system architecture.
§.§ Data & ML Lifecycle
In addition to the design and implementation of (non-ML) SW, there is a dedicated lifecycle for the ML model including the data for training and verification. The ML lifecycle uses the ODD specification, the technical system architecture, and the (safety) requirements as input. The result of the ML lifecycle is a trained and verified ML model which is input to the ML/SW Integration stage of the system engineering lifecycle. The ML lifecycle consists of the following 4 stages:
* Data Preparation: In this stage, the quality of training and test data used for the training and verification of the model is ensured. Therefore, corner cases and adversarial examples must be collected, training data must be selected, and synthetic data need to be generated. All the data used must be checked w.r.t. labeling inconsistencies. To analyse the quality of the data, quality indicators based on QUEEN <cit.> are applied.
Challenge: How to automatically analyze the ODD in terms of data distribution and diversity?
* ML Model Design: The ML model is defined, e.g., the number of layers of a DNN is specified. The design of the ML model influences the "Safety Concept", since different types of ML techniques require different safety argumentation strategies.
* ML Model Training: In this stage, the ML model is trained using the data from the "Data Preparation" phase.
* ML Model Verification: The performance of the ML model is verified in this stage. Therefore, different KPIs must be evaluated, such as uncertainty, robustness, interpretability, etc. These KPIs are used in the "Safety Case Development" as evidences to support the safety argumentation. In case the defined KPIs/requirements of the ML model are not met, the process returns to any of the ML lifecycle stages to retrain the model with new data or modify the ML model design.
Challenge: What are the right (quantitative) metrics to evaluate the ML model performance and support the safety argumentation as evidences?
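To illustrate how the four lifecycle stages above could be chained in an automated pipeline, the following hypothetical Python sketch wires them into a sequence whose collected evidence later feeds the safety case; all stage functions, names, and KPI values are illustrative assumptions, not project tooling.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Artifact:
    """Carrier object passed between lifecycle stages; `evidence`
    collects the KPIs that later back the safety case."""
    payload: object
    evidence: Dict[str, float] = field(default_factory=dict)

def data_preparation(a: Artifact) -> Artifact:
    a.evidence["label_consistency"] = 0.99   # placeholder data-quality indicator
    return a

def model_design(a: Artifact) -> Artifact:
    a.evidence["architecture_reviewed"] = 1.0
    return a

def model_training(a: Artifact) -> Artifact:
    a.evidence["train_accuracy"] = 0.95      # placeholder training KPI
    return a

def model_verification(a: Artifact) -> Artifact:
    a.evidence["robustness_score"] = 0.90    # placeholder verification KPI
    return a

def run_ml_lifecycle(stages: List[Callable[[Artifact], Artifact]], raw_data: object) -> Artifact:
    """Run the Data & ML lifecycle stages in order and collect evidence."""
    artifact = Artifact(payload=raw_data)
    for stage in stages:
        artifact = stage(artifact)
    return artifact

result = run_ml_lifecycle(
    [data_preparation, model_design, model_training, model_verification], raw_data=None)
```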
§.§ Safety Engineering
Both the system engineering and the ML lifecycle provide input for the safety engineering lifecycle, which consists of the following 6 stages:
* Hazard & Risk Analysis (HARA): The HARA tries to identify potential hazards that can be caused by the ML-based system. This stage corresponds to the risk analysis and evaluation phase of the EN 50126-1 standard. As a result of this step, safety goals are defined as top-level safety requirements, which are an extension of the system requirements.
Challenge: Is it possible to automate both the identification of hazards and the assessment of the risk associated with hazards?
* Failure & Deficiency Analysis: In order to refine the safety requirements and create a safety architecture for a driverless train, the available system specification (system model) is used as input to safety analyses at different abstraction levels in order to identify potential causes of the identified hazards. This is done in parallel with the respective system engineering activities. Since we deal with an ML-based system, we need to incorporate Safety Of The Intended Functionality (SOTIF) aspects according to ISO 21448 <cit.> in the safety analyses. Component Fault & Deficiency Trees (CFDTs) <cit.> allow the analysis of both functional safety and SOTIF aspects of the system. Moreover, CFDTs follow a Model-based Safety Assurance (MBSA) methodology, which seamlessly integrates with the MBSE approach used for system engineering. Hence, consistency between the system model (in SysML) and the safety analyses (in form of CFDTs) can be ensured despite the iterative development approach.
Challenge: Automate the safety analysis as much as possible using MBSA, although full automation without safety experts in the loop is hardly possible.
* Safety Concept Design: The safety concept is defined as the specification of the safety requirements, their allocation to system elements, and their interactions necessary to achieve safety goals. Based on the complete and detailed system design (including the design of the ML model) and the outcome of the "Failure & Deficiency Analysis", a concept is created which represents the argumentation that the ML-based train is sufficiently safe. Therefore, an assurance case in the form of a Goal Structuring Notation (GSN) <cit.> tree is specified to represent the safety argumentation. It provides the possibility to integrate evidences from the verification & validation activities in the argumentation tree.
Challenge: Since there is not yet a standard for safety assessment of ML-based systems in the railway domain, the requirements a safety argumentation must fulfill are not clear. The safe.trAIn project tries to fill this gap.
* Safety Case Development: In this stage, a sound safety case is created which is the basis for the independent safety assessment of the system's safety.
Challenge: How to automatically adapt and extend the safety case when the ODD or the available data changes?
* Safety Assessment: The safety case of the driverless train is subject to independent reviews and safety assessments.
Challenge: How to deliver the necessary assets and documentation for the assessment at the end of each sprint/iteration? And how to enable the reuse of previous results in an assessment of a changed system?
* Runtime Monitoring & Data Collection: During operation, the ML-based system must be monitored throughout the entire product lifetime to detect potentially novel situations (which were unknown during the development) or potentially unsafe behavior.
Challenge: Which are the relevant safety metrics to be continuously monitored during operation in order to identify potential unsafe behavior or novel scenarios?
§ SUMMARY AND OUTLOOK
In this paper, we outline a process for the continuous development and safety assurance of ML-based systems in the railway domain. This so-called safe MLOps process integrates system engineering, safety assurance, and the Machine Learning (ML) life-cycle into a comprehensive and continuous process.
While the system engineering and the safety assurance related stages of the process are aligned with the development process of the EN 50126-1 safety standard for railway systems, the ML & data lifecycle is aligned with previous work in this field (e.g., the AMLAS process <cit.>). Apart from outlining the safe MLOps process and describing the interaction of the different stages of the systems engineering, the ML lifecycle, and the safety assurance, we describe relevant challenges to automate the stages of the safe MLOps process.
As a next step, the described safe MLOps process is realized in the safe.trAIn project in the form of a CI/CD pipeline with appropriate tooling support to achieve the required degree of automation. The resulting pipeline will be evaluated in an industrial use case. In order to establish traceability and auditability, a consistency layer is required which stores all the artifacts created in the different stages of the safe MLOps process (e.g., parameters of the ML model, training/test data, evidences, etc.).
Moreover, solution strategies for the challenges to automate the different stages of the safe MLOps process must be developed in the safe.trAIn project. One of these challenges is keeping the Verification & Validation test chain in sync with the demands of the safety assurance case.
We will also discuss our safe MLOps process with the relevant standardization bodies, such as CEN-CENELEC JTC 21, with the goal to specify a standardized process for the development and safety assurance of ML-based systems in the railway domain.
Furthermore, we will investigate the applicability of the safe MLOps framework to different industrial application domains such as industrial automation, healthcare, etc.
§ ACKNOWLEDGMENT
This research has received funding from the Federal Ministry for Economic Affairs and Climate Action (BMWK) under grant agreement 19I21039A.
|
http://arxiv.org/abs/2307.10186v1
|
20230705085227
|
Multi-Scale U-Shape MLP for Hyperspectral Image Classification
|
[
"Moule Lin",
"Weipeng Jing",
"Donglin Di",
"Guangsheng Chen",
"Houbing Song"
] |
eess.IV
|
[
"eess.IV",
"cs.LG"
] |
Multi-Scale U-Shape MLP for Hyperspectral Image Classification
Moule Lin, Weipeng Jing, Member, IEEE, Donglin Di, Guangsheng Chen, Member, IEEE, and Houbing Song, Senior Member, IEEE
The work described in this paper is supported by the National Natural Science Foundation of China (31770768), the Fundamental Research Funds for the Central Universities (2572017PZ04), the Heilongjiang Province Applied Technology Research and Development Program Major Project (GA18B301, GA20A301) and the China State Forestry Administration Forestry Industry Public Welfare Project (201504307). (Corresponding author: Weipeng Jing.)
M. Lin, W. Jing and G. Chen are with the College of Information and Computer Engineering, Northeast Forestry University, Harbin 150040, China (e-mail: [email protected], [email protected], [email protected]).
D. Di is with Baidu Co., Ltd, Beijing 100085, China (e-mail: [email protected]).
H. Song is with the Department of Electrical, Computer, Software, and Systems Engineering, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114, USA (e-mail: [email protected]).
Hyperspectral images have significant applications in various domains, since they register rich semantic and spatial information in the spectral bands with spatial variability of spectral signatures.
Two critical challenges in classifying the pixels of a hyperspectral image are representing the correlated local and global information and the large number of model parameters.
To tackle these challenges, we propose a Multi-Scale U-shape Multi-Layer Perceptron model consisting of the designed MSC (Multi-Scale Channel) block and the UMLP (U-shape Multi-Layer Perceptron) structure.
MSC transforms the channel dimension and mixes spectral band feature to embed the deep-level representation adequately.
UMLP is designed by the encoder-decoder structure with multi-layer perceptron layers, which is capable of compressing the large-scale parameters.
Extensive experiments are conducted to demonstrate that our model outperforms state-of-the-art methods across the board on three widely adopted public datasets, namely Pavia University, Houston 2013, and Houston 2018.
Hyperspectral Image (HSI), Multi-Scale Shape, Multi-Layer Perceptron, Compression Model
§ INTRODUCTION
HYPERSPECTRAL Image (HSI) contains complex multiple structures and is composed of numerous bands with location and distribution information <cit.>.
Therefore, it has a significant value for monitoring the Earth’s surface, and the research of hyperspectral remote sensing images has broad applications in society and other domains.
An incomplete list includes environmental sciences <cit.>, agronomy <cit.>, geography <cit.>, astronomy <cit.>, mineralogy <cit.>, and so on.
The hyperspectral remote sensing images research mainly focuses on analyzing the spectral band's complex intra-class and inter-class relationships.
The scientific research of hyperspectral has a long history, from artificial geometry to deep learning, all of which have extensively promoted the progress of hard science and society.
A. Huertas <cit.> was the first to apply hyperspectral images to urban analysis, adopting geometric topology theory to extract urban building boundaries.
J. B. Adams <cit.> analyzed the spectral bands through mathematical theory and combined other factors to analyze hyperspectral data.
In general, early hyperspectral image research primarily employed theoretical knowledge of various subjects, such as geometry, to analyze the shallow information.
State-of-the-art methods typically rely on deep learning, represented by convolutional neural networks (CNNs).
Deep learning has achieved outstanding performance in semantic segmentation, classification, target detection, and so forth.
Luo proposed semisupervised sparse manifold discriminative analysis (S^3MDA) <cit.> to overcome the challenge of selecting a proper neighborhood size for graph construction in graph embedding (GE). Furthermore, they proposed a more advanced method, the enhanced hybrid-graph discriminant learning (EHGDL) structure, which adopts an intraclass hypergraph and an interclass hypergraph to analyze the complex multiple relationships of an HSI <cit.>.
CNNs have undeniably become the de-facto standard in deep learning.
The number of parameters, however, increases dramatically with the number of convolutional layers, and models consistently exceed 20M parameters to reach adequate expressive ability.
Moreover, computational consumption is a bottleneck for industrial applications, as the large number of multiplication and addition operations cannot satisfy real-time requirements.
I. Tolstikhin showed that CNNs are not necessary for deep learning. The recently proposed MLP-Mixer <cit.> framework adopts two types of MLPs to mix per-location features and spatial information, respectively, and is a meaningful research direction.
By sacrificing a small amount of accuracy, the model speed improves remarkably and the model size is compressed.
Despite its simplicity, MLP-Mixer achieves outstanding performance in various domains and disciplines, and much research in the remote sensing domain builds on it to extract more meaningful analyses.
M. Lin <cit.> employed a GCN to transform the features from a chaotic state into a highly cohesive state while reducing the data's redundant information.
Noise in hyperspectral images is also a challenging problem.
HyMiNoR <cit.> is an effective denoising method using a novel sparse noise framework.
In addition, U-Net <cit.> is a classic encoder-decoder structure in which the encoder embeds spatial and semantic information and the decoder mixes that information with position features.
Although existing methods perform well in hyperspectral classification, their computational consumption is high and their running time is long.
The convolution operation is the main source of these problems: it brings outstanding performance in various fields, but it also incurs a heavy computational cost.
MLP-Mixer is influential research that contains only MLP operations, stacking layers to mix all features, such as spatial information.
However, its expressive ability is weakened by its simple model structure, mainly because it ignores the semantic information between neighbouring structures.
To address the problems mentioned above, we propose the Multi-Scale U-shape Multi-Layer Perceptron model, whose parameter scale is only 0.817M and whose runtime consumption fully satisfies industrial demands. The main contributions of this letter are summarized as follows:
1)
The proposed model is constituted of the MSC (Multi-Scale Channel) block and the UMLP (U-shape Multi-Layer Perceptron) block, and its parameter scale is only 0.817M. To enhance the feature expression of each pixel, the MSC block transforms the channel dimension to 2^n (n is an integer) to unify the channel data of different datasets. A 1×1 convolution operation, which can easily excavate deeper features, is adopted to expand the representation of each pixel and obtain distributions over multiple dimensions. Stacking MSC blocks to obtain deeper receptive fields for the Gen-C channels is the feature-extraction strategy of the entire model, which solves the bottleneck of crucial information loss without causing a parameter explosion.
2)
In addition, the proposed model addresses the drawback of MLP-Mixer's limited expressive ability through an encoder-decoder structure that effectively increases the model's embedding ability for deep semantic information. The multi-layer perceptron is simple yet efficient for passing variant features from one phase to the next, does not involve complex multiplication, and the model size is only 0.817M. Unlike a normal U-Net, where each step performs a single process, this component comprises three different steps, including one rotation and two MLP operations. The skip connection combines deep semantic information and spatial information. The U-block improves the discriminative performance across different feature types and for each class. A comparison with state-of-the-art techniques is shown in Fig. <ref>.
§ PROPOSED METHOD
We transformed the segmentation into a classification task in the proposed method, working on pixel-wise classification instead of patch segmentation.
Each pixel of the hyperspectral image is expanded into an image-like patch. Each row of this patch represents a different feature expression of the pixel generated by convolution, hereinafter referred to as Pixel-C.
Each column represents the generalization of the original pixel channel values, hereinafter referred to as Gen-C.
MSC mainly contains two parts: one in which MSC is applied to extract the features of Pixel-C, similar to a pooling operation, and one in which MSC is employed to mix Gen-C information.
For a hyperspectral image, its size is Ĩ^𝐇×𝐖×𝐂, where C is the number of spectral bands, and H and W are the image height and width, respectively.
A random sampling method shuffles all the pixels, excludes the background, and samples each pixel randomly. It is described as follows:
𝐈^𝐁× 1×𝐂=𝐒_𝐤( 𝐈̃^𝐁×𝐇×𝐖×𝐂)
S is a random function, and B represents the batch size. k denotes the sampling range, which determines whether it is the training or test set. On the whole, the input data is a pixel point rather than a patch.
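A minimal NumPy sketch of the sampling step described above is given below, assuming the background is labeled 0 in the ground truth; function and variable names are our own illustration.

```python
import numpy as np

def sample_pixels(image, labels, batch_size, rng=np.random.default_rng(0)):
    """Randomly sample `batch_size` labeled pixels (background label 0 excluded)
    from an H x W x C hyperspectral cube, returning a B x 1 x C array of spectra."""
    coords = np.argwhere(labels > 0)                  # exclude background pixels
    idx = rng.permutation(len(coords))[:batch_size]   # shuffle, then take a batch
    picked = coords[idx]
    spectra = image[picked[:, 0], picked[:, 1], :]    # B x C spectral vectors
    return spectra[:, None, :]                        # B x 1 x C
```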
In the MSC block, we employ two MLP layers to extract deeper semantic information expressing the core attributes of each pixel, which can be written as follows:
𝐗_*,i=Γ( 𝐖_2( σ( 𝐖_1τ( 𝐈_*,i) ) ) ),𝑓𝑜𝑟i=1…𝑐
Here c is the Pixel-C dimension and τ is LayerNorm. 𝐖_1 and 𝐖_2 are the weights of the two MLP layers, respectively. σ represents a nonlinearity (GELU), and Γ denotes the dropout function.
The Pixel-C dimension only embeds one feature of each pixel. We therefore employ a convolution operation with a 1×1 kernel, which extracts additional semantic features that embed various information to generalize each pixel's channels, named Gen-C. It is described by the following formula:
𝐔_*,j,*=τ( Conv( 𝐗_*,j,*) ),𝑓𝑜𝑟j=1…𝑐𝑙𝑠
The Gen-C of the pixel features 𝐔_*,j,* depends on the number of pixel classes 𝑐𝑙𝑠. The size of the convolution kernel is 1 × 1.
We adopt a convolutional layer with few parameters to generalize each pixel into a deeper semantic level.
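A simplified PyTorch sketch of the MSC block described by the two formulas above is shown below; the hidden width, dropout rate, and the omission of the explicit 2^n channel unification are our own simplifying assumptions rather than the exact configuration.

```python
import torch
import torch.nn as nn

class MSCBlock(nn.Module):
    """Sketch of the MSC block: two MLP layers mix the spectral (Pixel-C) features
    of each sampled pixel, then a 1x1 convolution generalizes the result into
    `cls` Gen-C feature rows."""
    def __init__(self, channels: int, cls: int, hidden: int = 256, p_drop: float = 0.1):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.GELU(),
            nn.Linear(hidden, channels),
            nn.Dropout(p_drop),
        )
        self.gen_c = nn.Conv1d(1, cls, kernel_size=1)  # 1x1 conv over the spectral axis
        self.out_norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1, C), one spectral vector per sampled pixel
        x = self.mlp(self.norm(x))          # per-pixel spectral mixing (first formula)
        x = self.out_norm(self.gen_c(x))    # expand to (B, cls, C) Gen-C rows (second formula)
        return x

# Example: a 103-band PaviaU pixel with 9 classes gives an (8, 9, 103) patch-like output
msc = MSCBlock(channels=103, cls=9)
out = msc(torch.randn(8, 1, 103))
```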
The UMLP block is composed of the MixerBlock, U-shape interpretation, and skip connection modules.
By stacking MSC layers to obtain a larger receptive field, the input (𝐔_*,j,*) carries global feature information in both the Pixel-C and Gen-C dimensions.
The MixerBlock module mixes semantic information in two directions through two MLP layers and then extracts Pixel-C dimension features through one MLP (reduced dimension), as shown following:
𝐗^1_*,j,i=τ(Γ𝐖_*,j,c^𝐺𝑒𝑛-CΓσ(𝐖̃_*,c,iU_*,c,i))^Pixel-C_∑⊆R^layer
We focus on high cohesion and low coupling of channel features, mixing Pixel-C with 𝐖_*,j,c and Gen-C with 𝐖_*,c,i.
For the U-shape interpretation module, the value of i is halved with each encoder layer, and we employ a three-layer encoder to extract more features for each pixel. A decoder with skip connections then mixes spatial position information with the encoder features.
𝐗^e_*,c,k = τ( 𝐖_*,c,k( 𝐗^1_*,c,i ) ), k = i/2 or i×2,
𝐗^d_*,c,* = τ( 𝐗^e_*,c,* + 𝐗^d̃_*,c,* ), d̃ = t.
where 𝐖_*,c,k scales the Pixel-C dimension to embed deeper semantic information. 𝐗^d_*,c,* represents the result of the skip connection with location information and has the same dimension as 𝐗^d̃_*,c,*.
In the UMLP block, we also employed stacking UMLP layers to obtain a deeper receptive field.
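The following PyTorch sketch illustrates one possible reading of the MixerBlock and the three-level U-shape interpretation with skip connections; the nonlinearity and exact layer dimensions are assumptions made for illustration rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixerBlock(nn.Module):
    """Sketch of the two-direction mixing: Gen-C axis first, then Pixel-C axis."""
    def __init__(self, cls: int, channels: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.mix_gen = nn.Linear(cls, cls)            # mixes along the Gen-C axis
        self.norm2 = nn.LayerNorm(channels)
        self.mix_pix = nn.Linear(channels, channels)  # mixes along the Pixel-C axis

    def forward(self, x):
        # x: (B, cls, C)
        y = self.norm1(x).transpose(1, 2)             # (B, C, cls)
        x = x + self.mix_gen(y).transpose(1, 2)       # Gen-C mixing with residual
        x = x + self.mix_pix(self.norm2(x))           # Pixel-C mixing with residual
        return x

class UMLP(nn.Module):
    """Sketch of the U-shape interpretation: the Pixel-C width is halved at each
    encoder layer and restored in the decoder, with skip connections added back."""
    def __init__(self, channels: int, depth: int = 3):
        super().__init__()
        dims = [channels // (2 ** d) for d in range(depth + 1)]
        self.enc = nn.ModuleList([nn.Linear(dims[d], dims[d + 1]) for d in range(depth)])
        self.dec = nn.ModuleList([nn.Linear(dims[d + 1], dims[d]) for d in reversed(range(depth))])

    def forward(self, x):
        skips = []
        for layer in self.enc:
            skips.append(x)
            x = F.gelu(layer(x))
        for layer in self.dec:
            x = F.gelu(layer(x)) + skips.pop()        # skip connection mixes encoder features
        return x

# Example: mix and refine the (B, cls, C) output of the MSC block
x = torch.randn(8, 9, 103)
y = UMLP(channels=103)(MixerBlock(cls=9, channels=103)(x))
```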
§ EXPERIMENTS
§.§ Data description and details
In this experiment, we chose three widely used HSI datasets to evaluate the proposed method's performance: Pavia University (PaviaU), Houston 2013, and Houston 2018.
The PaviaU is a hyperspectral image obtained by scanning the Italian city of Pavia in 2003.
Houston 2013 contains 144 spectral bands and covers a spatial region of 349 × 1905 pixels with a spatial resolution of 2.5m per pixel.
Houston 2018 covers 380-1050 nm spectral range with 48 bands at a 1-m GSD. For Houston 2018, the pixel size of the ground truth is 0.5.
Hence, we used the nearest neighbour algorithm to convert the pixel size to 1, corresponding to the training dataset.
We adopted random sampling to reasonably split the training dataset, validation dataset and test dataset.
All experiments are conducted on a Tesla V100 with 32GB RAM.
The learning rate is 0.0002, the weight decay rate is 8e-7, and the learning rate decay function is LambdaLR.
§.§ Result
In this letter, we employed five metrics, namely OA, AA, Kappa, std and p-value, to evaluate the model's performance on the three well-known datasets.
Table <ref> and Table <ref> show that the proposed model outperforms all comparison methods.
Moreover, the visualization of the classification maps is shown in Fig. <ref>.
From Table <ref>, the overall accuracies of the proposed model and MLP-Mixer are close, with differences of only 0.7%, 2%, and 0.08% on Houston 2018, Houston 2013, and PaviaU, respectively. Furthermore, the proposed model also reached the level of significance.
From Table <ref>, the average accuracy is always lower than the overall accuracy due to the number of sample pixels. The proposed model outperforms state-of-the-art methods across the board, improving by 6.61% over CAGU, 5.47% over MLP-Mixer, 14.17% over OTVCA, and 14.64% over HT-CNN-Attention on the Houston 2018 dataset.
Analyzing these results, the proposed model obtains a solid baseline (the MSC module) through the rotating hybrid perceptron operation. Compared with existing models that exhibit a certain bottleneck, such as HT-CNN-Attention, it extracts a hybrid representation of various features from different angles. Meanwhile, the U-shape module interprets the features from the MSC phase and uncovers the inherent logical relationships between the pixel channels. This is why the proposed model is superior to such existing models. We conducted ablation experiments on the Houston 2018 dataset, as shown in Table <ref>. The results demonstrate that our model is robust.
In order to ensure the robustness of the model, we calculated the std and p-value. The standard deviation of each dataset is less than 0.02 while reaching the significance level.
§ CONCLUSION
In this letter, we propose an efficient and effective structure whose parameter scale is only 0.817M and which outperforms state-of-the-art methods across the board.
Convolutional neural networks and attention-based structures can achieve outstanding performance, but neither of them is necessary.
Not surprisingly, we demonstrate that the proposed model is superior to state-of-the-art methods in both effectiveness and efficiency, which illustrates the superiority of our approach.
§ ACKNOWLEDGEMENT
We would like to thank the National Center for Airborne Laser Mapping and the Hyperspectral Image Analysis Laboratory at the University of Houston, Prof. Paolo Gamba from the Telecommunications and Remote Sensing Laboratory, Pavia University for acquiring and providing the data used in this study and thank for the IEEE GRSS Image Analysis and Data Fusion Technical Committee.
|
http://arxiv.org/abs/2307.02754v1
|
20230706032611
|
Intent-driven Intelligent Control and Orchestration in O-RAN Via Hierarchical Reinforcement Learning
|
[
"Md Arafat Habib",
"Hao Zhou",
"Pedro Enrique Iturria-Rivera",
"Medhat Elsayed",
"Majid Bavand",
"Raimundas Gaigalas",
"Yigit Ozcan",
"Melike Erol-Kantarci"
] |
cs.NI
|
[
"cs.NI"
] |
Intent-driven Intelligent Control and Orchestration in O-RAN Via Hierarchical Reinforcement Learning
Md Arafat Habib[1], Hao Zhou[1], Pedro Enrique Iturria-Rivera[1], Medhat Elsayed[2], Majid Bavand[2],
Raimundas Gaigalas[2], Yigit Ozcan[2] and Melike Erol-Kantarci[1], Senior Member, IEEE
[1]School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, Canada [2]Ericsson Inc., Ottawa, Canada
Emails:{mhabi050, hzhou098, pitur008, melike.erolkantarci}@uottawa.ca,
{medhat.elsayed, majid.bavand, raimundas.gaigalas, yigit.ozcan}@ericsson.com
August 1, 2023
Accepted by the 20th IEEE International Conference on Mobile Ad-Hoc and Smart Systems (MASS 2023). © 2023 IEEE.
rApps and xApps need to be controlled and orchestrated well in the open radio access network (O-RAN) so that they can deliver guaranteed network performance in a complex multi-vendor environment.
This paper proposes a novel intent-driven intelligent control and orchestration scheme based on hierarchical reinforcement learning (HRL). The proposed scheme can orchestrate multiple rApps or xApps according to the operator's intent of optimizing certain key performance indicators (KPIs), such as throughput, energy efficiency, and latency. Specifically, we propose a bi-level architecture with a meta-controller and a controller. The meta-controller provides the target performance in terms of KPIs, while the controller performs xApp orchestration at the lower level. Our simulation results show that the proposed HRL-based intent-driven xApp orchestration mechanism achieves 7.5% and 21.4% increases in average system throughput with respect to two baselines, i.e., a single xApp baseline and a non-machine learning-based algorithm, respectively. Similarly, 17.3% and 37.9% increases in energy efficiency are observed in comparison to the same baselines.
O-RAN, rApps, xApp, hierarchical reinforcement learning, orchestration
§ INTRODUCTION
Open radio access network (O-RAN) facilitates openness and intelligence to support diverse traffic types and their requirements in 5G and beyond networks <cit.>, as well as multi-vendor RAN deployments. In a multi-vendor environment, rApps and xApps can be hosted in a non-real-time RAN intelligent controller (non-RT-RIC) and near-real-time RAN intelligent controller (near-RT-RIC). In the literature, xApps and rApps have been studied for resource and power allocation, beamforming and management, cell sleeping, traffic steering, and so on <cit.>. Advanced reinforcement learning (RL) algorithms can be used to develop intelligent network functions in O-RAN. However, a multi-rApp or a multi-xApp scenario with a variety of AI-enabled Apps will require intelligent control and orchestration among the Apps to avoid performance degradation.
Note that, we focus on xApps as a case study but our work generalizes to rApps as well. To elevate autonomy in O-RAN via xApp orchestration, intent-driven network optimization goals can play a pivotal role. The intent is defined as an optimization goal that is a high-level command given by the operator usually in plain language and it determines a key performance indicator (KPI) that the network should meet, such as “increase throughput by 10%” or “increase energy efficiency by 5%” <cit.>. To better support autonomous orchestration of the xApps in a multi-vendor environment, emphasis on operators' intents is crucial <cit.>. Intents aid in achieving agile, flexible, and simplified configuration of the wireless networks with minimum possible intervention. Furthermore, intelligent intent-driven management has the ability to constantly acquire knowledge and adjust to changing network conditions by utilizing extensive real-time network data. The inclusion of intent-driven goals for intelligent xApp control and orchestration is a promising yet highly complex task, since there are multiple vendors involved with different network functions and intents may trigger conflicting optimization goals in sub-systems. There are a few works on conflict mitigation or xApp cohabitation in O-RAN. For instance, Han et al. propose a conflict mitigation scheme among multiple xApps using team learning <cit.>, and Polese et al. propose a machine learning (ML)-based pipeline for the cohabitation of multiple xApps in an O-RAN environment <cit.>. The work outlined in <cit.> introduces a method for achieving automation throughout the entire life cycle of xApps, beginning with the utilization scenario, requirements, design, verification, and ultimately, the deployment within networks. However, the operator intent is not involved in these works.
To this end, we propose a hierarchical reinforcement learning (HRL) method for intent-driven xApp orchestration. Different from the previous works, the proposed scheme has a bi-level architecture, where we can pass the intents to the top-level hierarchy, and process it as optimization goals for the lower-level controller to control and orchestrate xApps. Orchestration can avoid xApp conflicts and improve performance by combining xApps with similar performance objectives. The proposed method is compared with two baselines: non-machine learning (non-ML) solution and a single xApp scenario. Our simulation results show that the proposed HRL-based intent-driven xApp orchestration mechanism achieves 7.5% and 21.4% increase in average system throughput along with 17.3% and 37.9% increase in energy efficiency, compared to the single xApp and non-ML baselines, respectively.
The rest of the paper is organized as follows: Section <ref> discusses the related works, followed by Section <ref> which presents the system model elaborately. The proposed HRL-based xApp orchestration in O-RAN is covered in Section <ref>. Performance analysis and comparison of the proposed method along with the baselines are presented in Section <ref>. Lastly, we present our conclusions in Section <ref>.
§ RELATED WORK
There are a few works that investigate ML-based xApps for RAN optimization and control. Polese et al. propose an ML pipeline for multiple xApps in an O-RAN environment <cit.>. Han et al. propose a conflict mitigation scheme among deep reinforcement learning (DRL)-based power allocation and resource allocation xApps <cit.>. Polese et al. propose an Orchest-RAN scheme, in which network operators can specify high-level control objectives in non-RT-RIC to sort out the optimal set of data-driven algorithms to fulfill the provided intent <cit.>. While the work presented in <cit.> focuses on selecting the appropriate machine learning models and their execution locations for given inputs from the operator, it does not put emphasis on the network operator's goals as optimization objectives to select and orchestrate xApps.
An intent-driven orchestration of cognitive autonomous networks of RAN management is presented in <cit.>, where the authors propose a generic design of intent-based management for controlling RAN parameters and KPIs. Zhang et al. propose an intent conflict resolution scheme to realize conflict avoidance in machine learning-based xApps <cit.>. A graph-based solution is proposed in <cit.> to determine the specific network function required to fulfill an intent.
Compared with existing literature, the main contribution of this work is that we propose an HRL scheme for intent-driven orchestration of xApps. The HRL scheme can well fit the inherent O-RAN hierarchy with non-RT-RIC and near-RT-RIC, and intent-based orchestration enables higher flexibility for network control and management. The intents from the human level operator are provided as goals for the system to achieve, which leads to the orchestration of xApps to achieve the provided goal.
§ SYSTEM MODEL
§.§ System Model
We consider an O-RAN-based downlink orthogonal frequency division multiplexing cellular system having B BSs serving U users simultaneously, where multiple small cells in the system are within the range of a macro cell. There are K classes of traffic in the system, and users are connected with multiple RATs via dual connectivity. There are Q classes of RATs (q_1, q_2, ..., q_Q), where q represents a certain access technology (LTE, 5G, etc.). The wireless system model considered in this work is presented in Fig. <ref>. RIC platforms in the figure (non and near-RT-RIC) can host rApps and xApps, which are control and optimization applications operating at different time scales.
We design three xApps, namely traffic steering, cell sleeping, and beam forming xApps. In each xApp, we apply deep reinforcement learning for optimization within this xApp, which will be introduced in the following.
§.§.§ Traffic Steering xApp
The traffic steering xApp aims to achieve a simultaneous balance of QoS requirements for various traffic classes by introducing a traffic steering scheme based on Deep Q-Network (DQN) <cit.>. We design the reward and state functions to ensure satisfactory performance, focusing on two essential KPIs: network delay and average system throughput. Traffic can be steered to a certain BS based on load experienced, link quality, and traffic type. The details of this xApp can be found in <cit.>.
§.§.§ Cell Sleeping xApp
The cell sleeping xApp is designed to reduce power consumption in the system by turning off idle or less busy BSs. The xApp can perform cell sleeping based on traffic load ratios and queue length of each BS. The energy consumption model for the BS is:
P_in = P_0 + δ_p P_out,  if 0 < P_out ≤ P_max,
P_in = P_sleep,  if P_out = 0,
where P_0 is the fixed power consumption, δ_p is the slope of load-dependent power consumption, P_out is the transmission power, P_max is the maximum transmission power, and P_sleep is the constant power consumption in sleep mode <cit.>.
The goal of the cell sleeping xApp is to maximize energy efficiency as much as possible without overloading the active BSs. The optimization goal is formulated as follows:
max_P_b ∑_u∈U_o∑_b∈ B T_u,b/P_b-θ b_u,
s.t. (<ref>),
where U_o is the set of the user equipments (UEs) connected to a certain BS, T represents the throughput, θ is the penalty factor to reduce overloading, and b_u is the number of the BSs overloaded. Turning off the BSs can greatly decrease power consumption. It reduces the number of BSs active that are serving the live network traffic. This poses a risk of overloading the active BSs. Therefore, the penalty factor related to the number of BSs has been introduced to avoid excessive overloading.
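For concreteness, a short Python sketch of the power consumption model and the cell-sleeping reward defined above follows; the numeric constants and the per-BS aggregation are illustrative assumptions rather than values used in our simulations.

```python
# Illustrative power-model constants in Watts (assumed, not from our simulations)
P0, DELTA_P, P_SLEEP = 130.0, 4.7, 75.0

def bs_input_power(p_out: float) -> float:
    """Load-dependent input power of a BS, or sleep power when it is off."""
    return P_SLEEP if p_out == 0 else P0 + DELTA_P * p_out

def cell_sleeping_reward(throughputs, tx_powers, n_overloaded, theta=1.0):
    """Sum of per-BS energy-efficiency terms minus a penalty for overloaded BSs.
    `throughputs[b]` is the aggregate throughput of the UEs served by BS b."""
    efficiency = sum(t / bs_input_power(p) for t, p in zip(throughputs, tx_powers) if p > 0)
    return efficiency - theta * n_overloaded

# Example: two active BSs, one sleeping BS, and one overloaded BS penalised
r = cell_sleeping_reward(throughputs=[120e6, 80e6, 0.0],
                         tx_powers=[30.0, 20.0, 0.0],
                         n_overloaded=1)
```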
To address the formulated problem, the following MDP is formulated:
* State: The state set consists of S={q_L, L_R}, where q_L is the queue length of the BSs, representing the load level, and L_R represents the traffic load ratio of a BS b.
* Action: Turning the BSs on and off are put into the action set for the DQN implementation. A={ON, OFF}.
* Reward: The reward function is the same as eq. (<ref>).
§.§.§ Beamforming xApp
The third xApp is the beamforming xApp.
We deploy band-switching BSs from 3.5 GHz to mmWave frequencies <cit.>. This allows us to support high-throughput traffic like enhanced mobile broadband (eMBB) via accurate intelligent beamforming. This xApp can control power based on the location of the UE, and it uses the minimum transmission power needed, which is energy efficient. The xApp employs analog beamforming, and a multi-antenna setup is adopted where each BS deploys a uniform linear array (ULA) of M antennas <cit.>.
Every BS l has a transmit power P_TX,l∈ P, where P is the set of candidate transmit powers. We want to optimize two parameters: throughput and energy efficiency using this xApp. To obtain such a goal, the following optimization problem is addressed.
max∑_l∈{1,2,..,L}[c_1(T_k,b/T_QoS) + c_2(ε/ε_max)],
s.t. P_TX,l[t] ∈ P,
f_l[t]∈ F,
where T_k,b is the throughput achieved by the system, T_QoS is the defined throughput requirement for a traffic type k, ε represents the energy efficiency associated with the BS throughput and transmission power, ε_max is the maximum theoretical energy efficiency, and c_1 and c_2 are the weight factors.
To solve the formulated problem, the following MDP is defined.
* State: UE co-ordinates are used as set of states, S={C_UE1, C_UE2,...,C_UEN}.
* Action: The action set consists of two elements: A={α(χ_n),δ_n}. Here, χ is the steering angle, and α(χ_n) is the array steering vector in the direction χ_n of the n-th element in the codebook. δ_n accounts for the power level change.
* Reward: The reward function is the same as eq. (<ref>) as presented before.
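A minimal Python sketch of the beamforming reward defined above is given below; the weight values c_1 = c_2 = 0.5 and the example numbers are illustrative assumptions.

```python
def beamforming_reward(throughput, t_qos, energy_eff, ee_max, c1=0.5, c2=0.5):
    """Weighted sum of normalized throughput and normalized energy efficiency
    for one BS; c1 and c2 are the weight factors."""
    return c1 * (throughput / t_qos) + c2 * (energy_eff / ee_max)

def total_reward(per_bs_metrics, c1=0.5, c2=0.5):
    """Sum the per-BS terms over all L BSs."""
    return sum(beamforming_reward(t, tq, ee, eem, c1, c2)
               for t, tq, ee, eem in per_bs_metrics)

# Example with two BSs: (throughput, QoS target, energy efficiency, max efficiency)
r = total_reward([(95e6, 100e6, 0.8, 1.0), (60e6, 50e6, 0.6, 1.0)])
```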
§ PROPOSED HRL-BASED XAPP ORCHESTRATION SCHEME
RL problems can be formulated as MDPs where we have a set of states, actions, transition probabilities, and a reward function (S,A,T,R). The RL agent in HRL consists of two controllers: a meta-controller and a controller <cit.>. The MDP for HRL has an added element which is denoted as a set of goals (G). Depending on the current state, the meta-controller is responsible for generating high-level goals (G=g_1,g_2,...,g_n) for the controller. After that, these goals are transformed into high-level policies. The controller chooses a low-level action a according to the high-level policies. This process from the controller yields an intrinsic reward (r_in). Finally, an extrinsic reward (r_ex) is given to the meta-controller from the environment, and it provides the controller with a new goal (g'). This section will discuss the xApp orchestration scheme via HRL.
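As a schematic illustration of this bi-level interaction, the following Python sketch shows how a meta-controller and a controller could interact over an episode; the environment and controller interfaces are assumed for illustration and are not the exact implementation used in our simulations.

```python
def hrl_episode(env, meta_controller, controller, steps_per_goal=10):
    """Schematic hierarchical loop: the meta-controller (non-RT-RIC rApp) picks a
    KPI goal g from the current state; the controller (near-RT-RIC) selects xApp
    actions toward g and receives intrinsic rewards; the accumulated extrinsic
    reward updates the meta-controller. Replay buffers and the DQN update rules
    are assumed to exist inside the controllers and are omitted for brevity."""
    state = env.reset()
    done = False
    while not done:
        goal = meta_controller.select_goal(state)           # e.g. "+10% throughput"
        s0, extrinsic = state, 0.0
        for _ in range(steps_per_goal):
            action = controller.select_action(state, goal)  # pick an xApp (combination)
            next_state, r_in, r_ex, done = env.step(action)
            controller.store(state, goal, action, r_in, next_state)
            extrinsic += r_ex
            state = next_state
            if done:
                break
        meta_controller.store(s0, goal, extrinsic, state)
        controller.learn()
        meta_controller.learn()
```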
§.§ xApp Coordination Using HRL
The proposed O-RAN-based system architecture is presented in Fig. <ref>. RIC platforms can host rApps and xApps, which are applications operating at different time scales. Three xApps have been defined in the previous sections. The rApp in the figure works as an input panel for the network operator, converting these inputs into goals to be optimized; it also works as the meta-controller in the non-RT-RIC.
Let us assume that X is the set of xApps and that Y is a subset of X with at least one element (an xApp in our case) that can optimize the network performance based on the operator input. Let I be the set of candidate KPIs that an xApp can optimize and Z be the set of QoS requirements the system has to satisfy. Under these assumptions, the xApp orchestration problem that we want to address can be formulated as follows:
max∑_i∈ I∑_z∈ Z(P_i-ρξ_z),
s.t. ∀(X)∃(Z): V(O)=1,
where P_i is the performance metric that the operator intends to improve, ρ is the penalty parameter for QoS requirement violations, and ξ_z is the number of UEs whose QoS requirements are violated. Lastly, V(O) is the proposition that “an xApp can improve a performance metric”, which is either ‘0’ or ‘1’.
As presented in Fig. <ref>, the rApp in the system is directly connected to the user panel where the operator may provide input to the system. The operator input is provided as the percentage of the increase related to a certain KPI. For example, x% for throughput increase or y% for energy efficiency increase or any other intent stated in natural language. The rApp has a hierarchical deep Q-learning (h-DQN) framework <cit.>. The meta-controller (in non-RT-RIC) takes the increased amount of throughput or increased amount of energy efficiency as a goal, observes the state in the environment and provides both the goal and states to the controller in near-RT-RIC having a bundle of xApps. This type of data passing is done via the A1 interface by which both the non and near-RT-RIC are connected. The controller takes the action of choosing an xApp or a set of xApps based on the provided state and goal. Following, we define the MDP for the meta-controller and controller to address the xApp orchestration problem formulated in eq. (<ref>).
* State: The set of states consists of traffic flow types of different users in the network. UEs having similar traffic types are grouped together. S={T_voice,..,T_urllc,...,T_gaming,...,T_eMBB,..}. Elements in this set stand for five different traffic types in the system. Both meta-controller and controller share the same states.
* Action: xApp or combination of xApp selection is considered as actions to be performed by the controller which is defined as: {A_xApp1, A_xApp1,2,...., A_xAppN}.
* Intrinsic reward: The intrinsic reward function (r_in) for the controller is: r_in=P_i-ρξ_z which is similar to eq. (<ref>).
* Goal for the controller: Increased throughput or increased energy efficiency level that can satisfy operator intent is passed to the controller as goals. It is G={tp_1, tp_2,..., tp_n} for throughput increasing intents or G={ee_1, ee_2,..., ee_n} for energy efficiency increasing intents. Note that these goals can be generalized to other KPIs however for simplicity we target throughput and energy efficiency.
* Extrinsic reward: The meta-controller is responsible for the overall performance of the system. Therefore, we set the extrinsic reward function for the meta-controller to the objective of the problem formulation presented in eq. (<ref>). The following equation is the average of the intrinsic reward over n controller steps.
r_ex= 1/n∑_τ=1^n r_in,τ, ∀ u∈ U, ∀ b∈ B,
The whole process of xApp orchestration can be summarized as follows:
* Step 1: Operator's intent is provided as input regarding which performance metric is to be improved.
* Step 2: These performance targets are provided to the controller in near-RT-RIC by the meta controller rApp in the non-RT-RIC as goals to achieve.
* Step 3: The controller selects an xApp or a combination of xApps to reach the target performance as close as possible. The system learns based on the reward it gets for such kind of xApp selection.
* Step 4: Selected xApps with their own DRL-based functionalities optimize the performance of the network as a response to the intent of the operator.
§.§ Baseline Algorithms
This section describes two baselines. The first baseline is a simulation of the same network scenario, based on the system model presented so far, in which no intelligent DRL-based xApp optimizes the network and only non-ML algorithms are used. To compare throughput performance against the proposed HRL-based system, we use the threshold-based traffic steering scheme proposed in <cit.>. The threshold is determined from the load at each BS, the channel condition, and the user service type: the mean of these metrics gives the threshold value T, and their weighted summation gives a variable w. Traffic is then steered to another BS based on a comparison of w and T. This baseline does not include cell sleeping, so the BSs are always on. In the second baseline, we consider single-xApp scenarios; for example, the proposed HRL-based xApp orchestration mechanism is compared with scenarios where only the traffic steering xApp or only the cell sleeping xApp is in action.
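A compact sketch of the threshold-based baseline decision is shown below; the metric normalization and the weight values are assumptions made for illustration, not the exact settings of the cited scheme.

import numpy as np

def steering_decision(load, channel_quality, service_class, weights=(0.4, 0.4, 0.2)):
    # Threshold T is the mean of the (normalized) metrics; w is their weighted summation.
    metrics = np.array([load, channel_quality, service_class], dtype=float)
    T = metrics.mean()
    w = float(np.dot(weights, metrics))
    return w > T  # True -> steer the traffic to another BS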
§ PERFORMANCE EVALUATION
§.§ Simulation setup
A MATLAB-based simulation environment has been developed with one eNB and four gNBs serving as one macro-cell and four small cells. In total, we deploy 60 UEs with five different traffic types: voice, gaming, video, URLLC, and eMBB. Different traffic types have varying requirements in terms of different KPIs. The QoS requirements of the different traffic types have been defined based on our previous work <cit.>; the eMBB and URLLC traffic types have been added to test the system's compatibility. For the eMBB traffic type, we consider the packet size, T_QoS, and D_QoS to be 1500 bytes, 100 Mbps, and 15 ms, respectively <cit.>. Lastly, the packet size and delay specifications for the URLLC traffic are set to 32 bytes and 2.5 ms.
The simulation environment operates in 5G NSA mode, in which different RATs (LTE and 5G NR) work together. We deploy an architecture based on <cit.>. The carrier frequency for LTE is set to 800 MHz. For the 5G NR small cells, band-switching BSs are deployed at 3.5 GHz and 30 GHz. The BS transmission power for LTE and 5G NR is set to 38 dBm and 43 dBm, respectively <cit.>.
For the HRL implementation, the initial learning rate is set to 0.95. In order to maintain stable learning performance, we reduce the learning rate periodically after a certain number of episodes. Additionally, the discount factor used is 0.3. The simulation is run 10 times using MATLAB, and the average outcomes are presented along with a 95% confidence interval.
§.§ Simulation results
Before evaluating the performance of the proposed xApp orchestration scheme, we first present how the intent-oriented HRL-based orchestration scheme works. Fig. <ref> shows that the operator intent of “increase throughput” leads to the selection of certain xApps. When the operator issues a 5% throughput increase intent, throughput rises sharply after a few time slots because xApp1 (the traffic steering xApp) has been invoked. When a 5% increase is given as input again, a combination of xApp1 and xApp3 (the intelligent beamforming xApp) is selected. When the operator provides an intent to decrease power consumption by 5%, Fig. <ref> shows a sharp decrease in throughput: xApp1 and xApp3 are terminated at the 461st time slot and xApp2 (the cell sleeping xApp) is invoked.
Fig. <ref> presents a similar graph to the previous one, but this time it plots energy efficiency along the time axis. When the operator issues the intent “10% increase in energy efficiency”, xApp2 is initiated at the 131st time slot. This xApp performs cell sleeping and saves energy. For the next energy-efficiency increase intent given by the operator, both xApp2 and xApp3 work together. The proposed HRL-based algorithm has successfully orchestrated these two xApps for the desired performance gain. Figs. <ref> and <ref> demonstrate the utility of the proposed system: not only can it take operator intent as an optimization goal, but it can also orchestrate xApps to obtain the desired performance output by using the proper combination of xApps.
Fig. <ref> shows the performance comparison between the proposed HRL-based xApp orchestration scheme and the baseline scenarios in terms of average system throughput. Results are obtained under a constant load of 6 Mbps. The proposed orchestration scheme achieves a 21.4% increase and 7.5% increase in average system throughput compared to the non-ML algorithm and single xApp scenario (traffic steering xApp), respectively. It is because of the efficient orchestration mechanism that involves multiple xApps that trigger the optimal combination of xApps to reach better performance based on the operator intent.
Fig. <ref> shows the performance comparison between the proposed HRL-based xApp orchestration scheme and the baseline scenarios in terms of average energy efficiency. The proposed orchestration scheme obtains a 17.3% increase and a 37.9% increase in average energy efficiency compared to the single xApp scenario (cell sleeping xApp) and the non-ML scenario, respectively. As before, this is because of the HRL-based orchestration mechanism that incorporates multiple xApps to achieve better performance based on the operator intent. Also, note that we use traffic steering in the former figure and cell sleeping in this evaluation because they specifically optimize throughput and energy, respectively.
§ CONCLUSIONS
In this paper, we have shown that the HRL-based intent-driven orchestration mechanism is highly effective not only in optimizing KPIs but also in providing great flexibility and control to the operator. We have introduced a novel HRL-based xApp orchestration mechanism that performs xApp management and recommends the best combination of xApps given the operator's intent. The optimal xApp orchestration scheme has led to a 7.5% increase in average system throughput and a 17.3% increase in energy efficiency compared to single-xApp usage with no orchestration. In future work, we plan to extend this orchestration to rApps and other xApps with complex KPI interactions.
§ ACKNOWLEDGEMENT
This work has been supported by MITACS and Ericsson Canada, and NSERC Canada Research Chairs and NSERC Collaborative Research and Training Experience Program (CREATE) under Grant 497981.
|
http://arxiv.org/abs/2307.00879v1
|
20230703091931
|
Geometric renormalization of weighted networks
|
[
"Muhua Zheng",
"Guillermo García-Pérez",
"Marián Boguñá",
"M. Ángeles Serrano"
] |
physics.soc-ph
|
[
"physics.soc-ph",
"cond-mat.stat-mech"
] |
School of Physics and Electronic Engineering, Jiangsu University, Zhenjiang, Jiangsu, 212013, China
Algorithmiq Ltd, Kanavakatu 3 C, FI-00160 Helsinki, Finland
Departament de Física de la Matèria Condensada, Universitat de Barcelona, Martí i Franquès 1, E-08028 Barcelona, Spain
Universitat de Barcelona Institute of Complex Systems (UBICS), Universitat de Barcelona, Barcelona, Spain
[][email protected]
Departament de Física de la Matèria Condensada, Universitat de Barcelona, Martí i Franquès 1, E-08028 Barcelona, Spain
Universitat de Barcelona Institute of Complex Systems (UBICS), Universitat de Barcelona, Barcelona, Spain
ICREA, Passeig Lluís Companys 23, E-08010 Barcelona, Spain
The geometric renormalization technique for complex networks has successfully revealed the multiscale self-similarity of real network topologies and can be applied to generate replicas at different length scales. In this letter, we extend the geometric renormalization framework to weighted networks, where the intensities of the interactions play a crucial role in their structural organization and function. Our findings demonstrate that weights in real networks exhibit multiscale self-similarity under a renormalization protocol that selects the connections with the maximum weight across increasingly longer length scales. We present a theory that elucidates this symmetry, and that sustains the selection of the maximum weight as a meaningful procedure. Based on our results, scaled-down replicas of weighted networks can be straightforwardly derived, facilitating the investigation of various size-dependent phenomena in downstream applications.
Geometric renormalization of weighted networks
M. Ángeles Serrano
August 1, 2023
==============================================
Renormalization of real networks <cit.> can be performed on a geometric framework <cit.> by virtue of the discovery that their structure is underlain by a latent hyperbolic geometry <cit.>. Distances between nodes in this space determine the likelihood of connections via a universal law that operates at all scales and encodes simultaneously short- and long-range connections. This geometric principle has been able to explain many features of real networks, including the small-world property, scale-free degree distributions, and high levels of clustering, as well as fundamental mechanisms such as preferential attachment in growing networks <cit.>, and the emergence of communities <cit.>. It has also led to embedding techniques that produce geometric representations of complex network from their topologies <cit.>.
Weights in real complex networks <cit.> are also amenable to modeling within the hyperbolic network geometry paradigm. More specifically, the weighted geometric soft configuration model (W𝕊^D) <cit.> captures the non-trivial coupling between network topology and weights, allowing for accurate reproduction of both the unweighted and the weighted structure of real networks. However, the geometric renormalization (GR) method only applies to unweighted networks. By applying coarse graining and rescaling steps to unfold an unweighted network map into a sequence of scaled-down layers over progressively longer length scales, GR revealed multiscale self-similarity to be a ubiquitous symmetry in real networks <cit.>. This raises the question whether GR can be generalized to weighted networks as well and whether self-similarity would be preserved in that case.
Adding to GR, the geometric renormalization of weights (GRW) should produce the multiscale unfolding of a network into a shell of weighted scaled-down layers that preserve the weighted structure of the network in the flow. Here, we propose a theory for the renormalization of weighted networks that supports the selection of the maximum, or supremum, as an effective approximation to allocate weights in the renormalized layers of real networks. Our theory is sustained by the renormalizability of the W𝕊^D model, which entails that the GRW transformation should be a rescaled p-norm on the set of weights to be renormalized.
Alternatively, the GR technique was recently extended to weighted networks using an ad hoc approach that treats weights as currents or resistances in a parallel circuit—renormalizing by the sum of the weights or by the inverse of the sum of their inverses, respectively <cit.>. The two methods are recovered as particular limits of our theory.
To begin with, we provide evidence that self-similarity is a pervasive symmetry not only in the multiscale organization of real network topologies but also in the multiscale unfolding of their weights.
To that end, we implement a GRW transformation, which requires the preliminary application of the GR technique to unweighted networks <cit.>.
The GR technique operates on the geometric embedding of a network, as described in previous works <cit.>, obtained by maximizing the likelihood that the network topology is generated by the geometric soft configuration model 𝕊^D <cit.>. In this model, nodes are assigned coordinates representing popularity and similarity dimensions, and distances between them determine the probability of connection p_ij=1/(1+χ_ij^β), where χ_ij=
d_ij/(μκ_iκ_j)^1/D. Parameter μ controls the average degree, and β>D controls the level of clustering and quantifies the level of coupling between the network topology and the geometry. The hidden degree κ_i of node i ∈ [1,N]—equivalent to a radial coordinate in the hyperbolic plane in the purely geometric formulation of the model, named ℍ^D+1 <cit.>—measures the popularity of the node, with higher values indicating a greater likelihood of connecting to other nodes. In D=1, the similarity subspace is represented as a circle of radius R = N / (2 π) with unit density. Each node i is assigned an angular coordinate θ_i in the circle, and angular distances d_ij = R Δθ_ij between pairs of nodes account for factors other than degrees that influence the tendency to form connections. Nodes closer in the similarity subspace have a higher likelihood of being connected. Hyperbolic embeddings of unweighted networks can be obtained using the Mercator mapping tool <cit.>, which employs statistical inference techniques to identify the hidden degrees and angular coordinates while adjusting parameters β and μ accordingly.
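As a point of reference, a minimal (and deliberately naive, O(N^2)) sketch of sampling a D=1 geometric soft configuration network from given hidden variables could look as follows; the expression used for μ and all names are assumptions for illustration, not the Mercator implementation.

import numpy as np

def s1_adjacency(kappa, theta, beta, mean_degree, rng=None):
    # Sample an S^1 (D=1) network: connect i,j with p = 1/(1 + chi^beta),
    # chi = R*dtheta / (mu * kappa_i * kappa_j), with nodes on a circle of radius R = N/(2*pi).
    if rng is None:
        rng = np.random.default_rng()
    N = len(kappa)
    R = N / (2 * np.pi)
    mu = beta * np.sin(np.pi / beta) / (2 * np.pi * mean_degree)  # assumed normalization of mu
    A = np.zeros((N, N), dtype=int)
    for i in range(N):
        for j in range(i + 1, N):
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))  # angular separation
            chi = R * dtheta / (mu * kappa[i] * kappa[j])
            if rng.random() < 1.0 / (1.0 + chi ** beta):
                A[i, j] = A[j, i] = 1
    return A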
Once the geometric map of a real network is generated, GR divides the similarity circle into non-overlapping blocks of consecutive nodes of size r. These blocks are then coarse grained forming supernodes in a new layer. Each supernode is positioned within the angular region defined by the corresponding block, preserving the order of nodes. Any links between nodes in one supernode and nodes in another are renormalized into a single link connecting the two supernodes. This way, GR eliminates short-range couplings and produces a new network topology that is self-similar to the original except for the average degree, which
increases in the renormalization flow <cit.>.
The GRW technique involves assigning intensities to the links in the new layer based on the weights in the original layer, following a specific prescription. This transformation can be iterated starting from the original network at layer l=0, with the iteration bounded to approximately l_max∝log(N) steps due to the finite size of real networks. As a result, a sequence of self-similar network layers l—each r times smaller than the original one—is produced forming a multiscale weighted shell of the original network. The process is visually depicted in Fig. S1 of the Supplemental Material (SM). The crux of GRW lies in how the weights are renormalized
to ensure that their characteristics, such as global and local weight distributions and the relationship between strength and degree, are preserved throughout the renormalization flow.
An effective and simple prescription, referred to as sup-GRW, is to define the weight of the link between two supernodes as the maximum, or supremum, of the weights in the existing links between their constituent nodes in the original layer. We applied the sup-GRW technique to 12 different real weighted networks from different domains including biology, transportation, knowledge, and social systems. The networks were processed using blocks of size r = 2. Additional details can be found in the SM.
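A minimal sketch of one sup-GRW step is given below (illustrative names only; the actual pipeline operates on the embedding produced by Mercator and also updates the supernode coordinates as described above).

import numpy as np

def sup_grw_step(W, theta, r=2):
    # One sup-GRW layer: coarse-grain blocks of r angularly consecutive nodes and keep
    # the maximum weight among the links joining two supernodes.
    # W is a symmetric numpy weight matrix; W[i, j] = 0 means no link.
    order = np.argsort(theta)                      # consecutive nodes on the similarity circle
    n_super = len(order) // r
    blocks = [order[b * r:(b + 1) * r] for b in range(n_super)]
    W_new = np.zeros((n_super, n_super))
    for a in range(n_super):
        for b in range(a + 1, n_super):
            w_max = W[np.ix_(blocks[a], blocks[b])].max()
            W_new[a, b] = W_new[b, a] = w_max      # supremum prescription
    return W_new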
The behavior of the weights in the renormalization flow of two of the networks is shown in Fig. <ref>(a)-(d), while Figs. S2-S5 present the corresponding results for the remaining networks. The strength-degree relations are shown in Fig. S5. The probability density functions (pdf) of weights and strengths in the different layers collapse once rescaled by the average weight and average strength, respectively, in the corresponding layer. Furthermore, the power-law relations between strength and degree also overlap once the degrees are rescaled by the average degree of the layer, as demonstrated in Figs. S4 and S5. To quantify the local heterogeneity of the weights, we measured their disparity around nodes as a function of the degree, as described in the Methods section of the SM. The results show, again, statistical invariance across layers.
Notice that, by construction, the average weight and the average strength in the sup-GRW layers grow with l. While this behavior does not provide fundamental information for characterizing the weighted structure of the network, it may still be interesting to understand how ⟨ w ⟩ and ⟨ s ⟩ depend on the scale of observation l. This is particularly relevant considering that weights in real networks are often expressed in real-world units. The corresponding results are presented in Figs. S6 and S7. Furthermore, the sup-GRW transformation exhibits a semigroup structure with respect to composition, similar to the behavior observed in GR for unweighted networks. This means that a certain number of iterations with a given coarse-graining factor are equivalent to a single transformation with a higher coarse-graining factor. The findings shown in Fig. S8 support this claim.
We also tested an alternative prescription, referred as sum-GRW, where weights in the new layer are assigned by summing the weights of existing links between the nodes in supernodes, following the prescription described in Ref. <cit.>. While this strategy proves effective for many real networks, there are certain cases in which self-similarity is not maintained in the renormalization flow. When sum-GRW is applied, the global distribution of weights, the local heterogeneity of weights in nodes, and the relation between strength and degree become increasingly heterogeneous compared to the original graph. This is observed in the Openflights and the scientific collaboration network, as illustrated in Fig. <ref>(e)-(h), and in Figs. S9 and S10 for the remaining networks.
The reported results are supported by a theoretical framework that clarifies the conditions under which each of the two weight assignment prescriptions, selecting the
supremum of weights between supernodes or their sum, yields good performance. Our theory is based on the W𝕊^D model <cit.>, that uses the 𝕊^D model to mimic the topology of real networks. In the W𝕊^D model, weights are assigned to connections between two connected nodes i and j as follows:
ω_ij = ϵ_ij ν σ_i σ_j / [ ( κ_i κ_j )^{1 - α/D} d_ij^{α} ].
Similar to k̅_i ∝κ_i in the 𝕊^D model, the W𝕊^D model ensures that the expected strength of node i, s̅_i, is proportional to the hidden strength σ_i, s̅_i ∝σ_i. When α=0, the weights are independent of the underlying geometry and primarily influenced by node degrees, while α=D implies that weights are maximally coupled to the underlying metric space with no direct contribution of the degrees. Finally, ϵ_ij is a random variable with mean equal to one and the variance of which regulates the level of noise in the network. In the subsequent analysis, we assume the noiseless version of the model to simplify analytical calculations, which means ϵ_ij=1 ∀ (i,j).
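For D=1 and in the noiseless case, the weight assignment above can be sketched numerically as follows; this is illustrative only, the function and argument names are our own, and the adjacency matrix A is assumed to have been generated beforehand with the 𝕊^1 model.

import numpy as np

def ws1_weights(A, kappa, sigma, theta, nu, alpha):
    # Noiseless W-S^1 weights on an existing adjacency matrix A (D = 1):
    # w_ij = nu * sigma_i * sigma_j / ((kappa_i * kappa_j)^(1 - alpha) * d_ij^alpha).
    N = len(kappa)
    R = N / (2 * np.pi)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            if A[i, j]:
                d = R * (np.pi - abs(np.pi - abs(theta[i] - theta[j])))
                W[i, j] = W[j, i] = nu * sigma[i] * sigma[j] / (
                    (kappa[i] * kappa[j]) ** (1 - alpha) * d ** alpha)
    return W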
To control the correlation between strength and degree and, consequently, adjust the strength distribution, we assume a deterministic relation between hidden variables σ and κ of the form σ = a κ^η, yielding s(k) ∼ ak^η as observed in real complex
networks. Working under this assumption, a valid GRW transformation should preserve the relation between strength and degree, and in particular the exponent η, meaning that the renormalized hidden degree and strength should satisfy σ' = a' (κ')^η (to simplify notation, we use primes to denote quantities in the renormalized layer). Using Eq. (<ref>) and the GR equations for the topological model <cit.>, this requirement leads to the following expression for the renormalized weights
ω'_ij = C[ ∑_e=1^r^2( w_mn)_e^ϕ]^1/ϕ ,
where the sum runs over the links between the constituent nodes of supernodes i and j (see the SM for the derivation). Parameter ϕ≡β/D(η - 1) + α depends on both the weighted and unweighted structure of the network, and C=ν'/ν( a'/a )^2 r^α/D. In practice, however, we rescale weights by the average weight in each layer, rendering the constant C irrelevant.
According to the weighted model, for a network with a specific value of ϕ, the GRW transformation of weights Eq. (<ref>), denoted as ϕ-GRW, preserves the exponent η that characterizes the relation between strength and degree. At the same time, since the distribution of hidden degrees is assumed to be preserved by GR, the distribution of hidden strengths and the distribution of weights are also preserved. This is valid as long as β > (γ - 1) / 2. Otherwise, the power-law distribution of hidden degrees looses its self-similarity in the unweighted renormalization flow and this breaks the self-similarity of weights. Also, note that the ϕ-GRW transformation has semigroup structure with respect to the composition, regardless of the value of ϕ.
We validated the self-similarity of the ϕ-GRW transformation in the real and synthetic networks, Figs. S11-S12 and Figs. S14-S17, respectively, including its semigroup property. In all cases, the self-similar behavior of the distribution of weights and strengths, and the power-law relation between strength and degrees in the renormalization flow is clear across length scales, which validates our analytic calculations.
Notice that the transformation in Eq. (<ref>) is a ϕ-norm, which is a generalization of the Euclidean norm. As ϕ increases, the ϕ-norm becomes progressively dominated by the supremum of the terms w_mn in Eq. (<ref>) . In fact, the sup-GRW prescription is recovered in the limit ϕ=∞ of ϕ-GRW. In addition, renormalizing by the sum is equivalent to setting ϕ=1, and the renormalization of weights by the inverse of the sum of inverse values corresponds to ϕ=-1.
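A direct numerical transcription of the ϕ-GRW aggregation with C = 1, together with its limiting cases, is sketched below (illustrative names only; weights are assumed strictly positive, as they are for existing links).

import numpy as np

def phi_grw(weights, phi):
    # phi-GRW aggregation of the coarse-grainable weights (C = 1): w' = (sum_e w_e^phi)^(1/phi);
    # phi -> infinity recovers the supremum, phi = 1 the sum, and phi = -1 the inverse of the
    # sum of inverses.
    w = np.asarray(weights, dtype=float)
    if np.isinf(phi):
        return w.max()
    return np.sum(w ** phi) ** (1.0 / phi)

# Limiting cases on a toy set of weights.
w = [0.5, 2.0, 8.0]
print(phi_grw(w, 1), phi_grw(w, -1), phi_grw(w, 50), max(w))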
To clarify the efficacy of approximating ϕ-GRW as sup-GRW, we checked the asymptotic behavior of the ϕ-norm as a function of the number of elements E in the set of coarse-grainable weights and of the level of heterogeneity in the weights, see section VI in SM for more details. Figure <ref> shows the result of applying the supremum and the sum prescriptions as compared with renormalizing weights using ϕ-GRW in two of the real networks analyzed in this letter, see Figs. S20 and S21 for the rest. In synthetic networks, we simulated weights using a distribution p(ω_mn)∼ω_mn ^-δ, where δ allowed us to tune the level of heterogeneity, and produced sets of weights that were renormalized using Eq. (<ref>) with C=1 and different values of ϕ. We also renormalized the same sets using the alternative sum and supremum prescriptions, the results are shown in Figs. S18 and S19.
In heterogeneous networks with a markedly scale-free character of the weight distribution, very small deviations from the supremum are observed, and they occur primarily for very low values of ϕ and low weight values. As the number of elements E increases and the distribution of weights becomes more homogeneous, these deviations progressively become larger. As expected, higher values of ϕ reduce the discrepancy between the ϕ-norm and the supremum estimator. Nevertheless, across
a wide range of parameter values, which encompass those for realistic networks, there is generally a good agreement between the ϕ-norm and the selection of the supremum, with any existing deviations being quite minor. While for some empirical weight distributions, sup-GRW and sum-GRW yield the same renormalized weights, e.g., the JCN in Figs. S20 and S21, it is important to note that, in general, the relation between hidden strength and hidden degree is not preserved under sum-GRW. See Methods section in SM for more details.
The preservation of the relation σ = a κ^η allows us to approximate analytically the flow of the average strength from the flow of the average degree. In GR, the average degree changes from layer to layer approximately as ⟨ k ⟩^(l+1)=r^ξ⟨ k ⟩^(l), with a scaling factor ξ depending on the connectivity structure of the original network <cit.>. Combining this with Eq. (<ref>) and imposing that the rescaling constant of weights C does not change in the flow, we obtain
⟨σ' ⟩= ⟨σ⟩ r^ψ, ψ=(α/D+2 η - 1)ξ - α/D,
which, due to the proportionality between observed and hidden strength, implies that the flow of the average observed strength follows the same scaling.
Therefore, in D=1, the strength increases with a scaling factor that depends on the exponent η, on the coupling α between topology and geometry, and on the scaling factor ξ for the flow of the average degree, see Methods in SM for details. This leads to an analytic approximation for the growth of the average strength as a function of the average degree
⟨ s ⟩^(l)=⟨ s ⟩^(0)( ⟨ k ⟩^(l)/⟨ s ⟩^(0)) ^ψ/ξ,
which agrees with the measurements in synthetic networks where the average weight may increase, stay flat, or decrease in the flow as shown in Fig. <ref>.
All together, our results suggest that sup-GRW is a good approximation for real networks and offers certain advantages over ϕ-GRW. One advantage is that it avoids the need to estimate parameters that capture the coupling between the weighted structure of the network and the underlying geometry, which can be challenging in practice. Sup-GRW is equivalent to setting ϕ=∞ and, due to the nature of the transformation, it is effectively reached for relatively low values of ϕ. In addition, renormalizing by the sum is equivalent to setting ϕ=1, which in general does not preserve the exponent η of the relation between σ and κ, see the Methods section in SM for analytical calculations.
Beyond theoretical considerations, the practical application of GRW extends to the generation of scaled-down replicas of weighted networks. These replicas can serve as
valuable testbeds for evaluating the scalability of computationally intensive protocols or studying processes where the size of a real network plays a role.
The generation of a scaled-down replica involves obtaining a reduced version of the topology, as described in Ref. <cit.>, and subsequently rescaling the weights in the renormalized network layer to match the level of the original network. The detailed procedure can be found in the SM, and the results for the scaled-down replicas of real weighted networks are presented in Figs. S25-S28.
In summary, the extension of the geometric renormalization framework to weighted networks demonstrates that multiscale self-similarity characterizes not only the
topology but also the weighted structure of real networks, provided the appropriate renormalization scheme is applied. Moreover, the weights in these networks result from processes that determine the intensities of interactions, and our findings suggest that these processes follow the same underlying principles across different length scales. Notably, the transformation implied by the theory is closely approximated by using the maximum weight prescription, a highly effective approach that can be readily applied to real networks despite the presence of significant noise affecting their weights. This observation justifies our confidence that noise will not fundamentally alter the qualitative results reported in this study.
The present work represents a significant step towards establishing a comprehensive framework for the renormalization of network structure and opens up possibilities for renormalizing dynamical processes on real networks. In future research, it will be essential to incorporate not only the topology of connections and their weights but also their directionality, which is crucial in many real-world processes.
We thank Elisenda Ortiz for helpful discussions. M.Z. acknowledges support from National Natural
Science Foundation of China (Grants No. 12005079), the Natural Science Foundation of Jiangsu Province (Grant No. BK20220511),
the funding for Scientific Research Startup of Jiangsu University (Grant No. 4111710001), and Jiangsu Specially-Appointed Professor Program. M. A. S and M. B. acknowledge support from the Agencia Estatal de Investigación project number PID2019-106290GB-C22 funded by MCIN/AEI/10.13039/501100011033; Generalitat de Catalunya grant number 2021SGR00856. M. B. acknowledges support from the ICREA Academia award, funded by the Generalitat de Catalunya.
[García-Pérez et al. (2018a)] G. García-Pérez, M. Boguñá, and M. Á. Serrano, Nature Physics 14, 583 (2018).
[Zheng et al. (2020)] M. Zheng, A. Allard, P. Hagmann, Y. Alemán-Gómez, and M. Á. Serrano, Proceedings of the National Academy of Sciences 117, 20244 (2020).
[Garuccio et al. (2020)] E. Garuccio, M. Lalli, and D. Garlaschelli, arXiv preprint arXiv:2009.11024 (2020).
[Villegas et al. (2023)] P. Villegas, T. Gili, G. Caldarelli, and A. Gabrielli, Nature Physics 19, 445 (2023).
[Boguñá et al. (2021)] M. Boguñá, I. Bonamassa, M. D. Domenico, S. Havlin, D. Krioukov, and M. A. Serrano, Nature Reviews Physics 3, 114 (2021).
[Serrano and Boguñá (2022)] M. A. Serrano and M. Boguñá, The Shortest Path to Network Geometry: A Practical Guide to Basic Models and Applications, Elements in Structure and Dynamics of Complex Networks (Cambridge University Press, 2022).
[Papadopoulos et al. (2012)] F. Papadopoulos, M. Kitsak, M. Á. Serrano, M. Boguñá, and D. Krioukov, Nature 489, 537 (2012).
[García-Pérez et al. (2018b)] G. García-Pérez, M. Á. Serrano, and M. Boguñá, Journal of Statistical Physics 173, 775 (2018).
[Zuev et al. (2015)] K. Zuev, M. Boguñá, G. Bianconi, and D. Krioukov, Scientific Reports 5, 9421 (2015).
[Boguñá et al. (2010)] M. Boguñá, F. Papadopoulos, and D. Krioukov, Nature Communications 1, 62 (2010).
[Papadopoulos et al. (2015)] F. Papadopoulos, R. Aldecoa, and D. Krioukov, Phys. Rev. E 92, 022807 (2015).
[Muscoloni et al. (2017)] A. Muscoloni, J. M. Thomas, S. Ciucci, G. Bianconi, and C. V. Cannistraci, Nat. Commun. 8, 1615 (2017).
[Blasius et al. (2018)] T. Blasius, T. Friedrich, A. Krohmer, and S. Laue, IEEE/ACM Trans. Netw. 26, 920 (2018).
[García-Pérez et al. (2019)] G. García-Pérez, A. Allard, M. Á. Serrano, and M. Boguñá, New Journal of Physics 21, 123033 (2019).
[Barrat et al. (2004)] A. Barrat, M. Barthelemy, R. Pastor-Satorras, and A. Vespignani, Proceedings of the National Academy of Sciences 101, 3747 (2004).
[Newman (2004)] M. E. Newman, Physical Review E 70, 056131 (2004).
[Serrano et al. (2006)] M. Á. Serrano, M. Boguñá, and R. Pastor-Satorras, Physical Review E 74, 055101 (2006).
[Mastrandrea et al. (2014)] R. Mastrandrea, T. Squartini, G. Fagiolo, and D. Garlaschelli, New Journal of Physics 16, 043022 (2014).
[Menichetti et al. (2014)] G. Menichetti, D. Remondini, P. Panzarasa, R. J. Mondragón, and G. Bianconi, PLoS ONE 9, e97857 (2014).
[Allard et al. (2017)] A. Allard, M. Á. Serrano, G. García-Pérez, and M. Boguñá, Nature Communications 8, 14103 (2017).
[Chen et al. (2022)] D. Chen, H. Su, and Z. Zeng, IEEE Transactions on Computational Social Systems, 1 (2022).
[Serrano et al. (2008)] M. Á. Serrano, D. Krioukov, and M. Boguñá, Physical Review Letters 100, 078701 (2008).
[Krioukov et al. (2009)] D. Krioukov, F. Papadopoulos, A. Vahdat, and M. Boguñá, Physical Review E 80, 035101(R) (2009).
Supplemental Material for
Geometric renormalization of weighted networks
Muhua Zheng^1, Guillermo García-Pérez^2, Marián Boguñá^3,4 & M. Ángeles Serrano^3,4,5*
^1 School of Physics and Electronic Engineering, Jiangsu University, Zhenjiang, Jiangsu, 212013, China
^2Algorithmiq Ltd, Kanavakatu 3 C, FI-00160 Helsinki, Finland
^3Departament de Física de la Matèria Condensada,
Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona, Spain
^4Universitat de Barcelona Institute of Complex Systems (UBICS),
Universitat de Barcelona, Barcelona, Spain
^5ICREA, Pg. Lluís Companys 23, E-08010 Barcelona, Spain
*Correspondence and requests for materials should be addressed to M.A.S. ([email protected])
§ ILLUSTRATION OF THE GEOMETRIC RENORMALIZATION TRANSFORMATION METHOD FOR WEIGHTED NETWORKS
§ METHODS
§.§ Description of empirical data sets
* Cargo ships.
The international network of global cargo ship movements consists of the
number of shipping journeys between pairs of major commercial ports in the world
in 2007 <cit.>.
* E. coli.
Weights in the metabolic network of the bacteria E. coli K-12 MG1655 consist
of the number of different metabolic reactions in which two metabolites
participate <cit.>.
* US commute.
The commuting network reflects the daily flow of commuters between counties
in the United States in 2000 <cit.>.
* Facebook like Social Network(Facebook).
The Facebook-like Social Network originates from an online community for students at the University of California, Irvine, in the period between April and October 2004 <cit.>. In
this network, the nodes are students and ties are established when online messages are exchanged between the students. The weight of a directed tie is defined as the number of messages sent from one student to another. We discard the directions for any link and preserve the weight ω_ij with the sum of bidirectional messages, i.e., ω_ij=ω_i → j+ω_j→ i. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
* Collaboration.
This is the co-authorship network based on preprints posted to the Condensed Matter section of the arXiv E-Print Archive between 1995 and 1999 <cit.>. Authors are identified with nodes, and an edge exists between two scientists if they have coauthored at least one paper. The weights count the number of joint papers. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
* Openflights.
Network of flights among all commercial airports in the world, in 2010,
derived from the Openflights.org database <cit.>. Nodes represent the airports. The weights in this network refer to the number of routes between two airports. We discard the directions for any link and preserve the weight ω_ij with the sum of bidirectional weights, i.e., ω_ij=ω_i → j+ω_j→ i. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
* Journal Citation Network (JCN). The citation networks from 1900 to 2013 were reconstructed from data on citations between scientific articles extracted from the Thomson Reuters Citation Index <cit.>. A node corresponds to a journal with publications in the given time period. An edge is connected from journal i to journal j if an article in journal i cites an article in journal j, and the weight of this link is taken to be the number of such citations. In this work, we use undirected and weighted networks generated from 3 different time windows, 2008-2013, 1985-1990 and 1965-1975. The data are obtained from Ref. <cit.>.
* New Zealand Collaboration Network (NZCN). This is a network of scientific collaborations among institutions in New Zealand. Nodes are
institutions (universities, organizations, etc.) and edges represent collaborations between them. In particular, two nodes i, j are connected if Scopus lists at least one publication with authors at institutions i and j, in the period 2010-2015. The weights of edges record the number of such collaborations. The data are obtained from Ref. <cit.>. Notice that we only consider the giant connected component of the undirected and weighted networks in this paper.
* Poppy and foxglove hypocotyl cellular interaction networks.
These networks capture global cellular connectivity within the hypocotyl (embryonic stem) of poppy and foxglove. Nodes represent cells and edges are their physical associations in 3D space. Edges are weighted by the size of shared intercellular interfaces, and nodes annotated with cell type. The data are obtained from Ref. <cit.>.
Network statistics can be found in Table S1.
§.§ Network embedding to produce geometric network maps
We embed each considered network into hyperbolic space using the algorithm introduced in Ref. <cit.>, named Mercator. Mercator takes the network adjacency matrix A_ij (A_ij=A_ji=1 if there is a link between nodes i and
j, and A_ij=A_ji=0 otherwise) as input and then returns inferred hidden degrees, angular positions of nodes and global model parameters. More precisely, the hyperbolic maps were inferred by finding the hidden degree and angular position of each node, {κ_i} and {θ_i}, that maximize the likelihood ℒ that the structure of the network was generated by the 𝕊^1 model, where
ℒ = ∏_i<j[ p_ij]^A_ij[ 1 - p_ij]^1 - A_ij ,
and p_ij=1/(1+χ_ij^β) is the connection probability.
§.§ The definition of disparity
The disparity of nodes. The disparity quantifies the local heterogeneity of the weights attached to a given node i and is defined as
Y(k_i)=∑_j ( ω_ij / s_i )^2, with s_i = ∑_j ω_ij the strength of node i,
where ω_ij is the weight of the link between node i and its neighbor j. From this definition, we see that the disparity scales as Y ∼ k_i^-1, whenever the weights are roughly homogeneously distributed among the
links. Conversely, a disparity that decreases more slowly than k_i^-1 implies that
the weights are heterogeneous and that the large strength of a node is due to a handful
of links with large weights.
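A short numerical transcription of this definition is given below (illustrative names; the degree-binned average Y(k) is left to the caller).

import numpy as np

def disparity(W):
    # Disparity Y_i = sum_j (w_ij / s_i)^2 for every node with at least one link.
    W = np.asarray(W, dtype=float)
    strengths = W.sum(axis=1)
    degrees = (W > 0).sum(axis=1)
    Y = np.full(len(W), np.nan)
    connected = strengths > 0
    Y[connected] = ((W[connected] / strengths[connected, None]) ** 2).sum(axis=1)
    return degrees, Y   # average Y over nodes with the same k to obtain Y(k)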
§.§ Theoretical derivation of the renormalized weights
Under GR, the hidden variables of supernodes in the resulting layer, κ^' and θ^', are calculated as a function of the hidden variables of the constituent nodes as
κ'=[ ∑_j=1^r(κ_j)^β] ^1/βand θ'=[ ∑_j=1^r(θ_jκ_j)^β/∑_j=1^r(κ_j)^β] ^1/β.
The expressions above and Eq. (1) in main text altogether imply that the renormalized weight should be
ω'_ij = ν' σ'_i σ'_j/( κ'_i κ'_j )^1 - α'/D d_ij^'α' = ν' (a')^2 d_ij^'-α'( κ'_i κ'_j )^η - 1 + α'/D
= ϵ'_ijν' (a')^2 d_ij^'-α'[ ( κ'_i κ'_j )^β/D]^D(η - 1) + α'/β
= ν' (a')^2 d_ij^'-α'[ ∑_e=1^r^2( κ_m κ_n )_e^β/D]^D(η - 1) + α'/β
= ν' (a')^2 d_ij^'-α'[ ∑_e=1^r^2( w_mn/ν a^2 d_mn^-α)_e^β/D(η - 1) + α]^D(η - 1) + α'/β.
In the last step, we have assumed that, for every pair of nodes (m, n), we can obtain the product κ_m κ_n from the corresponding weight ω_mn, which is not true in general, as some links might not exist. However, this should be a reasonable approximation, since it only misses the smallest products of hidden degrees. Now, the above transformation cannot be performed without the precise distances in the embedding, as it depends on d_mn, but recalling that d_mn = R Δθ_mn, where Δθ_mn stands for the angular separation between the nodes, and the fact that all such distances are approximately equal to the angular separation between the supernodes to which the nodes belong (Δθ_mn≈Δθ'_ij), we can see that fixing α' = α will remove all dependency on the distance,
ω'_ij = ν' (a')^2 d_ij^'-α[ ∑_e=1^r^2( w_mn/ν a^2 d_mn^-α)_e^β/D(η - 1) + α]^D(η - 1) + α/β
= ν'/ν( a'/a)^2 ( R Δθ'_ij/R' Δθ'_ij)^α[ ∑_e=1^r^2( w_mn)_e^β/D(η - 1) + α]^D(η - 1) + α/β
= ν'/ν( a'/a)^2 r^α/D[ ∑_e=1^r^2( w_mn)_e^β/D(η - 1) + α]^D(η - 1) + α/β,
where we have used that R' = R/r^1/D.
Finally, we can choose any appropriate relation between primed and unprimed global parameters leading to
ω'_ij = C[ ∑_e=1^r^2( w_mn)_e^ϕ]^1/ϕ,
with ϕ≡β/D(η - 1) + α and C=ν'/ν( a'/a)^2 r^α/D. Therefore, the weighted model predicts that the exponent η characterizing the relation between strength and degree is preserved in the renormalized network if weights are transformed following Eq. (<ref>) (in the noiseless case) and the value of ϕ that corresponds to the considered network is used.
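For completeness, the hidden-variable aggregation of the supernodes used in the derivation above can be sketched as follows (illustrative names only; θ is assumed to lie in [0, 2π)).

import numpy as np

def supernode_hidden_variables(kappa_block, theta_block, beta):
    # Hidden degree and angular coordinate of a supernode formed by r constituent nodes.
    kappa_block = np.asarray(kappa_block, dtype=float)
    theta_block = np.asarray(theta_block, dtype=float)
    kappa_new = np.sum(kappa_block ** beta) ** (1.0 / beta)
    theta_new = (np.sum((theta_block * kappa_block) ** beta)
                 / np.sum(kappa_block ** beta)) ** (1.0 / beta)
    return kappa_new, theta_new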
§.§ Theoretical derivation of the flow of the average strength
We start from Eq. (<ref>) (D=1) and impose that the rescaling variable
C=ν'/ν( a'/a)^2 r^α
is constant in the flow such that the transformation of weights keeps the same units in all scales of observation. The transformation of the relation between hidden strength and hidden degree is
a'/a=⟨σ⟩'/⟨σ⟩⟨κ^η⟩/⟨κ^η⟩'.
We can also obtain the transformation of the free parameter ν using its expression from <cit.> and the expression for the parameter μ,
ν=Γ(1/2)/(2π^1/2 μ^{1-α} I_2 I_3 ⟨σ⟩) and μ=Γ(1/2)/(2π^1/2 I_1 ⟨ k ⟩),
which leads to
ν'/ν=⟨σ⟩/⟨σ⟩'(⟨ k ⟩'/⟨ k ⟩)^1-α
and therefore to
C=⟨σ⟩'/⟨σ⟩(⟨κ^η⟩/⟨κ^η⟩')^2 r^ξ(1-α)+α,
where we have used the expression for the flow of the average degree. We use ⟨κ^η⟩=(γ-1)/(γ-1-η) κ_0^η (for η<γ-1) to compute its flow, and we obtain
⟨κ^η⟩'/⟨κ^η⟩=r^ξη.
Finally,
C=⟨σ⟩'/⟨σ⟩r^ξ(1-2η-α)+α=⟨σ⟩'/⟨σ⟩r^-ψ,
and we impose C=1 to obtain
⟨σ' ⟩= ⟨σ⟩ r^ψ, ψ=(α/D+2 η - 1)ξ - α/D,
from which ψ>0 implies an increasing average strength in the flow while it decreases if ψ<0.
§.§ The transformation sum-GRW does not preserve the relation between strength and degree
The sum-GRW transformation is
w'_ij =∑_e=1^r^2ϵ_mn w_mn
=ν d_mn^-α∑_e=1^r^2ϵ_mnσ_m σ_n ( κ_m κ_n )^α/D-1,
where e runs over all pairs of nodes (m, n) with m in supernode i and n in supernode j and d_mn = R Δθ_mn, where Δθ_mn stands for the angular separation between the nodes. All such distances are approximately equal to the angular separation between the supernodes to which the nodes belong (Δθ_mn≈Δθ'_ij), and one can take α' = α. Comparing Eq. (1) in main text and (<ref>), we can write
ν' d_ij^'-α'=ν d_ij^-α,
ϵ_ij' σ_i' σ_j' ( κ_i' κ_j' )^α'/D-1 =∑_e=1^r^2ϵ_mnσ_m σ_n ( κ_m κ_n )^α/D-1,
and using R' = R/r^1/D and Eq. (<ref>) altogether, we have
ν'=ν r^-α/D.
Therefore, in the noiseless version (ϵ_mn = 1 ∀ (m,n)),
we can obtain the hidden strength σ_i' in the supernodes layer as
σ_i' =∑_m=1^rσ_m κ_m ^α/D-1/κ_i'^α/D-1≉κ_i'^η,
which proves that, in general, the relation between hidden strength and hidden degree is not preserved under sum-GRW.
§.§ Scale down replicas
* We obtain a renormalized network layer by applying the sup-GRW method with a given value of r and number of iterations to match the target network size.
* Typically, the average degree of the renormalized network layer is higher than the original one. Thus, to obtain a scaled down network replica of the topology, we decrease the average degree in the renormalized layer to that in the original network as explained in Ref. <cit.>, such that ⟨ k_new^(l)⟩=⟨ k^(0)⟩. The main idea is to reduce the value of μ^(l) to a new
one μ_new^(l), which means that the connection probability of every pair of nodes (i,j), p_ij^(l) decreases to p_ij, new^(l). Therefore, the probability for a link to exist in the pruned network reads:
p_ij,new^(l)=1/1+( d_ij/μ_new^(l)κ_iκ_j) ^β.
In particular, we prune the links using μ_new^(l)=h ⟨ k^(0)⟩/⟨ k^(l)⟩ μ^(l), starting from h=1. After iterating over all the links in the layer, we update h as h(1-0.1u) → h if ⟨ k_new^(l)⟩ > ⟨ k^(0)⟩, where u ∈ [0,1) is a random variable drawn from a uniform distribution, and as h(1+0.1u) → h if ⟨ k_new^(l)⟩ < ⟨ k^(0)⟩. The procedure stops when |⟨ k_new^(l)⟩ - ⟨ k^(0)⟩| falls below a given threshold, which we set to 0.1; a sketch of this pruning loop is given after the list.
* Finally, we rescale the weights in the resulting network by a global factor to match the average weight of the original network. Specifically, we calculate the average weight ⟨ w_new^(l)⟩ of the resulting network from step (2) and the average weight ⟨ w^(0)⟩ in the original network. Then we rescale the weight of each link by the factor c=⟨ w^(0)⟩/⟨ w_new^(l)⟩.
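The following is a rough transcription of the pruning loop of step (2) for D=1; all names are illustrative, and the link-thinning rule p_new/p_old is our reading of the procedure rather than the authors' code.

import numpy as np

def prune_layer(A, kappa, theta, beta, mu_l, k_target, rng=None, tol=0.1):
    # Keep each existing link of the renormalized layer with probability p_new/p_old,
    # adjusting the factor h until the pruned average degree matches k_target within tol.
    if rng is None:
        rng = np.random.default_rng()
    N = len(kappa)
    R = N / (2 * np.pi)
    links = [(i, j) for i in range(N) for j in range(i + 1, N) if A[i, j]]
    k_layer = 2 * len(links) / N
    h = 1.0
    while True:
        mu_new = h * (k_target / k_layer) * mu_l
        kept = []
        for i, j in links:
            d = R * (np.pi - abs(np.pi - abs(theta[i] - theta[j])))
            p_old = 1.0 / (1.0 + (d / (mu_l * kappa[i] * kappa[j])) ** beta)
            p_new = 1.0 / (1.0 + (d / (mu_new * kappa[i] * kappa[j])) ** beta)
            if rng.random() < min(1.0, p_new / p_old):
                kept.append((i, j))
        k_new = 2 * len(kept) / N
        if abs(k_new - k_target) < tol:
            return kept
        u = rng.random()
        h *= (1 - 0.1 * u) if k_new > k_target else (1 + 0.1 * u)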
§ RESULTS FOR SUP-GRW
§.§ sup-GRW in empirical data
§.§ Semigroup structure in sup-GRW transformation
The geometric renormalization transformation has Abelian semigroup structure with respect to the composition, meaning that a certain number of iterations of a given resolution are equivalent to a single transformation of higher resolution.
We validated the semigroup structure of the sup-GRW transformation with synthetic and empirical networks. Given an original network, we performed sup-GRW with r=2 and r=4, respectively. When the geometric renormalization transformations reach the same network size, we compare their network properties. Figure <ref> shows the results for a representative synthetic network.
§ RESULTS FOR SUM-GRW
§ RESULTS FOR Φ-GRW
§.§ ϕ-GRW in empirical data
§.§ ϕ-GRW in synthetic networks
§.§.§ Semigroup structure
§.§.§ The influence of α
§.§.§ The influence of η
§ THE ASYMPTOTIC BEHAVIOR OF THE Φ-NORM
To better understand the behavior of the ϕ-norm for different weight distributions, we perform the following test. We generate sampled weights ω_mn according to the weight distribution p(ω_mn)∼ω_mn^-δ; the smaller the value of the exponent δ, the more heterogeneous the distribution. With the sampled weights ω_mn at hand, we can renormalize the weight as:
ω'(ϕ)= [ ∑_e=1^E( ω_mn)^ϕ]^1/ϕ,
where E is the number of sampled weights combined into the new weight ω'.
The implementation process is as follows.
(1) We first generate a weight list w_list of sampled length N=20000 from the distribution p(ω_mn)∼ω_mn^-δ.
(2) We divide the weight list w_list into non-overlapping groups in sequence, where each group's size equals E. In other words, each group has E samples ω_mn. We then calculate ω'(ϕ) with Eq. (<ref>) in each group for different ϕ.
(3) We compare ω'(ϕ) with ω'(ϕ=1) and ω'(ϕ=∞). Note that sum-GRW corresponds to the case ϕ=1 while sup-GRW to ϕ=∞.
We have two ways to check the asymptotic behavior of the ϕ-norm in the empirical weight distributions. The first one is the same simple procedure used for the synthetic distributions: we only need to replace the weight list w_list with the empirical data. In this case, the samples ω_mn in each group are uncorrelated. However, in the ϕ-GRW process, the weights in the same group may be related through the coordinates of the sub-nodes m and n. Therefore, we implement a second way to check the asymptotic behavior.
(1) We implement the ϕ-GRW process for one layer. There may be E weighted links between the constituent nodes of two supernodes; note that when r=2, the number of links E can be 1, 2, 3 or 4. We therefore divide the empirical weights into groups, where each group's size equals E.
(2) We then calculate ω'(ϕ) with Eq. (<ref>) in each group for different ϕ.
(3) We compare ω'(ϕ) with ω'(ϕ=1) and ω'(ϕ=∞). Note that sum-GRW corresponds to the case ϕ=1 while sup-GRW to ϕ=∞.
In the end, we find that the results obtained with these two approaches are consistent. In this paper, we only show the results for the sets obtained following the coarse-graining (i.e., the second approach).
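A compact sketch of the synthetic version of this test is given below; the inverse-CDF sampling of the power law and the comparison ratios are our own illustrative choices.

import numpy as np

def sample_powerlaw(n, delta, w_min=1.0, rng=None):
    # Sample weights from p(w) ~ w^(-delta), w >= w_min, via the inverse-CDF method.
    if rng is None:
        rng = np.random.default_rng()
    u = rng.random(n)
    return w_min * (1 - u) ** (-1.0 / (delta - 1.0))

def compare_norms(delta=2.5, E=4, n_groups=5000, phi=3.0, rng=None):
    # Average ratio of the phi-norm and of the sum to the supremum, over groups of size E.
    if rng is None:
        rng = np.random.default_rng()
    w = sample_powerlaw(E * n_groups, delta, rng=rng).reshape(n_groups, E)
    w_phi = np.sum(w ** phi, axis=1) ** (1.0 / phi)
    w_sum = w.sum(axis=1)          # phi = 1 (sum-GRW)
    w_sup = w.max(axis=1)          # phi -> infinity (sup-GRW)
    return np.mean(w_phi / w_sup), np.mean(w_sum / w_sup)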
§.§.§ Synthetic weighted distributions
§.§.§ Empirical weighted distributions
§ RESULTS FOR RANDOM-GRW
A prescription selecting the weight between two supernodes at random from the coarse-grainable set would always result in the self-similarity of the distribution of weights if the selection set is supplemented with the links between nodes in the same supernode.
However, those links are coarse-grained in the renormalization process and the balance between the weights of links inside supernodes and of links between nodes in different supernodes dictates in which situations the random selection works. Experiments in synthetic networks, Figs. S22 and S23, prove that the heterogeneity of the distribution of weights favors a better self-similar scaling.
Decreasing the coupling of weights with topology and geometry in the W𝕊^D model produces more homogeneous distributions of weights,
which causes the loss of self-similarity in the flow, see SI Fig. S24 for results relative to the metabolic network of E. coli.
§.§ Random-GRW in synthetic network
§.§ Random-GRW in E. coli network
§ SCALED DOWN REPLICAS OF WEIGHTED NETWORKS WITH SUP-GRW
[Kaluza et al. (2010)] P. Kaluza, A. Kölzsch, M. T. Gastner, and B. Blasius, The complex network of global cargo ship movements, Journal of the Royal Society Interface 7, 1093 (2010).
[Serrano et al. (2012)] M. Á. Serrano, M. Boguñá, and F. Sagués, Uncovering the hidden geometry behind metabolic networks, Molecular Biosystems 8, 843 (2012).
[Orth et al. (2011)] J. D. Orth, T. M. Conrad, J. Na, J. A. Lerman, H. Nam, A. M. Feist, and B. Ø. Palsson, A comprehensive genome-scale reconstruction of Escherichia coli metabolism—2011, Molecular Systems Biology 7, 535 (2011).
[Grady et al. (2012)] D. Grady, C. Thiemann, and D. Brockmann, Robust classification of salient links in complex networks, Nature Communications 3, 1 (2012).
[Panzarasa et al. (2009)] P. Panzarasa, T. Opsahl, and K. M. Carley, Patterns and dynamics of users' behavior and interaction: Network analysis of an online community, Journal of the American Society for Information Science and Technology 60, 911 (2009).
[Opsahl and Panzarasa (2009)] T. Opsahl and P. Panzarasa, Clustering in weighted networks, Social Networks 31, 155 (2009).
[Newman (2001)] M. E. Newman, The structure of scientific collaboration networks, Proceedings of the National Academy of Sciences 98, 404 (2001).
[Opsahl (2011)] T. Opsahl, Why anchorage is not (that) important: Binary ties and sample selection, http://wp.me/poFcY-Vw (2011), accessed: 2022-3-1.
[Hric et al. (2018)] D. Hric, K. Kaski, and M. Kivelä, Stochastic block model reveals maps of citation patterns and their evolution in time, Journal of Informetrics 12, 757 (2018).
[Zheng et al. (2021)] M. Zheng, G. García-Pérez, M. Boguñá, and M. Á. Serrano, Scaling up real networks by geometric branching growth, Proceedings of the National Academy of Sciences 118 (2021).
[Aref et al. (2018)] S. Aref, D. Friggens, and S. Hendy, Analysing scientific collaborations of New Zealand institutions using Scopus bibliometric data, in Proceedings of the Australasian Computer Science Week Multiconference (2018), pp. 1–10.
[Jackson et al. (2017)] M. D. Jackson, H. Xu, S. Duran-Nebreda, P. Stamm, and G. W. Bassel, Topological analysis of multicellular complexity in the plant hypocotyl, eLife 6, e26023 (2017).
[García-Pérez et al. (2019)] G. García-Pérez, A. Allard, M. Á. Serrano, and M. Boguñá, Mercator: uncovering faithful hyperbolic embeddings of complex networks, New Journal of Physics 21, 123033 (2019).
[García-Pérez et al. (2018)] G. García-Pérez, M. Boguñá, and M. Á. Serrano, Multiscale unfolding of real networks by geometric renormalization, Nature Physics 14, 583 (2018).
[Allard et al. (2017)] A. Allard, M. Á. Serrano, G. García-Pérez, and M. Boguñá, The geometric nature of weights in real complex networks, Nature Communications 8, 14103 (2017).
|
http://arxiv.org/abs/2307.03310v1
|
20230706214901
|
Finding the Dynamics of an Integrable Quantum Many-Body System via Machine Learning
|
[
"Victor Wei",
"Alev Orfi",
"Felix Fehse",
"W. A. Coish"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.mes-hall"
] |
[email protected]
[email protected]
Department of Physics, McGill University, Montreal, QC, Canada
Institute for Quantum Computing, University of Waterloo, Waterloo, ON, Canada
Department of Physics and Astronomy, University of Waterloo, ON, Canada
[email protected]
[email protected]
Department of Physics, McGill University, Montreal, QC, Canada
We study the dynamics of the Gaudin magnet (“central-spin model”) using machine-learning methods. This model is of practical importance, e.g., for studying non-Markovian decoherence dynamics of a central spin interacting with a large bath of environmental spins and for studies of nonequilibrium superconductivity. The Gaudin magnet is also integrable, admitting many conserved quantities: For N spins, the model Hamiltonian can be written as the sum of N independent commuting operators. Despite this high degree of symmetry, a general closed-form analytic solution for the dynamics of this many-body problem remains elusive. Machine-learning methods may be well suited to exploiting the high degree of symmetry in integrable problems, even when an explicit analytic solution is not obvious. Motivated in part by this intuition, we use a neural-network representation (restricted Boltzmann machine) for each variational eigenstate of the model Hamiltonian. We then obtain accurate representations of the ground state and of the low-lying excited states of the Gaudin-magnet Hamiltonian through a variational Monte Carlo calculation. From the low-lying eigenstates, we find the non-perturbative dynamic transverse spin susceptibility, describing the linear response of a central spin to a time-varying transverse magnetic field in the presence of a spin bath. Having an efficient description of this susceptibility opens the door to improved characterization and quantum control procedures for qubits interacting with an environment of quantum two-level systems. These systems include electron-spin and hole-spin qubits interacting with environmental nuclear spins via hyperfine interactions or qubits with charge or flux degrees of freedom interacting with coherent charge or paramagnetic impurities.
Finding the Dynamics of an Integrable Quantum Many-Body System via Machine Learning
W. A. Coish
August 1, 2023
===================================================================================
§ INTRODUCTION
Predicting the dynamics of quantum many-body systems is crucial for understanding many important physical phenomena. For example, the dynamics of the Fermi-Hubbard model can advance our understanding of superconductivity and quantum magnetism in correlated materials <cit.>. However, brute-force numerical approaches such as exact diagonalization on a classical computer have an exponential cost in time and/or memory and can therefore only be used to simulate small quantum systems. To tackle the problem of quantum many-body simulation, a large-scale fault-tolerant quantum computer could be used to efficiently run quantum simulations with predictable bounded errors <cit.>. The hardware challenge behind building a useful quantum computer is, however, significant. General-purpose quantum simulation on a quantum computer may not be feasible until far in the future.
Despite the limitations of classical computers in simulating quantum systems, there are nevertheless many classical algorithms that are effective in special cases. A notable example is the variational Monte Carlo (VMC) method, which finds an approximate ground state upon minimizing the estimated energy with respect to a variational ansatz <cit.>. Recently, Carleo et al. have used a neural-network ansatz in VMC to calculate ground states of many-body Hamiltonians with a better accuracy than other existing methods <cit.>. A number of follow-up works on neural-network quantum states also strongly suggest that neural networks are a promising model for representing a large subset of quantum states and for approximating the ground states of many important many-body Hamiltonians <cit.>. Using this type of ansatz, it is also possible to leverage well-studied optimization strategies from the deep-learning community.
A number of recent works have extended neural-network methods from the realm of static properties to many-body quantum dynamics <cit.>. These have demonstrated success in some test cases (often where an analytic solution is already available), but they are unlikely to be successful for any arbitrary problem <cit.>. Finding a subclass of problems that have known practical applications, that are nontrivial (where no closed-form analytic solution is available), and that have a good chance of admitting an efficient neural-network solution is an important open question. Here, we apply machine-learning methods to determine the low-energy spectrum and non-perturbative low-frequency dynamic response for one such model: the Gaudin magnet (Figure <ref>). This model applies directly to the many-body decoherence dynamics of a “central-spin” problem (describing, e.g., an electron spin qubit interacting with a bath of nuclear spins) <cit.>. A linear combination of commuting Gaudin magnets can also result in the Bardeen-Cooper-Schrieffer (BCS) pairing Hamiltonian describing non-equilibrium superconductivity, with fermionic creation/annihilation operators represented in terms of Anderson pseudospins <cit.>. A complete solution for the dynamics of the Gaudin magnet thus implies a solution for BCS pairing dynamics (through the composition of a product of commuting unitary time-evolution operators, one for each Gaudin-magnet problem). The Gaudin magnet is an integrable system, with a large number of conserved quantities. This suggests that drastic simplifications are possible. Although these simplifications are difficult to realize through direct analytic study, a well-designed machine-learning procedure may be able to exploit the high degree of symmetry in this problem.
For integrable N-particle systems, the O(e^N) set of linear equations that would normally describe the Schrödinger equation in a complete basis can be recast into a set of O(N) nonlinear Bethe ansatz equations <cit.>. The Gaudin magnet is one such system, and its integrability has been exploited in a number of works to obtain eigenstates and observable dynamics <cit.>. Despite the reduction to a much smaller system of Bethe ansatz equations, solving these nonlinear equations is still nontrivial in general. For example, a hybrid method based on a combination of the algebraic Bethe ansatz and direct Monte Carlo sampling proposed by Faribault and Schuricht is still limited to modest system sizes (N≲ 50) <cit.>.
Knowing that the eigenstates of the Gaudin magnet can be derived from O(N) nonlinear equations, we view a neural-network quantum state ansatz as a promising candidate for obtaining the low-lying eigenstates and dynamics of this system. Neural networks such as the restricted Boltzmann machine (RBM) <cit.> can be used to describe quantum states using a number of network parameters that grows only polynomially in the system size N, but they can nevertheless be used to describe complex quantum states through the nonlinearity induced by tracing over hidden layers. We speculate that neural-network methods may be able to calculate the low-lying eigenstates by learning the symmetries of the integrable Gaudin magnet and other integrable models.
In this work, we calculate the low-lying eigenstates of the Gaudin magnet (central-spin model). To do this, we use a penalty-based variational algorithm with an RBM neural-network ansatz. We also compute the non-perturbative transverse spin susceptibility for the central spin, giving the linear response of the central spin to a time-varying magnetic field, accounting for highly accurate representations of the many-body eigenstates of the central spin interacting with environmental spins. We find these approximate eigenstates (and the associated transverse spin susceptibility) numerically in a regime where conventional perturbation theory fails.
The rest of this paper is organized as follows: In Section <ref>, we introduce the Gaudin-magnet model Hamiltonian and describe a particular physically relevant set of coupling constants. In Section <ref>, we describe the variational ansatz and variational algorithms used to find accurate representations of the ground state and several low-lying excited states. Section <ref> describes the dynamic transverse spin susceptibility for this problem and presents sample calculations for the non-perturbative spectral function and linear response to a low-frequency transverse field. Section <ref> summarizes the potential advantages and shortcomings of this approach, along with conclusions and possible future directions. A technical derivation of the gradient expression used for gradient-based optimization is given in the Appendix.
§ MODEL
The central-spin model (Gaudin magnet) consists of one central spin coupled to N environment spins (Figure <ref>) <cit.>. The central spin is coupled to the k^th environment spin through a Heisenberg interaction with coupling coefficient A_k. The central spin alone is additionally subject to a constant external field B. The Hamiltonian for the Gaudin magnet is thus
H=BS_0^z+∑_k=1^NA_k𝐒_0·𝐒_k,
where we have set ħ=1, S_j^α=1/2σ_j^α (α=x,y,z) is the spin-1/2 operator for spin j (j=0 for the central spin and j=k=1,2,…,N for the environment spins). The operator σ_j^α (α=x,y,z) is the Pauli operator for spin j.
In numerical evaluations, we choose the coupling coefficients A_k to have an exponential distribution,
A_k = A/N_0e^-(k-1)/N_0.
The total number of environment spins in the model is N while N_0 controls the scale where the coupling coefficients decay exponentially. We will typically be interested in systems where N>N_0; in numerical evaluations below, we choose N_0=(N+1)/2. An exponentially decaying coupling strength corresponds, e.g., to the distribution of hyperfine couplings expected for an electron spin (central spin 𝐒_0) interacting with nuclear spins (environment spins 𝐒_k) in a two-dimensional quantum dot with parabolic confinement <cit.>. In this example, N_0 is the number of nuclear spins within a quantum-dot Bohr radius, while N→∞ is the number of nuclear spins in the entire crystal. The low-lying spectrum of this model is shown for N=5 [N_0=(N+1)/2=3] in Figure <ref> for a range of B. In dynamics calculations below, we set B=0.35 A/N_0 (blue dashed line in Figure <ref>), chosen to avoid degeneracies up to the fourth excited state. For B≫ A, we can directly apply perturbation theory to give the eigenstates of H as the simultaneous eigenstates of all S^z_j and in this case the spectrum is trivial.
However, for B≲ A, this simple perturbation theory does not generally apply and more advanced methods are required to understand the detailed spin dynamics arising from this model.
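To make the setup concrete, the following minimal sketch (an illustration, not the authors' code) builds the dense matrix of the central-spin Hamiltonian with the exponential couplings A_k of Equation (<ref>), N_0=(N+1)/2, and the field B=0.35 A/N_0 used in the dynamics calculations, so that the low-lying spectrum of a small system can be checked directly against exact diagonalization.

```python
import numpy as np

def spin_ops(n_spins):
    """Spin-1/2 operators S^x, S^y, S^z acting on each site j of an n_spins chain."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
    sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
    eye = np.eye(2, dtype=complex)
    ops = {"x": [], "y": [], "z": []}
    for j in range(n_spins):
        for label, s in zip("xyz", (sx, sy, sz)):
            m = np.array([[1.0 + 0j]])
            for k in range(n_spins):
                m = np.kron(m, s if k == j else eye)
            ops[label].append(m)
    return ops

def gaudin_hamiltonian(N=5, A=1.0, B=None):
    """H = B S_0^z + sum_k A_k S_0 . S_k with A_k = (A/N_0) exp(-(k-1)/N_0)."""
    N0 = (N + 1) / 2
    A_k = A / N0 * np.exp(-np.arange(N) / N0)   # couplings for environment spins k = 1..N
    if B is None:
        B = 0.35 * A / N0                        # field value used for the dynamics results
    ops = spin_ops(N + 1)                        # site 0 is the central spin
    H = B * ops["z"][0]
    for k in range(1, N + 1):
        for a in "xyz":
            H = H + A_k[k - 1] * ops[a][0] @ ops[a][k]
    return H

H = gaudin_hamiltonian(N=5)
print(np.linalg.eigvalsh(H)[:5])   # low-lying spectrum, for comparison with the variational results
```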
§ VARIATIONAL ALGORITHMS
In this section, we introduce the RBM variational ansatz used to approximate the eigenstates of the Hamiltonian H. We then describe the variational algorithms used to optimize the RBM network parameters and we explain strategies used to select accurate representations of the states.
§.§ Variational Ansatz
We represent a quantum state vector with an RBM ansatz <cit.>. An RBM is a two-layer network model (Figure <ref>), fully defined by a set of parameters 𝒲= {a_j,b_i,w_ji}, where here we choose the parameters to be complex-valued. The RBM representation of a quantum state |ψ⟩ in the computational basis (σ_j^z-eigenbasis) σ = σ_0,... σ_N is
|ψ⟩=∑_σC_σ|σ⟩∝∑_σΨ_𝒲(σ)|σ⟩ =: |𝒲⟩,
where the “∝” symbol indicates that the RBM representation of a quantum state is generally unnormalized and where σ_j^z|σ>=σ_j|σ> with σ_j=± 1. The unnormalized amplitude Ψ_𝒲(σ) of a particular spin configuration is given by
Ψ_𝒲(σ) = ∑_{h_i}e^∑_ja_jσ_j+∑_ib_ih_i+∑_ijw_jiσ_jh_i,
where a_0,...,a_N and b_1,...,b_M are the bias weights for the visible and hidden nodes, respectively. The visible nodes are associated with Ising variables σ_j=± 1 and the hidden nodes are assigned values h_i=± 1. The parameters w_ji assign an independent weight to each link between a visible node j and a hidden node i. For simplicity, we set the number of hidden nodes equal to the number of visible nodes, M = N+1. In general, this need not be the case and increasing the number of hidden nodes for a fixed number of visible nodes can increase the expressivity of the RBM. Marginalizing over (tracing out) the hidden-node variables h_i yields a nonlinear function that only takes the variables σ as its input and this function is fully determined by the network parameters 𝒲.
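As a concrete illustration of the ansatz, tracing out the hidden variables h_i=±1 in Equation (<ref>) gives the closed form Ψ_𝒲(σ)=exp(∑_j a_jσ_j)∏_i 2cosh(b_i+∑_j w_jiσ_j). A minimal sketch of this amplitude is given below; the complex-Gaussian initialization (standard deviation 0.25) mirrors the initial ansatz described later in the text, and the sizes are illustrative.

```python
import numpy as np

def rbm_amplitude(sigma, a, b, w):
    """Unnormalized RBM amplitude after tracing out the hidden units:
    Psi_W(sigma) = exp(sum_j a_j sigma_j) * prod_i 2*cosh(b_i + sum_j w_ji sigma_j)."""
    theta = b + sigma @ w                     # effective field acting on each hidden unit
    return np.exp(np.dot(a, sigma)) * np.prod(2 * np.cosh(theta))

rng = np.random.default_rng(0)
n_vis = n_hid = 6                             # N + 1 visible units and M = N + 1 hidden units
a = 0.25 * (rng.standard_normal(n_vis) + 1j * rng.standard_normal(n_vis))
b = 0.25 * (rng.standard_normal(n_hid) + 1j * rng.standard_normal(n_hid))
w = 0.25 * (rng.standard_normal((n_vis, n_hid)) + 1j * rng.standard_normal((n_vis, n_hid)))

sigma = rng.choice([-1, 1], size=n_vis)       # one configuration in the sigma^z basis
print(rbm_amplitude(sigma, a, b, w))
```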
§.§ Ground State
To find the ground state, we minimize the energy expectation value <cit.>,
E(𝒲) =
⟨𝒲|H|𝒲⟩/⟨𝒲||𝒲⟩
=∑_σ,σ'Ψ_𝒲^*(σ)⟨σ|H|σ'⟩Ψ_𝒲(σ')/∑_σ”|Ψ_𝒲(σ”)|^2
= ∑_σ(∑_σ'⟨σ|H|σ'⟩Ψ_𝒲(σ')/Ψ_𝒲(σ))|Ψ_𝒲(σ)|^2/∑_σ”|Ψ_𝒲(σ”)|^2
≈⟨∑_σ'⟨σ̃|H|σ'⟩Ψ_𝒲(σ')/Ψ_𝒲(σ̃)⟩_σ̃ =: ⟨ E_local⟩_σ̃.
Calculating the exact expectation value would generally involve a sum over 2^(N+1) basis states σ. To avoid an exponential cost in the computation time, we instead evaluate this quantity approximately ⟨ ... ⟩_σ̃ via samples σ̃ obtained from the Metropolis algorithm <cit.>, where the samples are drawn from the probability distribution,
π(σ;𝒲)=|Ψ_𝒲(σ)|^2/∑_σ'|Ψ_𝒲(σ')|^2.
We use the stochastic reconfiguration method <cit.> to minimize the energy subject to variations in the parameters 𝒲. Stochastic reconfiguration exploits information encoded in the quantum geometric tensor (the covariance of the logarithmic derivative of Ψ_𝒲) to precondition the gradient used in stochastic gradient descent (SGD). The energy gradient (force vector) is defined as
F_i = ∂ E/∂𝒲_i^* = ⟨ E_local𝒪^†_i⟩_σ̃ - ⟨ E_local⟩_σ̃⟨𝒪^†_i⟩_σ̃,
where the logarithmic derivative of the RBM state vector is defined as
𝒪_k = 1/Ψ_𝒲(σ)∂_𝒲_kΨ_𝒲(σ).
The partial derivative in Equation (<ref>) is taken with respect to the complex conjugate of the variational parameter, since we are looking for the steepest descent of a real-valued function parameterized by complex-valued variables <cit.>. The quantum geometric tensor is defined as
S_ik = ⟨𝒪^†_i𝒪_k⟩_σ̃ - ⟨𝒪^†_i⟩_σ̃⟨𝒪_k⟩_σ̃.
At the beginning of an optimization run, a random initial ansatz |𝒲(0)⟩ is generated by setting the real and imaginary parts of all variational parameters independently to values obtained from a symmetric Gaussian distribution with standard deviation σ_𝒲=0.25. At each iteration l=1,…,L, the parameters are updated according to
𝒲(l) = 𝒲(l-1)-γ_L S^-1_λ(l-1)F(l-1),
where γ_L is the learning rate. Here, S_λ=S+λ I includes a diagonal shift λ, a regularization that helps with the matrix inversion <cit.>. The specific choice of λ is given in Sec. <ref>, below, along with an explanation of other hyperparameters. See Figure <ref> for a typical successful search for the ground state. The figure shows the estimated energy after each iteration with L=8000. This constitutes a single optimization run. At the end of each run, an accurate estimate is obtained for the energy of the final state (for l=L) and this is stored. In practice, several runs p are performed (p=1,2,…,P) and of those P runs, the run corresponding to the lowest final energy is selected. See Algorithm <ref>, including the procedure for finding excited states, which we now describe in detail.
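A schematic single update of this procedure is sketched below. The helpers metropolis_samples, local_energy, and log_derivatives are hypothetical placeholders (their implementations depend on the Hamiltonian and on the RBM amplitude above); the sketch only illustrates how the sampled estimates of F_i, S_ik, and the regularized solve fit together.

```python
import numpy as np

def sr_step(params, metropolis_samples, local_energy, log_derivatives,
            lr=0.02, lam=0.01, n_samples=5000):
    """One stochastic-reconfiguration update  W(l) = W(l-1) - lr * S_lambda^{-1} F.
    Assumed helpers: metropolis_samples draws configurations from |Psi_W|^2,
    local_energy returns E_local(sigma), and log_derivatives returns the vector
    O_k(sigma) = (1/Psi_W) dPsi_W/dW_k."""
    samples = metropolis_samples(params, n_samples)
    E_loc = np.array([local_energy(params, s) for s in samples])
    O = np.array([log_derivatives(params, s) for s in samples])   # shape (n_samples, n_params)

    O_mean = O.mean(axis=0)
    # force vector F_i = <E_local O_i^dagger> - <E_local><O_i^dagger>
    F = (np.conj(O) * E_loc[:, None]).mean(axis=0) - E_loc.mean() * np.conj(O_mean)
    # quantum geometric tensor S_ik = <O_i^dagger O_k> - <O_i^dagger><O_k>
    S = np.conj(O).T @ O / len(samples) - np.outer(np.conj(O_mean), O_mean)
    S_reg = S + lam * np.eye(S.shape[0])       # diagonal-shift regularization

    return params - lr * np.linalg.solve(S_reg, F)
```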
§.§ Excited States
Most of the work on neural-network quantum states has focused on the ground state, but a few works have also found approximate excited states by projecting out the previously determined eigenstates <cit.>. In general, exactly projecting out the ground state (or any previously determined lower-energy states <cit.>) may introduce an exponential cost, associated with the cost of exactly calculating an overlap between the current trial state and the previously determined states. Here, as in the approach of Choo et al. <cit.>, we instead estimate the relevant overlaps with approximate Monte-Carlo sampling. However, rather than directly projecting out the previously calculated eigenstates, we adopt a penalty method <cit.> to calculate the excited states. In this method, the n^th excited state is calculated as the ground state of the modified Hamiltonian,
H_n = H + ∑_j=0^n-1β_j|𝒲^j⟩⟨𝒲^j|/⟨𝒲^j||𝒲^j⟩.
Here, the terms β_j>0 are penalty coefficients and |𝒲^j⟩ is the approximate j^th excited state, defined in terms of variational parameters 𝒲^j. The ground state is indexed by j=0. Explicitly, we replace the energy expectation value with the new loss function,
Ẽ_n(𝒲) =⟨𝒲|H|𝒲⟩/⟨𝒲||𝒲⟩+∑_j=0^n-1β_j⟨𝒲||𝒲^j⟩⟨𝒲^j||𝒲⟩/⟨𝒲||𝒲⟩⟨𝒲^j||𝒲^j⟩.
After optimization, Ẽ_n(𝒲) is the approximate energy of the n^th excited state of the original Hamiltonian H. The individual penalty coefficients β_j must be sufficiently large to ensure that the previously calculated eigenstates (j=0,1,…,n-1) are all raised in energy (in H_n) above E_n. However, very large values for β_j will be detrimental to optimization performance, a well-known phenomenon in penalty-based constrained optimization <cit.>. Here, our goal is to find a subset of energy eigenstates describing the low-frequency response of the Gaudin magnet to external perturbations. We therefore calculate eigenvalues only up to a maximum cutoff ω_max above the ground-state energy. For simplicity, in numerical evaluations we have therefore set all penalty coefficients to a j-independent value,
β_j = 2 ω_max.
For the specific spin-dynamics results presented below for the Gaudin magnet, we choose ω_max= 0.15 A/N_0, which is sufficient to characterize the first five levels with the chosen parameters: N=5, N_0=3, B=0.35A/N_0, and with the exponential distribution of coupling constants A_k given in Equation (<ref>).
The minimization strategy for excited states differs from that of the ground state, since the loss function is modified. While the definition of the quantum geometric tensor S_ik(p) remains unchanged, the gradient vector must be modified to (see the Appendix for details):
F̃_i^n = ∂Ẽ_n/∂𝒲_i^* = ⟨ E_local𝒪^†_i⟩_σ̃ - ⟨ E_local⟩_σ̃⟨𝒪^†_i⟩_σ̃
+ ∑_j=0^n-1β_j( ⟨(Ψ_𝒲^j/Ψ_𝒲)𝒪^†_i⟩_σ̃ - ⟨Ψ_𝒲^j/Ψ_𝒲⟩_σ̃⟨𝒪^†_i⟩_σ̃ )⟨Ψ_𝒲/Ψ_𝒲^j⟩_σ̃_j,
where the modified energy gradient F̃_i^n is evaluated approximately through Metropolis sampling. The samples {σ̃} are collected based on the distribution π(σ; 𝒲) [Equation (<ref>)], whereas {σ̃_j} are sampled accounting for the variational parameters 𝒲^j, according to π(σ; 𝒲^j). See Figure <ref> for a typical successful run resulting in the first excited state. In Figure <ref> we show a histogram of outcomes for the first five energy levels E_0,E_1,…,E_4 after 50 optimization runs.
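For reference, the penalty contribution to this loss can itself be estimated from two sample sets, one drawn from |Ψ_𝒲|^2 and one from |Ψ_𝒲^j|^2, since ⟨𝒲||𝒲^j⟩⟨𝒲^j||𝒲⟩/(⟨𝒲||𝒲⟩⟨𝒲^j||𝒲^j⟩) = ⟨Ψ_𝒲^j/Ψ_𝒲⟩_σ̃ ⟨Ψ_𝒲/Ψ_𝒲^j⟩_σ̃_j. A minimal sketch, with the amplitude function and the two sample sets as assumed inputs:

```python
import numpy as np

def penalty_term(params, params_j, beta_j, samples_W, samples_Wj, amplitude):
    """Monte-Carlo estimate of beta_j * <W|W^j><W^j|W> / (<W|W><W^j|W^j>).
    samples_W are drawn from |Psi_W|^2 and samples_Wj from |Psi_{W^j}|^2;
    amplitude(params, sigma) is the (hypothetical) RBM amplitude function."""
    # <Psi_{W^j}/Psi_W>_W   ->  <W|W^j>/<W|W>
    ratio_fwd = np.mean([amplitude(params_j, s) / amplitude(params, s) for s in samples_W])
    # <Psi_W/Psi_{W^j}>_{W^j}  ->  <W^j|W>/<W^j|W^j>
    ratio_bwd = np.mean([amplitude(params, s) / amplitude(params_j, s) for s in samples_Wj])
    return beta_j * ratio_fwd * ratio_bwd
```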
§.§ Hyperparameters and Postselection
Here we summarize the chosen model hyperparameters. The learning rate was set to γ_L=0.02 for all runs. We perform Metropolis sampling with N_σ̃=5000 samples to estimate the energy loss function [E(𝒲) for the ground state, Ẽ_n(𝒲) for the n^th excited state], the gradient vector F_i, and the quantum geometric tensor S_ik. As described following Equation (<ref>), a diagonal shift λ = 0.01 is added to the quantum geometric tensor before inversion. The total number of iterations is fixed to L=8000.
To further decrease the error of the final approximate ground state, we use the following postselection strategy: 50 independent runs (each consisting of 8000 iterations) were performed with different random seeds, storing the sampled energy and the variational parameters from the last iteration of each run. We then used 10^4 Metropolis samples (twice as many samples as in the optimization runs) to obtain a more accurate estimate of the energy in each of the 50 runs. We then select the state with the lowest energy (out of the 50 states found from the 50 runs) to be the best approximate ground state. The postselection process for each excited state was the same, after replacing E(𝒲) (the estimated energy alone) with Ẽ_n(𝒲) (estimated energy plus penalty terms).
See Algorithm <ref> for a summary of the n^th excited state calculation with the number of independent runs P=50 and the number of iterations L=8000 for both the ground state and the excited states. With the chosen hyperparameters, we found good approximations for the lowest five eigenstates. By comparing to exact diagonalization, we find the error in each of the energy eigenvalues is less than 10^-3 A/N_0. For all five eigenstates, the state infidelity was bounded by 1-|⟨ψ_exact|ψ⟩|^2<2× 10^-3.
§.§ Time Scaling
The average runtime (averaged over 50 runs) is shown in Figure <ref> for a determination of the ground state of the Gaudin magnet for N=1 to N=11. The runtime for the RBM-based variational algorithm (green triangles) is compared to a brute-force exact diagonalization (blue triangles). Exact diagonalization was performed using the QR algorithm for Hermitian matrices, implemented in NumPy <cit.>. While the average runtime of brute-force exact diagonalization scales exponentially, the RBM-based variational algorithm with O(N^2) variational parameters (based on the RBM architecture shown in Figure <ref>) has the potential to scale polynomially. Figure <ref> suggests that the RBM-based variational algorithm will have a shorter runtime than brute-force exact diagonalization as the system size grows beyond N≳ 12. For this comparison, we have fixed the number of samples and the number of iterations. We cannot rule out the possibility that, as the system size grows, the number of samples and iterations required to reach a target error may increase exponentially.
As we show in Section <ref> below, the ground state of this problem can actually be found through a refined exact diagonalization procedure in a reduced subspace with cost O(N), since the lowest-energy state always lies within a manifold of at most one spin flipped. The comparison here to brute-force exact diagonalization of the entire Hamiltonian is only illustrative and likely not of practical relevance for this particular problem (finding the ground state). However, finding a general excited state (or collection of excited states to describe the system at finite temperature) will typically have exponential cost for direct numerical diagonalization.
§ DYNAMICAL RESPONSE
Given an accurate representation for both the low-lying spectrum of a many-body quantum system, as well as the low-lying eigenstates, dynamical response functions can be found. In the presence of a time-varying transverse field 𝐁_⊥(t)=B_x(t)x̂+B_y(t)ŷ acting on the central spin of the Gaudin magnet, the total Hamiltonian becomes
H_tot(t)=H+V(t),
with time-dependent perturbation
V(t) = B_x(t)S_0^x+B_y(t)S_0^y
= 1/2(B_-(t)S_0^++B_+(t)S_0^-).
Here, we have introduced
S_0^± = S_0^x± i S_0^y,
B_±(t) = B_x(t)± i B_y(t).
We assume an initial state ρ(t_0) that is stationary with respect to the Gaudin-magnet Hamiltonian H:
[H,ρ(t_0)]=0.
This is true for a thermal state or any other statistical mixture of H eigenstates. If the initial state is a mixture of non-degenerate Gaudin-magnet eigenstates, the linear response of <S_0^-(t)>=<S_0^x(t)>-i<S_0^y(t)> to 𝐁_⊥(t) is then
<S_0^-(t)>=-i/2∫_t_0^t dt' <[Ŝ_0^-(t-t'),S_0^+]>B_-(t').
Here, S_0^-(t) evolves under the action of the total Hamiltonian H_tot(t), while Ŝ_0^-(t)=e^iHtS_0^- e^-iHt evolves in the interaction picture under H. The average is understood to be taken with respect to the initial state, <⋯>=Tr{⋯ρ(t_0)}. We have also used the following identities, valid for a mixture of non-degenerate H eigenstates:
<S_0^-(t_0)>=0,
and
<Ŝ^+_0(t)S^+_0>=<Ŝ^-_0(t)S^-_0>=0.
These identities follow directly from the fact that the Gaudin-magnet Hamiltonian commutes with the total z-component of spin:
[H,J^z]=0; J^z=∑_k=0^N S_k^z,
while the operators S_0^± couple sectors of different J^z.
Rewriting Equation (<ref>) in terms of Fourier transform variables and taking t_0→-∞ gives
<S_0^-(ω)>=1/2χ^-+(ω)B_-(ω),
where the transverse spin susceptibility is
χ^-+(ω) = -i∫_-∞^∞ dt e^iω t-0^+ |t|<[Ŝ_0^-(t),S_0^+]>θ(t).
Here, θ(t) is the Heaviside step function and 0^+ is a positive infinitesimal. In physical applications, the Gaudin-magnet Hamiltonian will only be an approximate description. Corrections to this description (coupling to a continuum) will lead to a finite decay rate. We account for this effect with the phenomenological replacement
0^+→γ.
We now specialize to the case where the Gaudin magnet is prepared in a non-degenerate ground state, ρ(t_0)=|0><0|, where H|0⟩=E_0|0⟩. Expanding in a complete set of energy eigenstates then gives
χ^-+(ω) = ∑_j(|<j|S_0^+|0>|^2/ω-Δ_j+iγ-|<j|S_0^-|0>|^2/ω+Δ_j+iγ),
where the j^th excitation energy is
Δ_j=E_j-E_0.
The excitations that can be generated through central-spin spin flips will result in peaks of the central-spin spectral function,
𝒜_0(ω) = -2Imχ^-+(ω)
= 2π∑_j,s=±s|<j|S_0^s|0>|^2δ_γ(ω- sΔ_j),
where we have introduced a lineshape function
δ_γ(ω) = γ/π/ω^2+γ^2.
For γ→ 0^+, the lineshape function approaches a Dirac delta function, δ_γ(ω)→δ(ω). Positive-frequency (ω>0) contributions to 𝒜_0(ω) arise from excitations generated through the action of S_0^+ on the ground state, while negative-frequency (ω<0) contributions arise from excitations produced through S_0^-.
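Given the excitation energies Δ_j and the spin-flip weights |⟨j|S_0^±|0⟩|^2, the spectral function is therefore a weighted sum of Lorentzians. A small sketch is given below; the numerical inputs are placeholders, with the actual values supplied by the variational eigenstates.

```python
import numpy as np

def lineshape(omega, gamma):
    """Lorentzian delta_gamma(omega) = (gamma/pi) / (omega^2 + gamma^2)."""
    return (gamma / np.pi) / (omega**2 + gamma**2)

def spectral_function(omega, deltas, w_plus, w_minus, gamma=0.01):
    """A_0(omega) = 2*pi*sum_j [ w+_j delta_gamma(omega - Delta_j) - w-_j delta_gamma(omega + Delta_j) ],
    with w+_j = |<j|S_0^+|0>|^2 and w-_j = |<j|S_0^-|0>|^2."""
    A = np.zeros_like(omega)
    for d, wp, wm in zip(deltas, w_plus, w_minus):
        A += 2 * np.pi * (wp * lineshape(omega - d, gamma) - wm * lineshape(omega + d, gamma))
    return A

# placeholder inputs in units of A/N_0; real values come from the RBM eigenstates
deltas  = np.array([0.02, 0.05, 0.09, 0.13])
w_plus  = np.array([0.00, 0.01, 0.00, 0.02])
w_minus = np.array([0.03, 0.02, 0.01, 0.00])
omega = np.linspace(-0.15, 0.15, 1001)
A0 = spectral_function(omega, deltas, w_plus, w_minus)
```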
Without loss of generality, we take B≥ 0 to define the +z direction of spin. A finite value, B ≠ 0, favors a ground state with the central spin pointing down, |⇓⟩. When the couplings are negative, A_k<0, the environment spins will then favor an orientation along the central spin in the ground state. This fully polarized state is an exact eigenstate of H, while for A_k>0, the ground state will have mixed character:
|0⟩ = |⇓↓↓⋯↓⟩ (A_k<0),
|0⟩ = β_0|⇓↑↑⋯⟩+∑_k=1^Nβ_k|ψ_k⟩ (A_k>0),
where
|ψ_k⟩≡|⇑↑↑⋯↓_k⋯↑⟩.
For A_k<0, we have
S_0^-|0⟩ = 0 (A_k<0),
S_0^+|0⟩ = |⇑↓↓⋯↓⟩ (A_k<0).
This leads to peaks in the spectral function only at positive frequencies ω>0 for A_k<0, with amplitudes determined by the overlaps <j|⇑↓↓⋯↓>. In general, we will need detailed knowledge of the many-body eigenstates |j⟩ as well as the excitation energies Δ_j to accurately determine the spectral function.
For positive coupling, A_k>0, we have instead
S_0^-|0⟩ = ∑_k=1^Nβ_k|⇓↑↑⋯↓_k⋯↑⟩ (A_k>0),
S_0^+|0⟩ = β_0|⇑↑↑⋯↑⟩ (A_k>0).
In contrast with the case of A_k<0, the case of A_k>0 will lead to peaks at both negative and positive frequencies. Since the fully polarized state in Equation (<ref>) is an exact eigenstate of H with eigenvalue (B+∑_k A_k/2)/2, there will be a single peak at ω≃ B+∑_k A_k/2>0 controlled by the amplitude β_0, while there will generally be a family of peaks at low negative frequencies ω≃ -Δ_j, with amplitudes controlled by ∑_kβ_k<j|⇓↑↑⋯↓_k⋯↑> (see Figure <ref> for an example). In what follows, we focus on this case of A_k>0.
Based on the discussion here, we see that the zero-temperature central-spin spectral function can, in fact, be calculated numerically exactly with only a polynomial cost. For A_k<0, the problem amounts to calculating the excitation energies Δ_j and overlaps <j|⇑↓↓⋯↓> after finding the eigenstates |j⟩ through exact diagonalization in the O(N)-dimensional subspace of one spin flipped. For the opposite case of A_k>0, in addition to calculating the ground state in the subspace of one spin flipped, the associated excitation energies Δ_j and overlaps <j|⇓↑↑⋯↓_k⋯↑> can then be found through exact diagonalization in the O(N^2)-dimensional subspace of two spins flipped. In this (zero-temperature) limit, there is likely no practical need for a machine-learning (RBM) approach to dynamics. However, for a finite-temperature thermal state or for initial conditions corresponding to a non-equilibrium configuration described by many spin flips, an efficient alternative to exact diagonalization becomes important. With n flipped spins, the number of states in the subspace (N!/[n!(N-n)!] ∼ (N/n)^n for N≫ n≫ 1) grows exponentially with n. While exact diagonalization necessarily becomes computationally expensive in this limit, the machine-learning (RBM) approach generalizes directly to different initial conditions and has the potential to scale favorably.
A possible alternative to the RBM method presented here is perturbation theory, but as we show in the following section, perturbation theory fails for this problem in the limit of small-to-moderate B (B≲ A).
§.§ Perturbation theory
For a sufficiently large B, it is possible to find perturbative approximations for the excitation energies and amplitudes, by writing H=H_0+V_ff with
H_0 = (B+∑_k A_k S_k^z)S_0^z,
V_ff = 1/2∑_kA_k(S_0^+S_k^-+S_0^-S_k^+).
An expansion in the flip-flop terms V_ff then gives:
|β_0|^2 = 1-∑_k=1^N|β_k|^2
≃ 1-∑_k A_k^2/(B+A/2)^2
≃ 1-A^2/[2N_0(B+A/2)^2].
In the second line above, we have used ∑_k A_k≃ A for large N and in the third line we have used the specific coupling coefficients given in Equation (<ref>) to evaluate the sum, ∑_k A_k^2≃∫_1^∞ dk A_k^2=A^2/2N_0. This contribution to the spectral function will dominate (|β_0|^2≃ 1) and perturbation theory is valid for calculating this quantity for any non-negative B whenever N_0≫ 1. The parameter |β_0|^2 is the same quantity that gives rise to the long-time saturation in spin coherence describing non-Markovian partial coherence decay for a central-spin system when the interaction is introduced suddenly <cit.>. If only these high-frequency features were of interest, there would typically be no need for a non-perturbative approach (although higher-order corrections in V_ff will broaden this sharp feature <cit.>). The low-frequency contributions to the susceptibility driving the long-time dynamics have a much more stringent condition for perturbation theory to be valid. In particular, due to the possibility for constructive interference of leading-order contributions in perturbation theory, an accurate description of these terms generally requires <cit.>:
|∑_kβ_k|≃ (A/2)/(B+A/2) ≪ 1.
This condition is only satisfied for B≫ A/2, independent of N_0. Outside of this regime, this naïve perturbation theory will fail to accurately describe dynamics. Our focus is now on describing the low-frequency contributions with a non-perturbative method.
We emphasize that the transverse spin susceptibility χ^-+(ω) gives only the linear response of the central spin to a weak driving field (leading order in B_⊥), but an accurate description of the resulting spin dynamics for B≲ A requires a nonperturbative calculation to all orders in V_ff, even in this limit of a weak driving field.
§.§ Nonperturbative many-body spin susceptibility
In this section, we directly apply the RBM ansatz to the problem of finding the zero-temperature dynamical response. While brute-force exact diagonalization is likely to fail for a more general (finite-temperature or nonequilibrium) initial condition due to the exponential growth of the relevant Hilbert space, the machine-learning procedure presented here is directly generalizable to nontrivial initial conditions and has the potential to remain efficient in this regime. In this paper, we simply present a first example involving a small number (N=5) of environment spins and we assume a zero-temperature ground state. We leave it to future work to find scaling of the RBM procedure for more general initial conditions.
To find an accurate approximation for the nonperturbative spin susceptibility, we construct RBM representations of the lowest-energy j_max+1 eigenstates of H:
|j>≃|𝒲^j>/√(<𝒲^j|𝒲^j>), j=0,1,…,j_max,
where j_max is defined (for a fixed maximum frequency ω_max) by
Δ_j_max≤ω_max<Δ_j_max+1.
The relevant matrix elements required to find the susceptibility are then approximated through optimized RBM representations:
|<j|S_0^+|0>|^2≃<𝒲^0|S_0^-|𝒲^j><𝒲^j|S_0^+|𝒲^0>/<𝒲^0|𝒲^0><𝒲^j|𝒲^j>,
where we rewrite each of the quantum averages in terms of a weighted sum:
<𝒲^0|S_0^-|𝒲^j>/<𝒲^0|𝒲^0> = <∑_σ'<σ|S_0^-|σ'>Ψ_𝒲^j(σ')/Ψ_𝒲^0(σ)>_0,
<𝒲^j|S_0^+|𝒲^0>/<𝒲^j|𝒲^j> = <∑_σ'<σ|S_0^+|σ'>Ψ_𝒲^0(σ')/Ψ_𝒲^j(σ)>_j.
Here, we have introduced the notation
<f(σ)>_j=∑_σπ(σ,𝒲^j)f(σ)≃1/N_σ̃^j∑_σ̃^j f(σ̃^j),
where π(σ,𝒲^j) is given by Equation (<ref>). On the right-hand side of Equation (<ref>), we have approximated the average by a sample average over N_σ̃^j samples σ̃^j that are found in practice through Metropolis sampling.
Once we have the subset of approximate eigenstates in the form of an RBM, we use Metropolis sampling to estimate the matrix elements given in Equation (<ref>) and (<ref>). We used N_σ̃=5×10^6 samples for the estimate, resulting in a relative statistical error < 10^-2 for the relevant matrix elements. These approximate matrix elements were then combined with the estimated excitation energies Δ_j to construct the central-spin spectral function, Equation (<ref>). See Figure <ref> for a comparison of the approximate central-spin spectral function with the result from exact diagonalization. The exact solution accounts for all eigenstates, while the approximate spectral function was calculated using only the lowest five approximate eigenstates.
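Because S_0^± connects each basis configuration to at most one other configuration, the weighted sums above reduce to simple amplitude ratios evaluated on the samples. A sketch of this estimator follows; the amplitude function and the two sample sets are assumed inputs, index 0 labels the central spin, and any small imaginary part of the product is sampling noise.

```python
import numpy as np

def flip_central(sigma):
    """Return a copy of the configuration with the central spin (index 0) flipped."""
    s = np.array(sigma, copy=True)
    s[0] = -s[0]
    return s

def matrix_element_sq(params_0, params_j, samples_0, samples_j, amplitude):
    """Monte-Carlo estimate of |<j|S_0^+|0>|^2 from RBM amplitude ratios.
    samples_0 are drawn from |Psi_{W^0}|^2 and samples_j from |Psi_{W^j}|^2."""
    # <W^0|S_0^-|W^j>/<W^0|W^0>: nonzero only when the sampled central spin is down,
    # in which case S_0^- connects sigma to the configuration with the central spin up.
    left = np.mean([amplitude(params_j, flip_central(s)) / amplitude(params_0, s)
                    if s[0] == -1 else 0.0 for s in samples_0])
    # <W^j|S_0^+|W^0>/<W^j|W^j>: nonzero only when the sampled central spin is up,
    # in which case S_0^+ connects sigma to the configuration with the central spin down.
    right = np.mean([amplitude(params_0, flip_central(s)) / amplitude(params_j, s)
                     if s[0] == 1 else 0.0 for s in samples_j])
    return (left * right).real
```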
§.§ Linear Response
As an illustrative application of the linear-response calculation above, here we consider the real-time dynamics of the central spin in the presence of a slowly-varying transverse field. See Algorithm <ref> for the steps taken to perform this computation. The spin susceptibility defined in Equation (<ref>) directly gives this result. In particular, for a total Hamiltonian H_total = H + V(t) = H + B_y(t)· S_0^y, and for a t=0 initial ground state of H, the linear response is
⟨ S_0^x(t) ⟩ = ∫_0^t dt' χ^xy(t-t')B_y(t'),
where
χ^xy(t) = -i<[Ŝ^x(t),S^y]>
= -1/2Re<[Ŝ^-(t),S^+]>.
Finally, we choose B_y(t) to have nonvanishing spectral weight at two of the excitation frequencies (inset of Figure <ref>a)). For an illustrative example, we make the specific choice
B_y(t) = B_1 g(t-t̅,τ_1)cos(Δ_3 t)+B_2 g(t-t̅,τ_2),
where g(x,σ_x) is a normalized Gaussian envelope:
g(x,σ_x)=1/√(2π)σ_xe^-x^2/2σ_x^2.
Here, B_1 = B_2 = 5 A/N_0, t̅ = 200 N_0/A, τ_1 = 100 N_0/A and τ_2 = 50 N_0/A. The functional form of B_y(t) is shown in Figure <ref>a) and the resulting linear response <S^x(t)> is shown in Figure <ref>b). The neural-network (RBM) estimate (solid red curve) accurately reproduces the exact dynamics (black dashed curve) in this linear response regime. The exact result (black dashed line in Figure <ref>b)) was obtained by integrating the exact Schrödinger equation directly via the Python package Qutip <cit.>, without any linear-response assumption.
The amplitude of the field B_y(t) is deep in the linear-response regime, so we expect that deviations between the exact and approximate curves shown in Figure <ref>b) are primarily due to errors in the approximate variational (RBM) eigenstates and in the calculations of matrix elements through Metropolis sampling. There may also be some small correction due to the finite-frequency cutoff ω_max=0.15(A/N_0) taken in the approximate RBM calculation.
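As an illustration of this calculation, for a ground-state initial condition the kernel follows from the same excitation energies and spin-flip weights as the spectral function, ⟨[Ŝ_0^-(t),S_0^+]⟩ = ∑_j ( |⟨j|S_0^+|0⟩|^2 e^(-iΔ_j t) - |⟨j|S_0^-|0⟩|^2 e^(iΔ_j t) ), so the convolution can be evaluated on a time grid. A minimal sketch with placeholder weights is given below; the drive parameters follow the text, the value used for Δ_3 is a placeholder, and γ can be set to a small nonzero value to mimic the phenomenological decay.

```python
import numpy as np

def chi_xy(t, deltas, w_plus, w_minus, gamma=0.0):
    """chi^xy(t) = -(1/2) Re <[S_0^-(t), S_0^+]> theta(t), expanded in eigenstates."""
    t = np.asarray(t, dtype=float)
    chi = np.zeros_like(t)
    for d, wp, wm in zip(deltas, w_plus, w_minus):
        chi += -0.5 * (wp - wm) * np.cos(d * t)
    return chi * np.exp(-gamma * np.abs(t)) * (t >= 0)

def drive(t, B1=5.0, B2=5.0, tbar=200.0, tau1=100.0, tau2=50.0, delta3=0.09):
    """B_y(t): Gaussian-enveloped oscillation at Delta_3 plus a slow Gaussian pulse."""
    g = lambda x, s: np.exp(-x**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    return B1 * g(t - tbar, tau1) * np.cos(delta3 * t) + B2 * g(t - tbar, tau2)

# placeholder excitation energies and weights (units of A/N_0); times in units of N_0/A
deltas, w_plus, w_minus = [0.02, 0.05, 0.09, 0.13], [0.0, 0.01, 0.0, 0.02], [0.03, 0.02, 0.01, 0.0]
t = np.linspace(0.0, 400.0, 4001)
dt = t[1] - t[0]
By = drive(t)
# <S_0^x(t)> = int_0^t dt' chi^xy(t - t') B_y(t'), evaluated with the trapezoidal rule
Sx = np.array([np.trapz(chi_xy(ti - t[:i + 1], deltas, w_plus, w_minus) * By[:i + 1], dx=dt)
               for i, ti in enumerate(t)])
```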
§ CONCLUSIONS
In this paper, we have applied a neural-network method to calculate approximate low-lying eigenstates and linear response dynamics of an integrable many-body Hamiltonian: the Gaudin magnet. Having an efficient solution for this many-body system could be an important tool in characterizing decoherence sources for qubits that interact with uncontrolled two-level systems. These systems include qubits based on spin, charge, flux, etc., interacting with nuclear spins, coherent charge traps, or paramagnetic impurities. The very same model (the Gaudin magnet) can be used to study the dynamics of nonequilibrium superconductivity since a linear combination of commuting Gaudin-magnet Hamiltonians can be used to represent the Richardson model (BCS Hamiltonian) that drives superconducting pairing dynamics.
This work was motivated in large part by the fact that the Gaudin magnet is a quantum integrable model, admitting a large number of conserved quantities. For such integrable models consisting of N particles, it is well known how to recast the system of O(e^N) linear equations arising from the Schrödinger equation in terms of O(N) nonlinear equations, via the algebraic Bethe ansatz. It is not generally known how to efficiently solve these nonlinear equations. This understanding has led us to seek a solution to this dynamics problem via a variational neural-network ansatz (the restricted Boltzmann machine). In contrast with brute-force exact diagonalization in the entire Hilbert space, which is guaranteed to have exponential cost, the neural-network approach taken here relies on a limited (polynomial) number of network parameters and approximate sampling from a subset of states. The neural-network approach therefore has the potential to be efficient. The tradeoff is that this variational neural-network approach has no guarantee of convergence or accuracy, but the first results presented here are promising in this respect.
We have presented a systematic procedure that could be used to explore the dynamics of the Gaudin magnet or other interesting models. We have further given some illustrative examples based on very small systems (one central spin and N=5 environment spins). In this analysis, we have not provided any guarantees of convergence, accuracy, or efficiency (although we have provided direct comparisons with exact results suggesting each of these can be achieved for small systems). In future work it will be important to explore these questions systematically by studying systems of progressively larger size and under conditions that push past the boundaries of what can be done via exact diagonalization. Further improvements can be made by adopting contemporary neural-network quantum state architectures such as autoregressive neural-network models <cit.>. In this architecture, the neural-network quantum state is normalized by construction and sampling can be done more efficiently.
It remains an interesting open question whether a variational machine-learning approach, as explored here, can easily “learn” the symmetries of a many-body problem and exploit these to find an efficient solution.
§ DERIVATION OF THE GRADIENT
In this Appendix, we provide a detailed derivation of the gradient expression from Equation (<ref>). The variational gradient used to find excited states is derived by differentiating the Monte-Carlo estimated energy including a penalty term (Equation (<ref>)). The derivative is evaluated with respect to the model parameter 𝒲. The first contribution in the gradient comes from the estimated energy alone and has been widely used in existing works <cit.>. The derivation here focuses on the second contribution to the gradient from the penalty term.
For simplicity, we assume that only one penalty term is present. The gradient becomes
∂/∂𝒲^*(β_j |z|^2/⟨𝒲||𝒲⟩⟨𝒲^j||𝒲^j⟩) =
∂/∂𝒲^* (β_j |z|^2 )/⟨𝒲||𝒲⟩⟨𝒲^j||𝒲^j⟩ + ∂/∂𝒲^*(1/⟨𝒲||𝒲⟩) β_j |z|^2/⟨𝒲^j||𝒲^j⟩.
Here, we have introduced the parameter z = ⟨𝒲||𝒲^j⟩ and the second term above arises from the normalization factors. The numerator of the first term can be expanded as
∂/∂𝒲^*(β_j |z|^2 ) = β_j ( ∂ z /∂𝒲^* z^* + ∂ z^* /∂𝒲^* z ) = β_j ∂ z/∂𝒲^* z^*
= β_j ( ∑_σΨ_𝒲^j(σ) ∂Ψ^*_𝒲 (σ)/∂𝒲^*) ( ∑_σΨ_𝒲^j^*(σ) Ψ_𝒲(σ) ),
where ∂ z^* /∂𝒲^* = 0 since the holomorphic function Ψ_𝒲 (σ) has the property ∂Ψ_𝒲 (σ) /∂𝒲^* = 0 <cit.>. Incorporating the normalization terms in the denominator, the first term becomes
β_j ⟨Ψ_𝒲^j/Ψ_𝒲𝒪^†_i ⟩_σ̃⟨Ψ_𝒲/Ψ_𝒲^j⟩_σ̃_j.
The second term in Equation (<ref>) can be expanded as
-β_j |z|^2/⟨𝒲^j||𝒲^j⟩⟨𝒲||𝒲⟩^2 ( ∑_σΨ_𝒲(σ) ∂Ψ^*_𝒲 (σ)/∂𝒲^*)
= - β_j ⟨Ψ_𝒲^j/Ψ_𝒲⟩_σ̃⟨𝒪^†_i ⟩_σ̃⟨Ψ_𝒲/Ψ_𝒲^j⟩_σ̃_j.
Summing up the two terms in Equation (<ref>) and (<ref>), and then generalizing to multiple penalty terms, we recover the gradient expression in Equation (<ref>). For the normalized neural-network quantum states in, e.g., Ref. <cit.>, the gradient expression would only contain the first term in Equation (<ref>).
We acknowledge funding from the Natural Sciences and Engineering Research Council (NSERC) and from the Fonds de Recherche–Nature et Technologies (FRQ–NT).
|
http://arxiv.org/abs/2307.01591v3
|
20230704092914
|
OrthoBoXY: A Simple Way to Compute True Self-Diffusion Coefficients from MD Simulations with Periodic Boundary Conditions Without Prior Knowledge of the Viscosity
|
[
"Johanna Busch",
"Dietmar Paschek"
] |
cond-mat.soft
|
[
"cond-mat.soft",
"cond-mat.stat-mech"
] |
Institut für Chemie, Abteilung Physikalische und Theoretische Chemie,
Universität Rostock, Albert-Einstein-Str. 27, D-18059 Rostock, Germany
[email protected]
Institut für Chemie, Abteilung Physikalische und Theoretische Chemie,
Universität Rostock, Albert-Einstein-Str. 27, D-18059 Rostock, Germany
Recently, an analytical expression for the system size dependence and direction-dependence of self-diffusion coefficients for neat liquids due to hydrodynamic interactions
has been derived for molecular dynamics (MD) simulations using orthorhombic unit cells.
Based on this description, we show that for systems with a “magic” box length ratio of L_z/L_x=L_z/L_y=2.7933596497 the computed self-diffusion coefficients D_x and D_y in x- and y-direction become system-size independent and represent the true self-diffusion coefficient D_0=(D_x+D_y)/2. Moreover, by using this particular box geometry, the viscosity can be determined with a reasonable degree of accuracy from the difference of components of the diffusion coefficients in x-,y- and z-direction using the simple expression η=k_BT· 8.1711245653/[3π L_z(D_x+D_y-2D_z)], where k_B denotes Boltzmann's constant, and T represents the temperature. MD simulations of TIP4P/2005 water for various system-sizes using both orthorhombic and cubic box geometries are used to test the approach.
OrthoBoXY: A Simple Way to Compute True Self-Diffusion Coefficients from
MD Simulations with Periodic Boundary Conditions Without Prior Knowledge of
the Viscosity
Dietmar Paschek
August 1, 2023
===================================================================================================================================================================
§ INTRODUCTION
Self-diffusion coefficients obtained
from MD simulations with periodic boundary conditions (PBCs)
show a systematic system size dependence.<cit.>
This effect is
caused by the altered
hydrodynamic interactions
between particles in a periodic system.<cit.>
It has been demonstrated for
simulations of
polymers in solution <cit.>,
TIP3P model water molecules,
and Lennard-Jones particles <cit.>,
as well as CO_2, n-alkanes, and poly(ethylene glycol) dimethyl ethers
for a wide variety of conditions <cit.>.
An exact expression,
often referred to as the Yeh-Hummer approach,
has been derived to
describe the effect for simulations with a cubic unit cell as <cit.>
D_0 = D_PBC + k_BTζ/(6πη L) ,
with the box size L, and the shear viscosity η.
Here, D_PBC is the self-diffusion coefficient obtained for
a system with PBCs, and D_0 is the self-diffusion coefficient obtained
for L→∞. The parameter ζ≈2.8372974795
is the analogue to a Madelung constant
<cit.>
of a cubic lattice, which can be
computed via Ewald summation <cit.> according to
ζ = -L·{ [∑_𝐧≠ 0erfc(α n)/n] +
π/V[∑_𝐤≠ 04 e^-k^2/(4α^2)/k^2]
-π/α^2V
-2α/√(π)}
where α is the Ewald convergence parameter.
The vectors
𝐧=(n_x,n_y,n_z),
and
𝐤=(k_x,k_y,k_z)
are the real and reciprocal lattice vectors with
n_i=L m_i and
k_i=2π· m_i/L
with m_i being integer numbers, and
n=|𝐧| and k^2=|𝐤|^2, respectively.
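For reference, this Madelung-constant analogue can be evaluated directly from the expression above. A minimal numerical sketch, assuming α=1/L and a modest summation cutoff (already well converged):

```python
import numpy as np
from scipy.special import erfc

def zeta_cubic(L=1.0, m_max=8):
    """Madelung-constant analogue zeta for a cubic box, by direct Ewald summation
    with convergence parameter alpha = 1/L; should return ~2.8372974795."""
    alpha = 1.0 / L
    V = L**3
    m = np.arange(-m_max, m_max + 1)
    mx, my, mz = np.meshgrid(m, m, m, indexing="ij")
    mask = ~((mx == 0) & (my == 0) & (mz == 0))
    n = L * np.sqrt(mx**2 + my**2 + mz**2)[mask]               # real-space lattice vectors n = L*m
    k2 = (2 * np.pi / L)**2 * (mx**2 + my**2 + mz**2)[mask]    # squared reciprocal vectors k = 2*pi*m/L
    real_sum = np.sum(erfc(alpha * n) / n)
    recip_sum = (np.pi / V) * np.sum(4.0 * np.exp(-k2 / (4 * alpha**2)) / k2)
    return -L * (real_sum + recip_sum - np.pi / (alpha**2 * V) - 2 * alpha / np.sqrt(np.pi))

print(zeta_cubic())
```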
Equation <ref>
has been widely applied to determine the
system-size independent true self-diffusion coefficient
from MD simulations with PBCs.<cit.>
However, prior knowledge of the shear viscosity η is
required to perform the correction.
For orthorhombic box geometries, the presence of
unequal box-lengths leads to
different system-size dependencies for each of the components
D_ii of the diffusion
tensor 𝐃 such that
the self-diffusion tensor becomes anisotropic
even for an isotropic fluid.
To describe such a behavior,
Kikugawa et al. <cit.> have derived generalized
versions of Equations <ref> and <ref>,
which can be applied to systems with an
orthorhombic geometry using
D_0 = D_PBC,ii + k_BTζ_ii/(6πη L_i)
with i∈{x,y,z}. Here, L_i are the individual box-lengths
of the orthorhombic unit cell and
D_PBC,ii are the components of the
self-diffusion
tensor in the system with PBCs.
The ζ_ii represent the
direction-dependent Madelung constant analogues of the
orthorhombic lattice using
ζ_ii = -3/2 L_i·{ 1/2[∑_𝐧≠ 0erfc(α n)/n
+n_i^2/n^2(
erfc(α n)/n
+2α/√(π) e^-α^2n^2)
]
+π/V[∑_𝐤≠ 04 e^-k^2/(4α^2)/k^2
-k_i^2/α^2 k^2
e^-k^2/(4α^2)(
1+4α^2/k^2)
]
-π/α^2V
-α/√(π)}
with 𝐧=(n_x,n_y,n_z),
and
𝐤=(k_x,k_y,k_z)
being real and reciprocal lattice vectors with
n_i=L_i m_i and
k_i=2π· m_i/L_i,
based on integer numbers for m_i. Again, we use
n=|𝐧| and k^2=|𝐤|^2, while
α represents the Ewald convergence parameter.
Vögele and Hummer <cit.> have derived a similar expression
using Beenakker's expression for the Rotne-Prager tensor under PBCs.<cit.>
§ THE “ORTHOBOXY” METHOD
From Equation <ref> it follows that
for a cubic unit cell, the obtained self-diffusion coefficients
D_PBC
are always smaller than the true self-diffusion coefficient D_0.
For orthorhombic unit cells, however, this does not necessarily need to
be the case.<cit.> In fact, for a unit
cell with L_x=L_y≠ L_z,
diffusion
in x- and y-direction can even become accelerated
for certain ratios L_z/L_x=L_z/L_y.<cit.>
Using Equation <ref>, we have determined the exact
ratio where this change in sign occurs: by numerically computing the Madelung constant
analogues
ζ_xx, ζ_yy, and ζ_zz
from Equation <ref>, we have
obtained, in accordance with the analysis of
Kikugawa et al. <cit.>,
the condition ζ_xx=ζ_yy=0 to
be related to
a box geometry with a “magic” box-length ratio of
L_z/L_x=L_z/L_y≈ 2.7933596497.
Since the computation has been performed numerically,
we have determined
ζ_xx=ζ_yy< 10^-10 using the box geometry indicated above.
For this geometry, we have also computed the Madelung constant analogue
in z-direction to be ζ_zz≈ 8.1711245653.
The computations of Equation <ref> and
Equation <ref>
discussed above
were performed using double precision floating point arithmetic,
and an Ewald convergence parameter of
α=L_x^-1=L_y^-1
for Equation <ref>
and
α=L^-1 for Equation <ref>,
with m_i ranging between
-m_max≤ m_i≤ m_max using
m_max=100
for both the real and reciprocal lattice summation,
ensuring that the calculations are converged.
Given that we have two unknowns, D_0 and η, and
three equations, it is always possible to determine
both D_0 and η from
direction-dependent diffusion coefficients obtained from
a single MD simulation run
based on an orthorhombic unit cell.
However, utilizing MD simulations of an orthorhombic box
with L_z/L_x=L_z/L_y≈ 2.7933596497 is
particularly intriguing, since now the
x- and y- component of the diffusion tensor become system-size
independent such that
D_PBC,xx=D_PBC,yy=D_0.
Note that for such a case prior knowledge of the
shear viscosity is not required
for determining D_0, and
the self-diffusion coefficient for an infinitely large system
can be simply obtained via
D_0=(D_PBC,xx+D_PBC,yy)/2 .
In fact,
from Equation <ref> it follows that
for this case the
shear viscosity can also be computed directly
from the knowledge of the three components of
the diffusion tensor using
η= k_BTζ_zz/[3π L_z (D_PBC,xx+D_PBC,yy-2D_PBC,zz)]
with
ζ_zz≈ 8.1711245653.
Moreover, Equation <ref> suggests that
it is perhaps beneficial
to employ particularly small system sizes for determining
η due to an increasing difference between
D_0 and D_PBC,zz with decreasing system size.
This approach might therefore offer the opportunity for
determining the viscosity and true self-diffusion coefficient from computationally
expensive calculations such as
ab initio MD simulations.
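A minimal numerical illustration of the two working equations of the method is sketched below; the diffusion components, box height, and temperature are placeholder values, and all quantities are assumed to be given in SI units.

```python
import numpy as np

KB = 1.380649e-23   # Boltzmann constant in J/K

def orthoboxy_d0_eta(Dxx, Dyy, Dzz, Lz, T=298.0, zeta_zz=8.1711245653):
    """D_0 and eta from a single run in a box with L_z/L_x = L_z/L_y = 2.7933596497:
    D_0 = (D_xx + D_yy)/2,  eta = kB*T*zeta_zz / (3*pi*L_z*(D_xx + D_yy - 2*D_zz))."""
    D0 = 0.5 * (Dxx + Dyy)
    eta = KB * T * zeta_zz / (3.0 * np.pi * Lz * (Dxx + Dyy - 2.0 * Dzz))
    return D0, eta

# placeholder example: water-like diffusion coefficients in m^2/s and an assumed box height of 10 nm
D0, eta = orthoboxy_d0_eta(Dxx=2.28e-9, Dyy=2.27e-9, Dzz=2.10e-9, Lz=10.0e-9)
print(D0, eta)   # D0 in m^2/s, eta in Pa*s (about 1e-3 Pa*s for these inputs)
```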
§ MOLECULAR DYNAMICS SIMULATIONS
To test the OrthoBoXY approach outlined above, MD simulations of the TIP4P/2005 water model <cit.> were carried out; this model has been demonstrated to describe the properties of water accurately compared to other simple rigid nonpolarizable water models.<cit.>
Simulations were performed at a temperature of T=298 K under NVT and NPT conditions, either at a density of ρ=0.9972 g cm^-3 (NVT) or at a pressure of P=1 bar (NPT).
Various system sizes are used for both
cubic and orthorhombic box geometries.
MD simulations of 10 ns length each were performed
using Gromacs 5.0.6.<cit.>
The integration time step for all simulations was 2 fs.
The temperature of the simulated systems was controlled employing the
Nosé-Hoover thermostat <cit.>
with
a coupling time τ_T=1.0 ps.
Constant pressure simulations were realized using
a Rahman-Parrinello barostat
<cit.> employing τ_p = 2.0 ps and χ_T=33·10^-6.
Both the Lennard-Jones and the electrostatic interactions were treated by smooth
particle mesh Ewald summation.<cit.>
The Ewald convergence parameter
was chosen such that the relative accuracy of the Ewald sum was 10^-5 for the Coulomb interaction and 10^-3 for the LJ interaction.
All bond lengths were kept fixed during the simulation run and
distance constraints were solved by means of
the SETTLE procedure. <cit.>
The simulations were carried out in 20 subsequent segments of
500 ps length each. All reported properties were
then calculated for
those segments separately in order to be able to estimate the
error
using standard statistical analysis procedures.<cit.>
§ RESULTS AND DISCUSSION
Self-diffusion coefficients were computed
from the slope of the center-of-mass mean square displacement of the
water molecules using the Einstein formula <cit.> according to
D_PBC = 1/6 ∂/∂ t lim_t→∞ < |𝐫(0) - 𝐫(t)|^2 > ,
and
D_PBC,ii = 1/2 ∂/∂ t lim_t→∞ < |r_i(0) - r_i(t)|^2 > ,
where 𝐫(t)=[r_x(t),r_y(t),r_z(t)] represents the position of the center of mass
of a water molecule at time t and the r_i(t) are its respective components
in x-, y-, and z-direction.
All computed self-diffusion coefficients
shown in Tables <ref> and
<ref> were
determined from the slope of the mean square
displacement of the water molecules
fitted to time intervals between 15 and 200 ps.
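A schematic of this analysis is sketched below; the trajectory array, the frame spacing, and the 15-200 ps fit window are assumed inputs, and unwrapped center-of-mass coordinates are assumed.

```python
import numpy as np

def msd_components(positions, dt_frame):
    """positions: array (n_frames, n_molecules, 3) of unwrapped center-of-mass coordinates.
    Returns lag times (in units of dt_frame) and the direction-resolved MSD <|r_i(0) - r_i(t)|^2>."""
    n_frames = positions.shape[0]
    lags = np.arange(1, n_frames)
    msd = np.zeros((len(lags), 3))
    for idx, lag in enumerate(lags):
        disp = positions[lag:] - positions[:-lag]        # displacements over all time origins
        msd[idx] = np.mean(disp**2, axis=(0, 1))
    return lags * dt_frame, msd

def diffusion_from_msd(t, msd, t_min=15.0, t_max=200.0):
    """D_PBC,ii = (1/2) * slope of MSD_i(t), fitted in the window [t_min, t_max] (ps);
    returns (D_xx, D_yy, D_zz) in nm^2/ps for MSD given in nm^2."""
    window = (t >= t_min) & (t <= t_max)
    return np.array([np.polyfit(t[window], msd[window, i], 1)[0] / 2.0 for i in range(3)])
```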
Table <ref> contains results from MD simulations using
orthorhombic unit cells with
L_z/L_x=L_z/L_y≈ 2.7933596497
for system sizes between 768 and 6144 water molecules,
while Table <ref> contains the data
obtained for cubic unit cells
with system sizes between 256 and 2048 water molecules.
The diffusion coefficients D_0 obtained from the
simulations based on an
orthorhombic system, shown
in Figure <ref> and given
in Table <ref>, exhibit no systematic system size
dependence. Here the average over the different system sizes
is determined to be
D_0=(2.277±0.013)× 10^-9 m^2s^-1.
As shown in Figure <ref>,
the computed self-diffusion coefficients in
z-direction D_PBC,zz, however, show a strong system
size dependence. From the knowledge of D_0 and
D_PBC,zz we compute the shear viscosity
η.
The computed viscosities for all systems considered are
shown in Table <ref>. No
systematic system size dependence is observed,
leading to an average value
of η=(0.900±0.051) mPa s
for the viscosity of TIP4P/2005 water at T=298 K
when
averaging over all systems.
Note that the computed errors
of η also do not show any systematic variation
with the system size although the accuracy
of the computed self-diffusion coefficients decreases with decreasing
system size. Possibly, the increasing difference between D_0 and
D_PBC,zz with decreasing system size
is compensating for this loss of accuracy, as anticipated earlier.
The computed average viscosity is close to the experimental
value for water of 0.8928 mPa s
at 298 K given by Harris and Woolf.<cit.>
It is, however, slightly larger than the viscosity value of
0.855 mPa s reported by González and Abascal
<cit.>,
and the value of 0.83±0.07 mPa s reported by
Tazi et al. <cit.> for the TIP4P/2005 model.
We would like to point out that this slightly
enhanced viscosity might be related to the fact
that we applied the
PME summation for both the Lennard-Jones interactions
and the Coulomb interactions in our simulations.
Note that the enhanced viscosity is accompanied by a
similarly reduced diffusivity: when scaling
the diffusion coefficient of
(2.49± 0.06)× 10^-9 m^2s^-1, reported
by Tazi et al. <cit.>
(which was also determined by applying the Yeh-Hummer correction) by a
factor of 0.83/0.90, we end up with a diffusion coefficient
of 2.296 × 10^-9 m^2s^-1,
which
matches very well the diffusion coefficient determined here.
Both values lie close to the experimental value
of 2.3× 10^-9 m^2s^-1
at 298 K.<cit.>
The computed viscosities shown in Table <ref>
are estimated with a relative accuracy between 10 % and
14 %, which is not particularly impressive.
However, it is
comparable to the accuracy which is available via the integration
over the stress-tensor auto-correlation function reported
by Tazi et al.<cit.>
The diffusion coefficients D_PBC obtained for the cubic systems shown
in Table <ref> exhibit
the familiar system size dependence <cit.> and are corrected according
to Equation <ref>
using the average shear viscosity of
η=(0.900±0.051) mPa s discussed above.
Again, the computed D_0 show no
systematic system size dependence and are leading to an
average value of
D_0=(2.279±0.010)× 10^-9 m^2s^-1, which is
consistent with our simulations employing orthorhombic unit cells.
To test whether the outlined procedure is also applicable to MD simulations
performed under NPT conditions, we have conducted an additional constant
pressure simulation of an orthorhombic system
using the “magic” box-length ratio of
L_z/L_x=L_z/L_y≈ 2.7933596497
for a system-size of N=3072 water molecules, as shown in Table
<ref>. Here, we have applied an equal scaling of the box-lengths
in the Rahman-Parrinello barostat to keep the box-length ratio fixed.
The computed diffusion coefficient
D_0=(2.290 ± 0.030)× 10^-9 m^2s^-1
and viscosity of
η=(0.884± 0.124) mPa s fall well within the range
of data computed from NVT simulations.
§ CONCLUSION
In conclusion, we would like to point out that with
the proposed OrthoBoXY approach of using
an orthorhombic system
with a “magic” box-length ratio of
L_z/L_x=L_z/L_y≈ 2.7933596497,
we are able to determine the true (i.e. system size independent)
self-diffusion coefficient D_0 for TIP4P/2005 water without prior knowledge
of the shear viscosity
from a single MD simulation run
by doing nothing more than just employing a particularly odd shaped
simulation box. The computed values for D_0 agree with the values
determined from MD simulations employing cubic unit cells by applying the
widely used Yeh-Hummer correction.
In addition, from the analysis of the diffusion coefficients it is also possible
to derive the shear viscosity with an accuracy
comparable to the accuracy which is achieved via the integration
over the stress-tensor auto-correlation function.
Both the computed self-diffusion coefficient and the shear viscosity
agree nearly quantitatively with the experimentally observed
data for water at 298 K.
§ ACKNOWLEDGEMENTS
We thank the computer center at the University of
Rostock (ITMZ) for providing and maintaining computational resources. The authors thank
J.K. Philipp for proofreading the manuscript.
§ DATA AVAILABILITY STATEMENT
The code of
https://www.gromacs.orgGROMACS is freely available.
Input parameter and topology files for the MD simulations and the
code for computing the Madelung constant analogues for cubic and orthorhombic
lattices can be downloaded from GitHub via
https://github.com/Paschek-Lab/OrthoBoXY/https://github.com/Paschek-Lab/OrthoBoXY/
10
duenweg_1993
B. Dünweg and K. Kremer.
Molecular dynamics simulation of a polymer chain in solution.
J. Chem. Phys., 99:6983–6997, 1993.
yeh_2004
I.-C. Yeh and G. Hummer.
System-size dependence of diffusion coefficients and viscosities from
molecular dynamics simulations with periodic boundary conditions.
J. Phys. Chem. B, 108:15873–15879, 2004.
kikugawa_2015
G. Kikugawa, T. Nakano, and T. Ohara.
Hydrodynamic consideration of the finite size effect on the
self-diffusion coefficient in a periodic rectangular parallelepiped system.
J. Chem. Phys., 143:024507, 2015.
voegele_2016
M. Vögele and G. Hummer.
Divergent diffusion coefficients in simulations of fluids and lipid
membranes.
J. Phys. Chem. B, 120:8722–8732, 2016.
moultos_2016
O. A. Moultos, Y. Zhang, I. O. Tsimpanogiannis, I. G. Economou, and E. J.
Maginn.
System-size corrections for self-diffusion coefficients calculated
from molecular dynamics simulations: The case of CO_2, n-alkanes, and
poly(ethylene glycol) dimethyl ethers.
J. Chem. Phys., 145:074109, 2016.
beenacker_1986
C. W. J. Beenakker.
Ewald sum of the Rotne-Prager tensor.
J. Chem. Phys., 85:1581–1582, 1986.
hasimoto_1959
H. Hasimoto.
On the periodic fundamental solutions of the Stokes equations and
their application to viscous flow past a cubic array of spheres.
J. Fluid Mech., 5:317–328, 1959.
Maginn_2019
E. J. Maginn, R. A. Messerly, D. J. Carlsson, D. R. Roe, and J. R. Elliott.
Best practices for computing transport properties 1. Self-diffusivity
and viscosity from equilibrium molecular dynamics [Article v1.0].
Living J. Comput. Mol. Sci., 1:6324, 2019.
kikugawa_2015a
G. Kikugawa, S. Ando, J. Suzuki, Y. Naruke, T. Nakano, and T. Ohara.
Effect of the computational domain size and shape on the
self-diffusion coefficient in a Lennard-Jones liquid.
J. Chem. Phys., 142:024503, 2015.
abascal_2005
J. L. F. Abascal and C. Vega.
A general purpose model for the condensed phases of water:
TIP4P/2005.
J. Chem. Phys., 123:234505, 2005.
vega_2011
C. Vega and J. L. F. Abascal.
Simulating water with rigid non-polarizable models: a general
perspective.
Phys. Chem. Chem. Phys., 13:19633–19688, 2011.
gromacs4
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark, and H. J. C.
Berendsen.
GROMACS: fast, flexible, and free.
J. Comput. Chem., 26(16):1701–1718, 2005.
gromacs3
B. Hess, C. Kutzner, D. van der Spoel, and E. Lindahl.
Gromacs 4: algorithms for highly efficient, load-balanced, and
scalable molecular simulation.
J. Chem. Theory Comput., 4(3):435–447, 2008.
Nose:1984
S. Nosé.
A molecular dynamics method for simulations in the canonical
ensemble.
Mol. Phys., 52:255–268, 1984.
Hoover:1985
W. G. Hoover.
Canonical dynamics: Equilibrium phase-space distributions.
Phys. Rev. A, 31:1695–1697, 1985.
Parrinello:1981
M. Parrinello and A. Rahman.
Polymorphic transitions in single crystals: A new molecular dynamics
method.
J. Appl. Phys., 52:7182–7190, 1981.
Nose:1983
S. Nosé and M. L. Klein.
Constant pressure molecular dynamics for molecular systems.
Mol. Phys., 50:1055–1076, 1983.
Essmann:1995
U. Essmann, L. Perera, M. Berkowitz, T. Darden, H. Lee, and L. Pedersen.
A smooth particle mesh Ewald method.
J. Chem. Phys., 103:8577–8593, 1995.
wennberg_2013
C. L. Wennberg, T. Murtola, B. Hess, and E. Lindahl.
Lennard-Jones lattice summation in bilayer simulations has
critical effects on surface tension and lipid properties.
J. Chem. Theory Comput., 9:3527–3537, 2013.
wennberg_2015
C. L. Wennberg, T. Murtola, S. Páll, M. J. Abraham, B. Hess, and E. Lindahl.
Direct-space corrections enable fast and accurate
Lorentz-Berthelot combination rule Lennard-Jones lattice summation.
J. Chem. Theory Comput., 11:5737–5746, 2015.
miyamoto_1992
S. Miyamoto and P. A. Kollman.
Settle: An analytical version of the shake and rattle algorithm for
rigid water models.
J. Comput. Chem., 13:952–962, 1992.
allentildesley
M. P. Allen and D. J. Tildesley.
Computer Simulation of Liquids.
Oxford University Press, Clarendon, Oxford, 1987.
numrecipes
W. H. Press, S. A. Teukolsky, W. T. Vetterling, and P. Flannery.
Numerical Recipes in C: The Art of Scientific Computing.
Cambridge University Press, Cambridge, USA, 2 edition, 1992.
harris_2004
K. R. Harris and L. A. Woolf.
Temperature and volume dependence of the viscosity of water and heavy
water at low temperatures.
J. Chem. Eng. Data, 45:1064–1069, 2004.
harris_2004c
K. R. Harris and L. A. Woolf.
Correction: Temperature and volume dependence of the viscosity of
water and heavy water at low temperatures.
J. Chem. Eng. Data, 45:1851, 2004.
gonzales_2010
M.A. Gonzáles and J.L.F. Abascal.
The shear viscosity of rigid water models.
J. Chem. Phys., 132:096101, 2010.
tazi_2012
S. Tazi, A. Botan, M. Salanne, V. Marry, P. Turq, and B. Rotenberg.
Diffusion coefficient and shear viscosity of rigid water models.
J. Phys.: Condens. Matter, 24:284117, 2012.
krynicki_1978
K. Krynicki, C. D. Green, and D. W. Sayer.
Pressure and temperature dependence of self-diffusion in water.
Faraday Discuss. Chem. Soc., 66:199–208, 1978.
|
http://arxiv.org/abs/2307.01160v1
|
20230703171027
|
Optimized experimental optical tomography of quantum states of room-temperature alkali-metal vapor
|
[
"Marek Kopciuch",
"Magdalena Smolis",
"Adam Miranowicz",
"Szymon Pustelny"
] |
quant-ph
|
[
"quant-ph",
"physics.atom-ph"
] |
1Doctoral School of Exact and Natural Sciences, Jagiellonian University, Faculty of Physics, Astronomy and Applied Computer Sciences, Łojasiewicza 11, 30-348 Kraków, Poland
2Institute of Physics, Jagiellonian University in Kraków, Łojasiewicza 11, 30-348 Kraków, Poland
3Institute of Spintronics and Quantum Information, Faculty of Physics, Adam Mickiewicz University, 61-614 Poznań, Poland
*[email protected]
**[email protected]
We demonstrate a novel experimental technique for quantum-state tomography of the collective density matrix. It is based on measurements of the polarization of light traversing the atomic vapor. To assess the technique's robustness against errors, experimental investigations are supported with numerical simulations. This not only allows us to determine the fidelity of the reconstruction, but also to analyze the quality of the reconstruction for specific experimental parameters (light tuning and number of measurements). By utilizing the so-called conditional number, we demonstrate that the reconstruction can be optimized for a specific tuning of the system parameters, and that further improvement is possible by selective repetition of the measurements. Our results underscore the potential for high-fidelity quantum-state reconstruction while optimizing measurement resources.
§ INTRODUCTION
Quantum technology is built on the precise manipulation and reconstruction of quantum states. When dealing with single microscopic quantum objects, the reconstruction of states becomes challenging. This stems from the (often) destructive nature of the reconstruction and the small amplitudes of recorded signals. To address these difficulties, some researchers have turned their focus towards studying ensembles of quantum objects, which display a collective quantum behavior <cit.>.
Atomic vapors serve as a prime example of a medium utilized for the engineering of collective quantum states. In their ultracold form, they allow for precise quantum control through light and other external fields, albeit implementation of the control requires complex experimental setups. On the other hand, room-temperature vapors can be studied using simpler apparatuses, but they simultaneously present challenges in terms of theoretical understanding <cit.>. Despite these problems, however, the room-temperature atomic vapors were used to demonstrate various quantum-mechanical effects including coherent population trapping <cit.>, spin squeezing <cit.>, macroscopic entanglement <cit.>, spin waves <cit.>, squeezed light generation <cit.> and entanglement of light modes<cit.>. Rubidium vapor was also used to construct an on-demand quantum memory <cit.>. These experiments revived the interest in such media, while also necessitated the development of reliable quantum-state tomography (QST) methods.
In this work, we demonstrate the first experimental implementation of the recently proposed QST method of Ref. <cit.>. The method enables the reconstruction of a collective density matrix of a room-temperature atomic vapor and is based on illuminating the vapor with off-resonant probing light and monitoring the properties of the light after it traverses the medium, which is subjected to an external magnetic field. This enables the reconstruction of a collective quantum state of ^87Rb atoms residing in the F=1 ground state (qutrit).
To evaluate the efficiency of the tomographic technique, we used the so-called conditional number <cit.>. Previously, the parameter was used for a comprehensive comparison of tomographic methods of two polarization qubits <cit.>, NMR tomography of two ^1H spins-1/2 (two qubits) <cit.>, and a single nuclear spin-3/2 (a quartit) in a semiconductor quantum well <cit.>. We demonstrate that by an appropriate tuning of the probing light, the conditional number can be minimized (corresponding to an optimized reconstruction), reaching values as small as 2.25. We also discuss means of further improvement of the reconstruction efficiency by repeating specific measurements.
§ PRINCIPLES OF THE OPTICAL TOMOGRAPHY
We begin with a brief overview of the QST technique developed in Ref. <cit.>. This method relies on measuring the polarization rotation of linearly polarized probe light traversing a medium (e.g., room-temperature alkali metal atoms) subjected to a longitudinal magnetic field. We assume that the amplitude of the light is low, which allows us to describe its interaction with atoms using perturbation theory at the lowest order (linear interaction). At the same time, unlike previous approaches (see, e.g. Ref. <cit.>), we do not assume a significant detuning of the light from the optical transition. This enables us to consider not only the vector contributions to the polarization rotation <cit.>, but also the tensor one <cit.>, and hence reconstruct the collective density matrix of the atoms. It is noteworthy that this reconstruction is achieved without full control over the system, as successive magnetic sublevels are equally split due to a weak magnetic field (under the conditions of the linear Zeeman effect) <cit.>.
In Ref. <cit.>, the relation between time-dependent polarization rotation δα(t) and operators α̂_R,I and β̂ was introduced. The operators are associated with coherences and population difference of specific magnetic sublevels hence provide access to specific density-matrix elements. In this work, we employ a slightly modified version of that relationship, i.e.,
δα(t;Δ) = η (Δ) ( e^-γ_1 t[ α̂_Rsin (2Ω_L t)
+ α̂_Icos(2 Ω_L t) ] - ζ(Δ) e^-γ_2 tβ̂),
where η(Δ)=χ V_R(Δ) and ζ(Δ)=V_I(Δ)/V_R(Δ) are the so-called global and local scaling factors associated with real V_R and imaginary V_I parts of the Voigt profile, and χ is related to experimental parameters such as atomic density and transition frequency (for more details see the Supplemental Information – SI). As shown in Eq. (<ref>), the time dependence of the polarization rotation is determined by the Larmor frequency Ω_L and the relaxation rates γ_1 and γ_2.
Since a single measurement described by Eq. (<ref>) allows us to extract only limited information about the system (specifically the population difference and the coherence between magnetic sublevels with Δ m_F = 2), it is necessary to expand the set of measured signals to obtain more comprehensive information. To achieve this, we introduce a series of unitary operations known as control pulses, which systematically manipulate a given state in the Hilbert space. This provides access to other density-matrix elements and hence offers a complete characterization of the system <cit.>. In turn, the reconstruction problem can be presented as
𝕆ρ_V = b⃗,
where 𝕆 represents the coefficient matrix determined by the set of observables, ρ_V = [ ρ_1̅1̅^R, ρ_1̅0^R, ρ_1̅0^I, …]^T (where ρ_mn^R = Re{ρ_mn}, ρ_mn^I = Im{ρ_mn} and 1̅ = -1) is the vectorized form of a standard-form density matrix ρ with entries ρ_ij (see SI for more information), and b⃗ is the observation vector, which contains the measured values of the observables. In a typical experimental scenario, the set of measurements given in Eq. (<ref>) is often overdetermined, and it is advantageous to rescale it to a more suitable form
ℂρ_V = b̃⃗̃,
where ℂ = 𝕆^†𝕆 and b̃⃗̃ = 𝕆^†b⃗. This rescaling enables the calculation of the density operator by simply inverting the aforementioned linear problem.
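A minimal numerical sketch of this linear-inversion step is given below; the coefficient matrix is a random stand-in for the actual 𝕆 built from the measured observables (it is not the experimental matrix) and serves only to illustrate the algebra of Eqs. (<ref>) and (<ref>).
```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in coefficient matrix O: rows = measurements, columns = the 8
# independent real parameters of a (trace-one) qutrit density matrix
O = rng.normal(size=(12, 8))            # overdetermined measurement set
rho_true = rng.normal(size=8)           # vectorized "true" state (illustrative)
b = O @ rho_true + 1e-3 * rng.normal(size=12)   # noisy observation vector

# rescaled problem  C rho_V = b_tilde  with  C = O^dag O
C = O.T @ O
b_tilde = O.T @ b
rho_V = np.linalg.solve(C, b_tilde)     # linear-inversion estimate

print(np.max(np.abs(rho_V - rho_true)))  # small residual for small noise
```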
§ EXPERIMENTAL DETAILS
§.§ Experimental setup
The heart of our experimental system is a 3 cm diameter paraffin-coated spherical cell, containing an isotopically enriched sample of ^87Rb atoms. The cell is heated up to 50^∘C and is placed inside a cylindrical magnetic shield made of three layers of mumetal and a cubic innermost ferrite layer. Apart from the cell, the shield additionally contains a set of magnetic-field coils, which enables residual-field compensation and generation of magnetic-field pulses in the x⃗, y⃗, and z⃗ directions. Light used for the illumination of the rubidium atoms is provided by three diode lasers, where the pump and probe lasers are distributed-feedback lasers (DFBs), and the repump laser is a Fabry-Perot diode laser in an external-cavity configuration (ECDL). All lasers are independently tuned, and the repump laser wavelength is frequency-stabilized using a Dichroic Atomic Vapor Laser Lock (DAVLL) <cit.>. The wavelengths of the other two lasers are passively maintained due to their inherent temporal stability. Performance of all lasers is monitored using a wavemeter, while the pump and probe lasers are additionally monitored through saturated absorption spectroscopy (SAS). The intensities of the laser beams are dynamically controlled by three acousto-optical modulators (AOMs). To generate a specific quantum state in the vapor, the pump-light polarization is set by polarizers (POLs) and quarter-wave plates (λ/4), while the repump light is linearly polarized orthogonally to the pump-light propagation direction (i.e., along the y⃗-axis). To determine the local scaling factor (see discussion below), the intensity of the probe light is monitored, and its y⃗ linear polarization prior to the shield is provided by a Glan-Thompson polarizer. Finally, the polarization rotation of the probe light is measured after the cell using a balanced polarimeter consisting of a Wollaston prism (WOL) and a balanced photodetector (BPD). The schematic of the setup is shown in Fig. <ref>(a).
§.§ Experimental sequence
The experimental sequence utilized in our measurements is shown in Fig. <ref>(b). The sequence begins with a pumping period during which a specific quantum state is engineered. This stage typically consists of a 200 ms light pulse (optical pumping), which is applied simultaneously with the repumping that prevents the atoms from escaping into the dark (F=2) state, followed by a few short (≈100 μs) magnetic-field pulses, enabling generation of a desired complex state. Subsequently, a series of magnetic-field pulses is used to mitigate technical problems (see Sec. <ref>), which is followed by a set of control pulses. Once the pulses are completed, a constant magnetic field along the z⃗-direction, ranging from 10-100 nT, is established. At the same time, a probe light beam, propagating along z⃗ with an intensity of 1-10 μW/cm^2, is turned on. In order to improve the signal-to-noise ratio, the intensity of the probe light is modulated at a frequency of 200 kHz and the polarimeter signal is detected using a lock-in amplifier.
§.§ Global and local scaling factor
An important element of the reconstruction of the density matrix is the determination of the global scaling factor η(Δ) [see Eq. (<ref>)]. This can be done by measuring the light absorption in an unpolarized vapor. Using the absorption relationship derived in the SI, the factor can be identified by comparing the absorption of the probe light, tuned to the same wavelength as that during the tomography measurements (i.e., blue-detuned from f=1→ F=2 by 50–400 MHz), with the absorption of far-detuned light (>15 GHz).
η(Δ) = (27/16) ( √( [U_2(Δ)/U_1(Δ)] / [U_2(∞)/U_1(∞)] ) - 1 ),
where U_1 is the voltage measured at the transimpedance photodetector placed in front of and U_2 after the medium (see Fig. <ref>(a) and the SI for more details) with Δ indicating the probe light tuned for QST and ∞ far-detuned light.
Experimental determination of the local scaling factor ζ(Δ) [see Eq. (<ref>)] presents a greater challenge. It requires preparation of an anisotropic, yet well-defined quantum state. In this work, we select “stretched” states that are generated along the x⃗- and z⃗-axes. The first state can be created by illuminating the atoms with a circularly polarized pump light propagating along the x⃗-axis. The preparation of the second state is more involved and requires the application of an additional magnetic-field pulse after the pumping, which rotates the atomic x⃗-polarization to the z⃗-direction (we have experimentally verified that this process did not introduce dephasing, as evidenced by the unchanged signal amplitude for a many-π pulse). Employing this procedure allows us to mitigate potential systematic errors arising from varying polarization levels achieved with the pump light propagating along different directions, while simultaneously simplifying the experimental setup. The formulas for the light polarization rotation corresponding to these two states are (see the SI for more details)
δα^(z) (t;Δ) = -[5(1-ϵ)/24] η(Δ) ζ (Δ) e^-γ_2 t,
δα^(x) (t;Δ) = -[(1-ϵ)/48] η(Δ) e^-γ_1 t cos (2 Ω_L t),
where ϵ is the remaining isotropic part of the state. This allows one to calculate the local scaling factor
ζ(Δ) = (1/10) [δα^(z)(0; Δ) / δα^(x)(0; Δ)].
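In practice, the two scaling factors follow directly from the measured transmissions and the fitted t = 0 rotation amplitudes; a short sketch is given below (all numerical inputs are illustrative placeholders, and the prefactor 27/16 follows Eq. (<ref>) above).
```python
import numpy as np

def global_scaling(U1_det, U2_det, U1_far, U2_far):
    """eta(Delta) from photodetector voltages measured before (U1) and after
    (U2) the cell, with the probe tuned for tomography (det) and far-detuned
    (far); prefactor as in Eq. (<ref>)."""
    return 27.0 / 16.0 * (np.sqrt((U2_det / U1_det) / (U2_far / U1_far)) - 1.0)

def local_scaling(dalpha_z0, dalpha_x0):
    """zeta(Delta) from the t = 0 polarization-rotation amplitudes of the two
    'stretched' states, Eq. (<ref>)."""
    return 0.1 * dalpha_z0 / dalpha_x0

# illustrative placeholder values only
eta = global_scaling(U1_det=1.00, U2_det=0.82, U1_far=1.00, U2_far=0.97)
zeta = local_scaling(dalpha_z0=-2.1e-3, dalpha_x0=-0.35e-3)
print(eta, zeta)
```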
§.§ CYCLOPS-like measurement
Equation (<ref>) shows that our reconstruction method is sensitive to the initial phase of the measured signal. As uncontrollable phase delays are present in every experiment, the identification of the quadrature components of the signal becomes difficult. To address this issue, we adapt the CYCLically Ordered Phase Sequence (CYCLOPS) method, commonly utilized in nuclear magnetic resonance experiments <cit.>. In our approach, we leverage the fact that the π-rotation of the state around the y⃗-axis leads to a sign reversal of α̂_I and β̂ (for more information, see the SI). At the same time, by applying the pulse rotating the state by π/2 around the z⃗-axis and next the pulse rotating the state around the y⃗-axis by π (see the SI), the signs of α̂_R and β̂ are reversed. By subtracting these two transformed states from the initial signal, we obtain
( δα - δα^(Y)) (t; Δ) = 2 η (Δ) [ - ζ(Δ) e^-γ_2 tβ̂+e^-γ_1 tα̂_Icos(2 Ω_L t + φ) ],
( δα - δα^(ZY)) (t; Δ) = 2 η (Δ) [ - ζ(Δ) e^-γ_2 tβ̂ + e^-γ_1 tα̂_Rsin(2 Ω_L t + φ) ],
where φ is an unknown phase shift originating from the experimental apparatus. In our CYCLOPS-like measurements, the problem of unknown phase is alleviated, as the final signals [see Eqs. (<ref>)] depend only on one quadrature (via either sine or cosine time dependence) and, thus, φ becomes insignificant. The procedure also allows us to remove systematic shifts of the signals associated with the imbalance of the polarimeter (for more details, see the SI).
§ RECONSTRUCTION OF STATES
To perform QST, we conducted the above-described nine measurements, consisting of three sets of CYCLOPS-like pulses for each of three control pulses. To ensure the self-consistency of our reconstruction procedure, we simultaneously fit all of the polarization-rotation signals with shared parameters such as the global phase, relaxation rates, and oscillation frequency. The fitting values are then used to determine the observables and reconstruct the qutrit density-matrix elements using the linear inversion method given in Eq. (<ref>). However, as this method does not guarantee the reconstructed matrices to be positive semidefinite, we utilize the maximum likelihood method with the Euclidean norm <cit.> to find the closest physical realization of the reconstructed matrix.
To validate our tomography technique, we compare the reconstructed density matrices with numerical simulations of the state obtained during the pumping stage. For the simulations, we assume the interaction of an appropriately polarized light with a Doppler-broadened medium consisting of atoms of the energy-level structure similar to that of the D_1 line in ^87Rb. As in the real experiment, we assume that there are two distinct regions between which the atoms can freely move. In the first region, the atoms evolve in a homogeneous magnetic field and relax to thermal equilibrium due to the collisions with vapor-cell walls and between one another. This corresponds to the atoms residing outside of the light beams. In the second region, the atoms still interact with the magnetic field but also with the pump and repump light. Moreover, we neglect the wall relaxation in this region. The latter region corresponds to the atoms inside the light beams. All parameters used in the simulations match the parameters of our experimental setup.
As representative examples for our reconstruction, we consider two states that can be easily generated experimentally and simulated theoretically. The first state can be pumped with a strong, circularly polarized pumping light, propagating along the x⃗-axis [Fig. <ref>(a)]. The state has a nonuniform population distribution and all of its coherences are nonzero. This allows us to demonstrate that our method can reconstruct not only different coherences but also determine their amplitudes and phases with high accuracy. The results of the experimental reconstruction and simulations are presented in Fig. <ref>(a). As seen, the results are in very good agreement, revealing a reconstruction fidelity of 0.995. As the second example, we considered a state pumped with the π-polarized light, propagating along the x⃗-axis. In the ideal case (without experimental artefacts), this scheme leads to the total depletion of the m_F=± 1 states and no coherences between any sublevels. As shown in Fig. <ref>(b), our measurements demonstrate a good agreement with numerical simulations, with a fidelity of 0.998. Nonetheless, one can notice that a very small amplitude of the coherences can lead to the deterioration of the phase reconstruction. The very high quality of the reconstruction of these two representative states demonstrates the usefulness of our QST technique.
§ CONDITIONING AND OPTIMIZATION OF QUANTUM STATE TOMOGRAPHY
§.§ Condition number in linear inversion
As mentioned above, the condition number κ is a useful parameter to evaluate the reliability of a QST method [see Eq. (<ref>)]. Specifically, to quantify the ability to tolerate errors or sensitivity to the errors, we use the condition number of a (nonsingular) matrix ℂ, which, assuming the spectral norm ‖·‖_2, can be defined as <cit.>
κ(ℂ) = ‖ℂ‖_2 ‖ℂ^-1‖_2 = max[ svd(ℂ)] max[ svd(ℂ^-1)] = max[ svd (ℂ)] / min[ svd(ℂ)] ≥ 1,
where svd(ℂ) denotes the singular values of ℂ. The significance of this error-robustness parameter explains well the Gastinel-Kahan theorem <cit.>, which states that a relative distance of a nonsingular square matrix ℂ from the set of singular matrices corresponds to the inverse of a condition number. Utilizing the error δb̃⃗̃ in the observation vector b̃⃗̃ and the condition number κ(ℂ), one can estimate the error δρ_V in the reconstructed density matrix ρ_V from the so-called Atkinson inequalities <cit.>
[1/κ(ℂ)] ‖δb̃⃗̃‖/‖b̃⃗̃‖ ≤ ‖δρ_V‖/‖ρ_V‖ ≤ κ(ℂ) ‖δb̃⃗̃‖/‖b̃⃗̃‖.
When the condition number approaches 1, it becomes apparent that small relative variations in the observation vector b̃⃗̃ result in correspondingly small relative changes in the reconstructed state ρ_V. In order to account for errors δℂ present in the coefficient matrix ℂ, these inequalities can be expanded according to the formulation derived in Ref. <cit.>, giving rise to the expression
‖δρ_V‖/‖ρ_V‖ ≤ [ κ(ℂ) / (1 - κ(ℂ) ‖δℂ‖/‖ℂ‖) ] [ ‖δb̃⃗̃‖/‖b̃⃗̃‖ + ‖δℂ‖/‖ℂ‖ ].
By referring to the inequalities in Eqs. (<ref>) and (<ref>), we can infer that the quality of a QST method, in terms of its error sensitivity or robustness, can be assessed through its condition number κ(ℂ), which characterizes the degree to which small (large) changes in the observation vector b̃⃗̃ lead to relatively small (large) changes in the reconstructed state ρ_V. Thus, if κ(ℂ) is small (large), the QST method is well-conditioned (ill-conditioned), indicating the robustness (sensitivity) of the method to errors in the observation vector b̃⃗̃. In the case of ill-conditioned QST, even slight errors in b̃⃗̃ can cause significant errors in the reconstructed ρ_V. In short, the smaller the condition number the stronger robustness of a given linear-inversion-based QST method against errors. Thus, one can refer to an optimal method in this respect if κ(ℂ)=1. Numerical examples of ill-conditioned QST problems can be found in Refs. <cit.>.
§.§ Optimization via probe light tuning
In order to optimize a QST process, it is desired to make the coefficient matrix ℂ more isotropic, which means that each measurement brings an equal amount of information about the system. A simple example of such an optimized problem is when each measurement brings information about only a specific density-matrix element, with all measurements having the same weight <cit.>. In this case, the coefficient matrix ℂ is proportional to the identity. Even though such optimization is intuitive, it is often impractical, as the experimental transformations required to achieve a desired scheme are very complex. Instead, here we propose a scheme where a single experimental parameter is adjusted. In our case, this parameter is the probing light detuning, which, incorporated in Eq. (<ref>) through ζ(Δ), makes one of the observables detuning-dependent.
It is important to note that our method does not guarantee an optimal tomography process, κ(ℂ) = 1. Therefore, to explore the limit of the method, we calculate the eigenvalues of the coefficient matrix with ζ(Δ) as a free parameter. In our case, the eigenvalues of ℂ can be analytically calculated, taking the values: {1/100, 1/150, 1/225, 1/225, 1/225, ζ^2/18, ζ^2/9, ζ^2/9}. From this, we obtain the dependence of κ(ℂ) on ζ(Δ) [Fig. <ref>(a)] and a minimal possible conditional number of 2.25 is determined.
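Since ℂ is Hermitian here, its condition number is simply the ratio of its largest to smallest eigenvalue, so the analytic spectrum quoted above can be scanned over ζ directly; the short sketch below reproduces the minimal value of 2.25.
```python
import numpy as np

def kappa_of_zeta(zeta):
    """Condition number of C for the eigenvalue set quoted in the text."""
    eig = np.array([1/100, 1/150, 1/225, 1/225, 1/225,
                    zeta**2/18, zeta**2/9, zeta**2/9])
    return eig.max() / eig.min()

zetas = np.linspace(0.05, 1.0, 2000)
kappas = np.array([kappa_of_zeta(z) for z in zetas])
i = int(np.argmin(kappas))
print(zetas[i], kappas[i])   # kappa reaches its minimum of 2.25
```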
To further illustrate the effect of the probing-light detuning on the reconstruction uncertainty and, hence, demonstrate the potential of this approach, we perform a series of reconstructions of a state generated under the same conditions but reconstructed using different probing-light detunings. In our experiment, the detuning is changed from 50 to 270 MHz. The results of these investigations are shown in Fig. <ref>(b). They demonstrate that the reliability of the reconstruction deteriorates with the detuning and it achieves the minimum in the vicinity of the center of the Doppler-broadened f=1→ F=2 transition. This agrees with our theoretical prediction of the condition-number detuning dependence, which we calculate assuming that ζ(Δ)=V_I(Δ)/V_R(Δ).
§.§.§ Conditional number versus the number of measurements
The repetition of specific measurements offers a straightforward and versatile method for optimizing the relative weights of the observables used in the state-reconstruction procedure. This approach allows for achieving an arbitrarily small condition number, making it particularly valuable when the previous method is infeasible or when the condition number is desired to be smaller than the detuning-optimized bound (e.g., 2.25). However, it should be noted that this technique is associated with a potential drawback; the number of repetitions required to attain κ(ℂ)=1 is typically substantial, especially when dealing with initially high condition numbers, as illustrated in Fig. <ref>.
§ CONCLUSIONS
In this study, we presented the first experimental implementation of a quantum-state tomography technique, which was originally proposed in Ref. <cit.>. The technique enabled the successful reconstruction of collective quantum states of a qutrit in the f=1 ground state of room-temperature rubidium vapor with a fidelity of 0.99. To overcome experimental challenges of the reconstruction, we adapted the CYCLOPS technique, which allowed us to achieve reliable reconstruction by mitigating the problem of unknown phase delays present in measured signals. Additionally, we presented a comprehensive analysis of the technique by introducing the conditional number, which quantifies the reliability of the reconstruction. This parameter was investigated versus different experimental factors, including tuning of the probing light used for the reconstruction. We demonstrated that by appropriate tuning of the light, conditional numbers as low as 2.25 can be achieved (a conditional number of 1 refers to ideal reconstruction). We also demonstrated that further improvement of the reconstruction (lowering the conditional number) can be achieved by the repetition of specific measurements.
The successful implementation of the presented QST technique opens up avenues for measuring a range of fundamental properties of qutrits. In the future, we plan to focus on exploring different measures of nonclassicality and establishing their ordering for various classes of quantum states. We also plan further development of the technique to demonstrate quantum-process tomography, expanding the method's capabilities in the characterization of quantum operations and transformations. Finally, the ability to accurately reconstruct the quantum states of atomic ensembles allows for experimental optimization of the generation of metrologically appealing quantum states. This is the research direction that we currently pursue in our work.
§ ACKNOWLEDGEMENTS
The authors would like to thank Arash D. Fard for his help in experimental measurements. The work was supported by the National Science Centre, Poland within the SONATA BIS programme (Grant No. 2019/34/E/ST2/00440). MK would like to acknowledge support from the Excellence Initiative – Research University of the Jagiellonian University in Kraków. A.M. is supported by the Polish National Science Centre (NCN) under the Maestro Grant No. DEC-2019/34/A/ST2/00081.
[pages=-]supplementary_information.pdf
|
http://arxiv.org/abs/2307.03012v2
|
20230706142121
|
A Kerr-Newman-MOG black hole's impact on the magnetic reconnection
|
[
"Sanjar Shaymatov",
"Mirzabek Alloqulov",
"Bobomurat Ahmedov",
"Anzhong Wang"
] |
gr-qc
|
[
"gr-qc",
"astro-ph.HE",
"hep-th"
] |
[email protected]
Institute for Theoretical Physics and Cosmology, Zhejiang University of Technology, Hangzhou 310023, China
Akfa University, Milliy Bog Street 264, Tashkent 111221, Uzbekistan
Institute of Fundamental and Applied Research, National Research University TIIAME, Kori Niyoziy 39, Tashkent 100000, Uzbekistan
National University of Uzbekistan, Tashkent 100174, Uzbekistan
[email protected]
New Uzbekistan University, Mustaqillik Ave. 54, Tashkent 100007, Uzbekistan
Institute of Fundamental and Applied Research, National Research University TIIAME, Kori Niyoziy 39, Tashkent 100000, Uzbekistan
Ulugh Beg Astronomical Institute, Astronomy St. 33, Tashkent 100052, Uzbekistan
[email protected]
Institute of Fundamental and Applied Research, National Research University TIIAME, Kori Niyoziy 39, Tashkent 100000, Uzbekistan
National University of Uzbekistan, Tashkent 100174, Uzbekistan
Ulugh Beg Astronomical Institute, Astronomy St. 33, Tashkent 100052, Uzbekistan
[email protected]
GCAP-CASPER, Physics Department, Baylor University, Waco, TX 76798-7316, USA
In this paper, we study the magnetic reconnection process of energy extraction from a rapidly rotating Kerr-Newman-MOG black hole by investigating the combined effect of black hole charge and the MOG parameter. We explore the energy efficiency of energy extraction and power by applying the new energy extraction mechanism
proposed by Comisso and Asenjo. Since the MOG parameter α acts as an attractive gravitational charge that strengthens the black hole gravity, we show that the combined effect of the MOG parameter and the black hole charge can play an increasingly important role and accordingly lead to high energy efficiency and power for the energy extraction via magnetic reconnection. Further, we estimate the rate of energy extraction under fast magnetic reconnection by comparing the power of the magnetic reconnection and Blandford-Znajek (BZ) mechanisms. We show that the rate of energy extraction increases as a consequence of the combined effect of the black hole charge and the MOG parameter, suggesting that magnetic reconnection is significantly more efficient than the BZ mechanism. In fact, the magnetic reconnection is fueled by magnetic field energy due to the twisting of magnetic field lines around the black hole for the plasma acceleration, and the MOG parameter allows an even faster spin that can strongly change the magnetic field configuration due to the frame-dragging effect. This is how energy extraction is strongly enhanced through magnetic reconnection, thus making the energy extraction markedly more efficient for the Kerr-Newman-MOG black hole than for the Kerr black hole under the combined effect of the black hole charge and the MOG parameter.
A Kerr-Newman-MOG black hole's impact on the magnetic reconnection
Anzhong Wang
August 1, 2023
==================================================================
§ INTRODUCTION
In general relativity (GR) astrophysical black holes are formed as a consequence of the end state of evolution of massive stars via gravitational collapse and are regarded as the most intriguing and fascinating objects not only for their extreme geometric and
remarkable gravitational aspects but also for their important role in explaining highly energetic astrophysical events. In astrophysical phenomena such as active galactic nuclei (AGN) <cit.>, gamma-ray bursts <cit.> and ultraluminous x-ray binaries <cit.> an enormous amount of energy is released. It is believed that astrophysical black holes play a key role in these extremely powerful astrophysical phenomena <cit.>. Black holes can therefore be considered the most powerful energy sources in the universe and are related to astronomical observations associated with outflows from active galactic nuclei with energies of order 10^42-10^47 erg/s <cit.>. However, the tremendous amount of energy produced by these events is believed to have two likely origins: (i) it might be the gravitational potential energy of matter falling towards the black hole during the accretion phase or it might be the energy of falling matter during the gravitational collapse phase of a black hole. (ii) Secondly, it might possibly be the black hole's own energy, as predicted by general relativity in the case of a rotating black hole. As a consequence, understanding the genesis of the highly powerful astrophysical phenomena in the vicinity of a black hole has profound ramifications and is fundamentally important.
The question of whether rotational energy could be extracted from a rotating black hole was first formulated by Penrose <cit.>, and it was shown that it can be. What happens is that a particle coming from infinity splits into two parts in the ergosphere: one part falls into the black hole with negative energy with respect to infinity, while the other escapes with energy greater than that of the original particle. This is how the black hole rotational energy E_rot=(1-1/√(2))M≈ 0.29 M could be extracted via the Penrose process. Later, this process was extended to many different situations; here we give some references. The effect of the gravitomagnetic charge of the black hole on the energy that can be extracted by the Penrose process was addressed in <cit.>, and it was found that its impact increases the amount of the extracted energy. The Penrose process was also extended to spinning test particles <cit.> and to a rotating regular black hole <cit.>.
On the other hand, the magnetic field becomes increasingly important in modeling new alternative mechanisms for extracting rotational energy from rotating black holes. In this respect, Blandford and Znajek first addressed the effect of the magnetic field on the energy extraction process in the accretion disks of AGNs <cit.>, and it has since been known as the Blandford-Znajek (BZ) mechanism <cit.>. The impact of the magnetic field has been extended to a large variety of situations over the years <cit.>. Later, to bring out the impact of the magnetic field on the efficiency of energy extraction, the Penrose process was generalized to the magnetic Penrose process <cit.>. It has since been shown that the magnetic Penrose process becomes a more effective process for extracting the energy from rotating black holes through the magnetic field <cit.>.
Further, a thought experiment for these highly powerful astrophysical phenomena was proposed by Banados, Silk and West (BSW) by considering high energy particle collisions in the close vicinity of the black hole horizon <cit.>, and it was shown that high energy can be extracted through this process. In this scenario, an extremal Kerr black hole can act as a particle accelerator to arbitrarily high energies produced by the collision of two particles. Following Banados, Silk and West <cit.> there has since been an extensive body of work considering various contexts
<cit.>.
A new explanation for high energy observations has recently been proposed independently by considering the magnetic reconnection process near the horizon of a rotating black hole. It relies on the fact that the frame-dragging effect of a rotating black hole can twist the magnetic field lines, thus producing antiparallel magnetic field lines in the equatorial plane. The magnetic reconnection process occurring in the ergosphere of the black hole environment can accelerate particles, some of which attain negative energy and are absorbed by the black hole, while the other accelerated particles come out with positive energy that can be extracted from the black hole as stolen energy. This is how the rotational energy of the black hole could be driven out through the magnetic reconnection occurring continuously inside the ergosphere because of the fast black hole spin. This defines the main difference with respect to the above mentioned mechanisms for energy extraction. An analysis of this process was carried out by Koide and Arai <cit.> using slow magnetic reconnection in an attempt to extract energy from the black hole. It was then realized that this scenario would not be viable for extracting energy from the black hole. It was, however, suggested that relativistic reconnection is required as the most promising condition for energy extraction from the black hole in the form of outflow jets due to the magnetic field configuration. Later, it was also suggested that the particles accelerated by the magnetic reconnection can attain negative energy, as indicated by general-relativistic kinetic simulations, and that the energy extracted from the black hole due to these negative-energy particles would be comparable with that extracted by the BZ mechanism <cit.>. However, there existed no evaluation of the energy released through magnetic reconnection.
Recently, unlike the above mentioned scenarios, the energy extraction through the magnetic reconnection process was approached from a different perspective by Comisso and Asenjo <cit.>, considering a rapidly spinning Kerr black hole. To that end, a novel mechanism was first proposed by Comisso and Asenjo, allowing one to compute the energy efficiency and the power for energy extraction via the magnetic reconnection process. It was shown that this novel mechanism can be considered an efficient mechanism for energy extraction since the efficiency and the power are affected strongly by the black hole spin through the magnetic reconnection. It is also worth noting that this mechanism is now a well-accepted mechanism for energy extraction via magnetic reconnection, accordingly referred to as the Comisso-Asenjo mechanism, which we apply for our detailed analysis in this study. The relevance of the energy extraction from the Comisso-Asenjo mechanism via magnetic reconnection has since been considered in several recent investigations for rapidly spinning black holes <cit.>.
In this paper we consider a rotating Kerr-Newman-MOG black hole immersed in an external magnetic field, as presented by the line element described in Ref. <cit.>. For this black hole spacetime geometry, we study the energy extraction from rotating Kerr-Newman-MOG black hole and analyze the impact of this geometry on the magnetic reconnection process from evaluating the energy efficiency of energy extraction and the power which are given as a function of black hole spin and MOG parameter, the location, plasma magnetization parameter, and magnetic field orientation by imposing all required conditions.
This paper is organized as follows: In Sec. <ref> we briefly describe the Kerr-Newman-MOG black hole spacetime and particle dynamics. In Sec. <ref> we explore the impact of the rotating Kerr-Newman-MOG black hole on the magnetic reconnection and further study the energy extraction by the magnetic reconnection. We further explore the power, the energy efficiency and the rate of energy extraction through the magnetic reconnection mechanism in Sec. <ref>. Finally, we end up with concluding remarks in Sec. <ref>. Throughout the manuscript we use a system of units in which G=c=1.
§ KERR-NEWMAN-MOG BLACK HOLE METRIC AND PARTICLE DYNAMICS
In Boyer-Lindquist coordinates, the background geometry of the Kerr-Newman-MOG spacetime is given by <cit.>
ds^2 = -Δ/ρ^2 [dt-asin^2 θ dϕ ]^2 +ρ^2 [ dr^2/Δ + dθ^2 ]
+ sin^2 θ/ρ^2[ (r^2+a^2)dϕ -adt ]^2 ,
where
ρ^2 = r^2 + a^2 cos^2 θ ,
Δ = r^2-2G_N(1+α)Mr + a^2 +Q^2
+ G^2_Nα (1+α) M^2 .
The MOG parameter α is a dimensionless measure of the difference between the Newtonian gravitational constant G_N and the additional gravitational constant G
α=(G-G_N)/G_N .
The ADM mass of the Kerr-MOG black hole is given by <cit.>
ℳ=(1+ α)M .
The function Δ can be re-written in terms of the ADM mass
Δ=r^2 -2ℳr+a^2+Q^2+[α/(1+α)]ℳ^2 ,
where we have set G_N=1 without loss of generality. The spatial locations of the horizons are the roots of Δ as
r_H=ℳ±√(ℳ^2/(1+α)-a^2-Q^2) .
Notice that the parameters of the Kerr-Newman-MOG space-time represent a black hole surrounded by an event horizon provided that
ℳ^2 ≥ (1+α) (a^2+Q^2) ,
where the equality corresponds to the case of an extremal black hole. If this condition is not satisfied, the black hole can no longer exist. On the other hand, there also exists another key point: the Kerr-Newman-MOG black hole can rotate with a spin greater than that of the Kerr black hole due to the effect of the MOG parameter α; see Fig. <ref>. In the top and middle panels of Fig. <ref>, we demonstrate the parameter space between the charge parameter Q and the spin parameter a of the Kerr-Newman-MOG BH for various combinations of the MOG parameter α in the ranges 0 to 0.5 and 0.5 to 1.0, respectively. It is interesting to see from Fig. <ref> that the extent of the parameter space increases as the MOG parameter increases up to α=0.5. It does, however, decrease for 0.5<α<1. Similarly, in the bottom panel of Fig. <ref> we show the parameter space between the MOG parameter α and the spin parameter a of the Kerr-Newman-MOG BH for various combinations of the black hole charge parameter Q in the range 0 to 0.95. As can be seen from Fig. <ref>, the black hole can exist in the shaded region, which is separated from the no-black-hole regions by the curves. We note that Δ= 0 always has a real root for any value of the spin a and charge Q parameters in the case of the MOG parameter α.
There is also another static surface that is estimated by the timelike Killing vector ξ^μ_(t) = ∂/∂ t, i.e., g_tt= 0 which solves to give
r_E=ℳ +√(ℳ^2/(1+α)-Q^2-a^2cos^2θ) .
Note that it is not possible for any particle to be static at a fixed point below this surface. Thus, the region existing between the surface r_E and the horizon r_H refers to the ergosphere, where a timelike particle's energy E may become negative relative to an observer at infinity. In Fig. <ref> we plot the behaviour of the ergosphere for various possible cases in the presence of black hole charge and MOG parameters.
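A short numerical illustration of the horizon and static-limit radii of Eqs. (<ref>) and (<ref>) is given below (units G_N = c = 1 and M = 1; the parameter values are only examples).
```python
import numpy as np

def horizons_and_ergosphere(a, Q, alpha, M=1.0, theta=np.pi/2):
    """Outer horizon r_H and static limit r_E of the Kerr-Newman-MOG metric,
    using the ADM mass  M_ADM = (1+alpha)*M  and units G_N = c = 1."""
    Madm = (1.0 + alpha) * M
    disc = Madm**2 / (1.0 + alpha) - a**2 - Q**2
    if disc < 0.0:
        raise ValueError("no horizon: M_ADM^2 < (1+alpha)*(a^2+Q^2)")
    r_H = Madm + np.sqrt(disc)
    r_E = Madm + np.sqrt(Madm**2 / (1.0 + alpha) - Q**2 - a**2 * np.cos(theta)**2)
    return r_H, r_E

r_H, r_E = horizons_and_ergosphere(a=0.9, Q=0.1, alpha=0.2)
print(r_H, r_E, r_E - r_H)   # width of the ergoregion in the equatorial plane
```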
Vector potential of the electromagnetic field around the Kerr-MOG black hole has the form as <cit.>
A^μ = [-Qr/r^2-2f+aB(1+f_2 sin^2θ/r^2)]ξ^μ_t
+ [B/2(1+2f_2/r^2)-Qa/r(r^2-2f)]ξ^μ_ϕ,
A^μ = [Qr/r^2-2f-aB(1+f_2 sin^2θ/r^2)]ξ^μ_t
+ [-B/2(1+2f_2/r^2)+Qa/r(r^2-2f)]ξ^μ_ϕ,
where we have defined f=f_1 r+f_2. Note that f_1 and f_2 are defined by
f_1 = (1+α)M ,
f_2 = -(1+α)(α M^2+Q^2)/2 .
Let us then consider the motion of a particle with rest mass m and charge q around the Kerr-Newman-MOG black hole immersed in an external magnetic field. Note that particle motion on a circular orbit around the black hole is also an important point for the magnetic reconnection process, which we further explore in this paper. The magnetic field can affect the charged particle motion drastically due to the strong Lorentz force even if the field itself is relatively small <cit.>. From the asymptotic properties of the black hole, the magnetic field is assumed to be uniform and oriented along the axis of symmetry of the black hole. The Hamiltonian that completely describes the system can be written as follows <cit.>:
H ≡1/2g^αβ(π_α - q A_α)(π_β - q A_β) ,
where we have defined π_α that is referred to as the canonical momentum of a charged test particle. Here, A_α=A_μ(r,θ) respectively refers to the four-vector potential of the electromagnetic field and is defined by Eq. (<ref>), while the Hamiltonian turns out to be constant as H=-m^2/2.
For the charged particle, its four-momentum can be written as
p^α≡dx^α/dλ = g^αβ(π_β - q A_β) ,
where λ=τ/m is an affine parameter, with τ the proper time. Let us then write Hamilton's equations of motion in terms of x^α and π_α, which are given by
dx^α/dλ = ∂ H/∂π_α
dπ_α/dλ = - ∂ H/∂ x^α .
We note that we further restrict our attention to the equatorial plane, i.e., θ=π/2. As was mentioned, the first one in the above equations is referred to as a constraint equation that defines the four-momentum of the charged particle. Thus, the equations of motion for the charged particle can be defined as
p^t = (1/r^2)[ a (π_φ+aπ_t)+((r^2+a^2)/Δ)P],
p^φ = (1/r^2)[ (π_φ+aπ_t)+(a/Δ)P] ,
p^r = ({P^2-Δ[r^2+(π_φ+aπ_t)^2]}/r^4)^1/2 ,
where we have defined P=(r^2+a^2)(-π_t)-a π_φ. For the Hamilton-Jacobi equation, the action S can be separated in such a way that it takes the form as
S= 1/2m^2λ-Et+Lφ+S_r(r)+S_θ(θ) .
Here, accordingly E ≡ -π_t and L ≡π_φ are constants of motion and correspond to the specific energy and angular momentum of the charged particle, respectively. There exists a fourth constant of motion besides the rest mass of the test particle m, which is related to the latitudinal motion. However, we further omit this constant since we focus on the motion that takes place in the equatorial plane.
The rest terms such as S_r and S_θ turn out to be functions related to r and θ, respectively. Following Eqs. (<ref>) and (<ref>), the Hamilton-Jacobi equation is given by
- [(r^2+a^2)^2/Δ-a^2sin^2θ](E+qA_t)^2
+Δ(∂ S_r/∂ r)^2
+ 2a (2ℳ-αℳ^2/[(1+α)r]-Q^2/r)(r/Δ)(E+qA_t)(L-qA_φ)
+ (∂ S_θ/∂θ)^2+
[1/sin^2θ-a^2/Δ](L-qA_φ)^2+m^2Σ=0 .
On the basis of Eq. (<ref>), the radial equation of motion for the charged particle moving on the equatorial plane (i.e. θ=π/2=const) can be defined by
1/2ṙ^2 + V_eff(r)=0,
where V_eff(r) refers to as the effective potential for radial motion and is given by general form as
V_eff(r) = -(1/2r^2){ [r^2+a^2+2A(r)a^2/r]
(ℰ+(q/m)A_t)^2
- [1-2A(r)/r](ℒ-(q/m)A_ϕ)^2-Δ
- [4A(r) a/r] (ℰ+(q/m)A_t) (ℒ-(q/m)A_ϕ) } ,
with
A(r)=ℳ-αℳ^2/[2r(1+α)]-Q^2/(2r) .
Here we have defined specific constants as ℰ=E/m, ℒ=L/m.
We then turn to the innermost stable circular orbit (ISCO) for test particles moving around the black hole, as the ISCO radius is an important key for the magnetic reconnection process that occurs in the ergoregion. To determine the ISCO, the following standard conditions must be satisfied:
V_eff(r)=0, V_eff^'(r)=0, V_eff^''(r)≥ 0 ,
where we have defined ^' as a derivative with respect to r. We note here that, since the analytic forms of the ISCO turn out to be too long and complicated for explicit display, we shall for simplicity resort to numerical evaluation for further analysis. For magnetic reconnection, one needs to consider co-rotating orbits, as it occurs in the ergosphere near the black hole horizon. In Table <ref>, we demonstrate the numerical values of the ISCO parameters for the test particle moving at the ISCO radius around the Kerr-Newman-MOG black hole for various possible cases. Here we mainly focus on the MOG parameter to understand more deeply its impact on the ISCO parameters, as seen in Table <ref>. We, therefore, consider the black hole spin and charge parameters to be small. As can be seen from Table <ref>, the ISCO radius grows as a consequence of an increase in the value of the MOG parameter. One can then deduce that this is consistent with the interpretation of
MOG parameter as an attractive gravitational charge that physically manifests to strengthen black hole gravity. Taking this remarkable aspect of MOG parameter α into consideration plays a very crucial role in modelling the magnetic reconnection process for Kerr-Newman-MOG black hole. We do therefore consider this key point to evaluate the efficiency of energy extraction, power, the phase-space region and the rate of energy extraction via magnetic reconnection for further analysis.
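The ISCO conditions above are easy to evaluate numerically. As a simplified cross-check, the sketch below treats an electrically neutral test particle (q = 0), for which the electromagnetic terms of Eq. (<ref>) drop out; it locates the ISCO as the minimum of the specific energy of co-rotating circular orbits, with the orbital frequency obtained from the radial derivatives of the equatorial metric functions. The chosen parameter values are examples only.
```python
import numpy as np

def metric_eq(r, a, Q, alpha, M=1.0):
    """Equatorial (theta = pi/2) metric functions g_tt, g_tphi, g_phiphi of
    the Kerr-Newman-MOG spacetime; ADM mass (1+alpha)*M, units G_N = c = 1."""
    Madm = (1.0 + alpha) * M
    Delta = r**2 - 2.0*Madm*r + a**2 + Q**2 + alpha*Madm**2/(1.0 + alpha)
    g_tt = -(Delta - a**2)/r**2
    g_tp = -a*(r**2 + a**2 - Delta)/r**2
    g_pp = ((r**2 + a**2)**2 - Delta*a**2)/r**2
    return g_tt, g_tp, g_pp

def energy_circular(r, a, Q, alpha, dr=1e-6):
    """Specific energy E(r) of a co-rotating circular equatorial geodesic."""
    g_tt, g_tp, g_pp = metric_eq(r, a, Q, alpha)
    d = lambda i: (metric_eq(r + dr, a, Q, alpha)[i]
                   - metric_eq(r - dr, a, Q, alpha)[i]) / (2.0*dr)
    dg_tt, dg_tp, dg_pp = d(0), d(1), d(2)
    Omega = (-dg_tp + np.sqrt(dg_tp**2 - dg_tt*dg_pp))/dg_pp  # co-rotating
    norm2 = -(g_tt + 2.0*g_tp*Omega + g_pp*Omega**2)
    return -(g_tt + g_tp*Omega)/np.sqrt(norm2) if norm2 > 0.0 else np.nan

def isco_radius(a, Q, alpha, r_min, r_max=15.0, n=30000):
    """ISCO as the minimum of E(r): dE/dr = 0 marks marginal stability."""
    rs = np.linspace(r_min, r_max, n)
    Es = np.array([energy_circular(r, a, Q, alpha) for r in rs])
    return rs[np.nanargmin(Es)]

# co-rotating ISCO for small spin and charge, scanned outside the horizon
print(isco_radius(a=0.3, Q=0.1, alpha=0.2, r_min=2.5))
```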
§ ENERGY EXTRACTION THROUGH THE MAGNETIC RECONNECTION PROCESS
Black holes are among the most fascinating objects owing to their rich energetic astrophysical phenomena, e.g., the outflows coming out from active galactic nuclei with energies of order E≈ 10^42-10^47 erg/s observed with the help of x-ray, γ-ray and Very Long Baseline Interferometry (VLBI) observations <cit.>. These outflows in the form of winds and jets are related to the motion of charged particles in the accretion disk around the black hole. To explore this energetic phenomenon, we analyse the magnetic reconnection process by applying the recently proposed Comisso-Asenjo mechanism <cit.>. For that, we first consider the zero angular momentum observer (ZAMO) frame and then examine the plasma energy density. For the above mentioned ZAMO frame the line element is written as follows:
ds^2=-dt̂^2+∑_i=1^3(dx̂^̂î)^2=g_μνdx^μdx^ν ,
where we define dt̂ and dx̂^̂î as follows:
dt̂=α dt dx̂^̂î=√(g_ii)dx^i-αβ^ϕdt ,
with α and β^i=(0,0,β^ϕ) that respectively refer to the laps function and the shift vector and are written as
α=√(-g_tt+g_ϕ t^2/g_ϕϕ)β^ϕ=√(g_ϕϕ)ω^ϕ/α .
Note here that ω^ϕ=-g_ϕ t /g_ϕϕ is referred to as the frame dragging velocity of zero angular momentum particle. For the ZAMO frame the vector
ψ has the following covariant and contravariant components
ψ̂_̂0̂=ψ_0/α+∑_i=1^3 β^i/g_iiψ_i ψ̂_̂î=ψ_i/√(g_ii) ,
ψ̂^̂0̂=αψ^0 ψ̂^̂î=√(g_ii)ψ^i-αβ^i ψ^0 .
It is well known that inside the ergosphere the energy ϵ = -p·∂/∂ t may become negative; i.e., a timelike particle can have negative energy there. This plays a key role in energy extraction by the magnetic reconnection process, which can be evaluated in the region between the static surface r_st and the horizon r_+. In the case of the Kerr-Newman-MOG black hole the ergosphere is illustrated in Fig. <ref>. For this process, the key property is to accelerate the plasma to high energy. As a consequence of the magnetic reconnection process, part of the accelerated plasma comes out with arbitrarily high energy. Otherwise, it is decelerated and then attains negative energy so as to be absorbed by the black hole. To be more precise, we define the energy-momentum tensor in the one-fluid approximation of the plasma, which is given by
T^μν=pg^μν+𝑤U^μU^ν+F^μ_δF^νδ-1/4g^μνF^ρδF_ρδ .
Note that in the above equation p and 𝑤, respectively, refer to the proper plasma pressure and enthalpy density, while U^μ and F^μν to four-velocity and the tensor of the electromagnetic field. Here we note that the enthalpy density is given by 𝑤=e_int+p, where the thermal energy density is defined by <cit.>
e_int=p/(Γ-1)+ρ c^2 ,
with Γ and ρ which respectively refer to the adiabatic index and the proper mass density. By imposing this condition we further define the relativistic hot plasma with the equation of state.
The energy density at infinity can be given by the following relation
e^∞=-α g_μ 0T^μ 0 .
Taking this into account, one can write the energy density at infinity as follows:
e^∞=αê+αβ^ϕP̂^ϕ ,
where ê and P̂^ϕ, respectively, define the total energy density and azimuthal component of the momentum density, and they are given by
ê=𝑤γ̂^2-p+(B̂^2+Ê^2)/2 ,
P̂^ϕ=𝑤γ̂^2v̂^ϕ+(B̂×Ê)^ϕ ,
where v̂^ϕ represents the azimuthal component of the plasma velocity at the ZAMO. The Lorentz factor γ̂ and electric Ê^i and magnetic B̂^i field components that appear in the above equation are defined by
γ̂=Û^0=[1-∑_i=1^3(v̂^i)^2]^-1/2 ,
B̂^i=ϵ^ijkF̂_jk/2 Ê^i=η^ijF̂_j0=F̂_i0 .
It is worth noting here that the energy density at infinity e^∞ consists of two parts, i.e., the hydrodynamic and electromagnetic parts with e^∞=e_hyd^∞+e_em^∞ which can be written separately as follows:
e_hyd^∞=αê_hyd+αβ^ϕ𝑤γ̂^2v̂^ϕ ,
e_em^∞=αê_em+αβ^ϕ(B̂×Ê)_ϕ ,
where ê_hyd=𝑤γ̂^2-p and ê_em=(B̂^2+Ê^2)/2, respectively, refer to the energy densities of the hydrodynamic and electromagnetic fields at the ZAMO. We then need to evaluate the energy density at infinity. For that, we assume that the contribution of the electromagnetic field can be expelled out since its effect is very small in contrast to the hydrodynamic energy density at infinity. However, we note that most part of the magnetic field energy can be transferred to the plasma kinetic energy in the magnetic reconnection process. Taking all together we shall further assume that we apply for incompressible and adiabatic plasma for the approximation. In doing so, the energy density at the infinity is then defined by the following form as <cit.>
e^∞=e^∞_hyd=α𝑤γ̂(1+β^ϕv̂^ϕ)-α p/γ̂ .
Next, one needs to treat the magnetic reconnection process as a localized one so as to describe it on a small scale. For that, the local rest frame x'^μ=(x'^0,x'^1,x'^2,x'^3) comes into play because of
the bulk plasma that orbits at the equatorial plane around the black hole with the Keplerian frequency/angular velocity Ω_K, which can be defined by
Ω_K=dϕ/dt=[-g_tϕ,r+√(g^2_tϕ,r-g_tt,rg_ϕϕ,r)]/g_ϕϕ,r .
The Keplerian frequency for Kerr-Newman-MOG black hole then reads as follows:
Ω_K = [a (Q^2-(α +1) (r-α )) + r^2 √((α +1) (r-α )-Q^2)] / [r^4+a^2 (Q^2-(α +1) (r-α ))] .
It is worth noting that the direction of x'^μ is chosen so that x'^1 and x'^3 must be parallel to the radial x^1=r and the azimuthal x^3=ϕ directions, respectively. Following to Eq. (<ref>) we further consider the co-rotating Keplerian frequency at the ZAMO, and it is given by
v̂_K = dx̂^ϕ/dx̂^t
= (√(g_ϕϕ) dx^ϕ/dλ - αβ^ϕ dx^t/dλ)/(α dx^t/dλ)
= (√(g_ϕϕ)/α)Ω_K - β^ϕ .
One can then obtain the forms of v̂_K and the Lorentz factor γ̂_K=1/√(1-v̂_K^2) by imposing the Keplerian frequency Ω_K given by Eq. (<ref>).
The rotational energy of a black hole that can be extracted by the magnetic reconnection process strongly depends upon the plasma dynamics and the electromagnetic field properties. As was mentioned, we assume a one-fluid plasma that obeys the adiabatic and incompressible plasma approximation, so that the hydrodynamic energy per enthalpy at infinity reads as follows <cit.>:
ϵ^∞_± = αγ̂_K [(1+β^ϕv̂_K)√(1+σ_0) ± cosξ(β^ϕ+v̂_K)√(σ_0)
- (√(1+σ_0)∓cosξ v̂_K√(σ_0))/(4γ̂^2(1+σ_0-cos^2ξ v̂_K^2σ_0))] ,
with σ_0=B_0^2/𝑤 and ξ referring, respectively, to the plasma magnetization and the orientation angle between the magnetic field and the outflow plasma directions at the equatorial plane. To extract the black hole rotational energy via the magnetic reconnection process, the hydrodynamic energy must be positive for the accelerated plasma. In contrast, the energy is negative when the plasma is decelerated near the black hole horizon, similar to what is observed for the Penrose process. Thus, the energy that can be extracted by the magnetic reconnection process should be positive and always much greater than the thermal energy and the rest mass energy of the plasma. We shall further assume that the plasma is a relativistic hot plasma satisfying the equation of state 𝑤 = 4p <cit.>. With this in view, the accelerated and decelerated energies of the plasma, which can be measured at infinity, read as follows:
ϵ_-^∞<0 and Δϵ_+^∞>0 ,
where Δϵ_+^∞ is given by
Δϵ_+^∞=ϵ_+^∞-(1-(Γ/(Γ-1))(p/𝑤))>0 .
Taking the polytropic index Γ=4/3 yields Δϵ_+^∞=ϵ_+^∞>0 for the relativistic hot plasma.
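Indeed, with Γ=4/3 and the hot-plasma relation 𝑤=4p the subtracted term equals unity,
(Γ/(Γ-1))(p/𝑤) = 4 × (1/4) = 1 ,
so Δϵ_+^∞ reduces to ϵ_+^∞ and the sign of ϵ_+^∞ alone decides whether the escaping plasma gains energy.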
We now analyze the accelerated ϵ_+^∞ and decelerated ϵ_-^∞ energies of the plasma to explore the remarkable aspects of energy extraction through the magnetic reconnection process for the Kerr-Newman-MOG black hole. However, the analytic forms of the ϵ_+^∞ and ϵ_-^∞ energies turn out to be very long and complicated expressions to display explicitly. We therefore resort to numerical evaluation of these plasma energies. In Fig. <ref>, we show the energies ϵ_+^∞ and ϵ_-^∞ as functions of the plasma magnetization parameter for various possible cases. The left panel of Fig. <ref> shows the impact of the black hole spin parameter a on the magnetization profile of the accelerated and decelerated energies for Q=0 and α=0, while the right panel shows the impact of the MOG parameter α for fixed a=0.99 and Q=0.1. As shown in Fig. <ref>, the accelerated and decelerated energies per enthalpy increase as the MOG parameter is taken into account for spin parameter a=0.99, thus compensating the impact of the spin parameter a on these energies, i.e., ϵ_+^∞ and ϵ_-^∞. This happens because the effective spin parameter becomes a>1 as a consequence of the presence of α. Unlike the Kerr black hole, the maximum energy is not restricted by a=1 in the Kerr-Newman-MOG case, thus leading to arbitrarily high energy that can be extracted by the magnetic reconnection process as the value of the MOG parameter α increases. We further explore another important parameter region, (a, r/M), which we refer to as the phase-space region for energy extraction through the magnetic reconnection process. In the phase-space region, the condition ϵ_+^∞>0 and ϵ_-^∞<0 is always satisfied, so that energy can be extracted from the black hole, i.e., η=ϵ_+^∞/(ϵ_+^∞+ϵ_-^∞)>1. Fig. <ref> reflects the role of the magnetization parameter σ_0 (top row, left and right panels) and the orientation angle ξ (bottom row, left and right panels) on the regions of the phase space (a, r/M). From the top-row panels of Fig. <ref>, it is clearly seen that the phase-space region in which energy can be extracted via magnetic reconnection shifts towards larger locations r and smaller values of the spin parameter a as the magnetization parameter σ_0 of the plasma increases for fixed orientation angle ξ=π/12. It turns out that large values of σ_0 give rise to an expanded phase-space region which, in turn, leads to arbitrarily high energy extracted by the magnetic reconnection process. Similarly, the phase-space region is also strongly influenced by the orientation angle ξ of the reconnecting magnetic field, as seen in the bottom panels of Fig. <ref>. That is, the phase-space region for energy extraction can extend towards larger r and lower a by decreasing the orientation angle ξ for fixed magnetization parameter σ_0=100. This happens because a small orientation angle makes the azimuthal component of the outflow plasma velocity dominate over the remaining components, thus giving the main contribution to the energy extraction process. The most important point to note here is that the combined effect of the black hole charge and the MOG parameter can extend the phase space for the energy extraction condition, i.e., Δϵ^∞_+>0 (grey area), as seen in both top and bottom right panels of Fig. <ref>.
Next, we investigate the power and energy efficiency via the magnetic reconnection process for the Kerr-Newman-MOG black hole.
§ POWER AND MAGNETIC RECONNECTION EFFICIENCY
In this section, following the Comisso-Asenjo mechanism <cit.>, originally suggested for the Kerr black hole, we consider the power and the energy efficiency of the magnetic reconnection process and resort to the numerical evaluation of these quantities for the Kerr-Newman-MOG black hole. It is worth noting that the energy efficiency and the power are strongly related to the negative energy of the decelerated plasma absorbed by the black hole per unit time. We first turn to the power of the energy extraction: the power extracted from the black hole by the escaping plasma can be defined by <cit.>
P_MR=-ϵ_-^∞𝑤_0 A_in U_in ,
where U_in defines the reconnection regime, i.e., U_in= O(10^-1) refers to the collisionless regime, while U_in= O(10^-2) to the collisional regime. Also, A_in given in the above expression defines the cross-sectional area of the inflowing plasma and is evaluated as A_in=(r^2_E-r^2_ph) for rotating black holes. Note that r_E is given by Eq. (<ref>), while we resort to a numerical estimation of r_ph, which tends to r_H in the extremal black hole case.
In Fig. <ref>, we show the radial profile of the power, P_e/𝑤_0, that can be extracted from the black hole by the outflowing plasma generated by the magnetic reconnection. As can be seen from Fig. <ref>, the left panel in the top row reflects the role of the magnetization parameter σ_0 in the radial profile of the power while keeping the black hole spin a, charge Q and MOG parameter α fixed and the orientation angle ξ=π/12, while the right panel reflects the role of ξ in the case in which the parameters a, Q, α and σ_0=10^4 are fixed. Similarly, the panels in the middle row of Fig. <ref> show similar behavior of the radial profile of the power as a consequence of the presence of the MOG parameter α, which leads to a higher spin a>1 relative to the Kerr black hole case. As seen in the top row of Fig. <ref>, the height of the power extracted from the black hole increases and its shape shifts towards higher values as the magnetization parameter σ_0 and the orientation angle ξ increase. We also show that the inclusion of the MOG parameter α makes the power extraction more effective, shifting its height towards larger values of the power, as seen in the middle row of Fig. <ref>. To demonstrate that the inclusion of the MOG parameter α leads to arbitrarily high power extracted by the magnetic reconnection, we show the combined effect of the black hole charge and the MOG parameter in the bottom panel of Fig. <ref>. One can then infer that the combination of the black hole charge and the MOG parameter α leads to arbitrarily high power through the magnetic reconnection.
Let us then turn to another key aspect of energy extraction via magnetic reconnection. To understand how effective the magnetic reconnection model is, one needs to analyze the energy release that can be extracted within the model considered here. Therefore, we need to evaluate the total amount of energy released by the Kerr-Newman-MOG black hole. We first emphasize that the magnetic field energy, after being redistributed during the magnetic reconnection, consists of two parts, i.e., the decelerated plasma energy and the accelerated plasma energy. The former, having negative energy, is absorbed by the black hole, while the latter, with arbitrarily high positive energy, escapes to larger distances r from the black hole. With this in view, the energy efficiency of the plasma through the magnetic reconnection can generally be
defined by
η=ϵ_+^∞/ϵ_+^∞+ϵ_-^∞ ,
where ϵ_+^∞ and ϵ_-^∞ respectively define the accelerated and decelerated plasma
energies, as mentioned above. The key point to note here is that, for the plasma energy to be released from the black hole, the condition η=ϵ_+^∞/(ϵ_+^∞+ϵ_-^∞)>1 must always be satisfied for the energy efficiency. We then analyze the energy efficiency of the extraction via the magnetic reconnection.
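Note that, as long as ϵ_+^∞+ϵ_-^∞ remains positive, this condition is equivalent to the decelerated plasma carrying negative energy at infinity,
η = ϵ_+^∞/(ϵ_+^∞+ϵ_-^∞) > 1 ⟺ ϵ_-^∞ < 0 .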
In Fig. <ref>, we show the radial profile of the energy efficiency for the outflowing plasma generated by the magnetic reconnection for various combinations of the black hole spin parameter a and the MOG parameter α. As is easily seen from the left panel of Fig. <ref>, the energy efficiency grows as a function of r/M in the close vicinity of the black hole as the spin parameter a increases for fixed black hole charge Q=0.14 and orientation angle ξ=π/12, thus increasing the maximum of the efficiency. We also find that the inclusion of the MOG parameter plays an increasingly important role in approaching high energy efficiency for the energy extraction via the magnetic reconnection process. As shown in the right panel of Fig. <ref>, the presence of the MOG parameter influences the energy efficiency. That is, the efficiency of energy extraction via the magnetic reconnection continues to grow and reaches its possible maximum at the horizon. The key point to note here is that the MOG parameter can be interpreted as an attractive gravitational charge that physically manifests itself by strengthening the black hole gravity, thus allowing the Kerr-Newman-MOG black hole to have a spin parameter a>1 greater than that of the Kerr black hole. It also increases the black hole horizon. The energy efficiency is therefore higher than the one for the Kerr black hole in non-extremal cases, as seen in the right panel of Fig. <ref>. One can then deduce that the attractive gravitational property of the MOG parameter and its combined effect with the black hole charge can make the efficiency of energy extraction significantly higher through the magnetic reconnection.
Finally, we compare the power of the magnetic reconnection and Blandford-Znajek mechanisms in order to estimate the rate of energy extraction under fast magnetic reconnection. To this end, we first consider the rate of energy extraction for the BZ mechanism at the horizon, which is defined by <cit.>
P_BZ=(κ/16π)Φ^2_BHΩ_H^2 [1+χ_1 Ω_H^2+χ_2Ω_H^4+𝒪(Ω_H^6)] ,
where Φ_BH and Ω_H respectively denote the magnetic flux and the angular velocity Ω_H at the horizon which is given by
Ω_H=a/(2ℳr-Q^2-(α/(1+α))ℳ^2) ,
while κ, χ_1 and χ_2 refer to numerical constants. Here we note that κ is related to the magnetic field geometry configuration <cit.>. The magnetic flux is then defined by Φ_BH=1/2∫_θ∫_ϕ|B^r|dA_θϕ, and we shall for simplicity assume Φ_BH∼ 2π B_0 r_H^2 sinξ for further analysis.
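Substituting this flux estimate into the expression for P_BZ above gives, to leading order in Ω_H,
P_BZ ≃ (κ/16π)(2π B_0 r_H^2 sinξ)^2 Ω_H^2 = (πκ/4) B_0^2 r_H^4 sin^2ξ Ω_H^2 ,
up to the correction factor in square brackets.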
Putting all this together, the ratio of the two powers P_MR/P_BZ can then be written as <cit.>
P_MR/P_BZ=-4 ϵ_-^∞𝑤_0 A_in U_in/(πκ Ω^2_H r^4_H σ_0 sin^2ξ(1+χ_1 Ω^2_H +χ_2 Ω^4_H)) ,
where we note that the values of these numerical constants have been approximated as χ_1≈ 1.38 and χ_2≈ -9.2, while κ≈ 0.044 due to the magnetic field geometry (see for example <cit.>).
In Fig. <ref> we show the behaviour of P_MR/P_BZ against the plasma magnetization for various combinations of the location r/M while keeping the black hole parameters fixed. From the left panel of Fig. <ref>, the height of the ratio of powers decreases and the curves shift to larger σ_0 as the location r/M increases. The ratio nevertheless satisfies P_MR/P_BZ>1, which makes the magnetic reconnection significantly more efficient than the BZ mechanism. Similarly to what is observed in the left panel of Fig. <ref>, the right panel reflects the impact of the combined effect of the black hole charge and the MOG parameter on the power ratio P_MR/P_BZ for different possible locations r/M. As the MOG parameter increases, the curves of the power ratio shift to smaller σ_0 for both locations, as seen in the right panel of Fig. <ref>. It can be seen that the ratio P_MR/P_BZ increases due to the combined effect of the black hole charge and the MOG parameter, thus increasing the efficiency of the magnetic reconnection and allowing more energy to be mined out from the black hole than with the BZ mechanism.
§ CONCLUSIONS
Black holes are huge reservoirs of energy and, for this reason, the enormous luminosity of AGNs is connected to the SMBHs anchored in the centres of galaxies. It is believed that the rotational energy, i.e., the reducible part of a black hole's energy predicted by GR, is the most important energy reservoir of astrophysical rotating black holes. It is therefore fundamentally important to understand more deeply the energy source of the highly powerful astrophysical phenomena occurring in the vicinity of a black hole.
Several explanations for these highly powerful astrophysical phenomena have been proposed in order to mine out the black hole rotational energy by assuming a magnetic field existing in the surrounding environment of the black hole. One of them is the Blandford-Znajek mechanism <cit.>, which extracts the energy using an electromagnetic field existing around the black hole. The BZ mechanism utilizes the twisting of magnetic field lines, caused by the frame-dragging effect around a rotating black hole, which creates a potential difference U and an electric current I arising from the discharge. As a consequence of this process, the rotational energy W∼ IU can be mined out from a spinning black hole.
Another interesting mechanism proposed to drive out the black hole rotational energy is the magnetic Penrose process <cit.>. It is also an extremely effective mechanism for extracting energy from rotating black holes under the influence of a magnetic field. This happens because the magnetic field plays a key role in generating the Wald electric charge of the Kerr black hole, which acts as an accelerator for charged particles in the ergosphere.
The magnetic reconnection mechanism has recently been proposed as one of the most promising mechanisms for extracting the rotational energy of astrophysical black holes. This mechanism, originally formulated for the standard Kerr black hole, is surprisingly efficient; magnetic reconnection is also known as the source of the enormous energetics of magnetars. It is driven by the fact that magnetic reconnection burns magnetic field energy to accelerate the plasma through a drastic reconfiguration of magnetic field lines caused by the frame-dragging effect of a rapidly spinning black hole.
In this paper, we explored the impact of the rotating Kerr-Newman-MOG black hole on the magnetic reconnection by adapting the recently developed Comisso-Asenjo mechanism <cit.> and studied the energy extraction by the magnetic reconnection process that occurs continuously inside the ergosphere due to black hole spin. We further analysed the energy efficiency of energy extraction and the power as a function of plasma magnetization, magnetic field orientation and black hole spin and MOG parameters by imposing all required conditions.
We explored the accelerated ϵ_+^∞>0 and decelerated ϵ_-^∞<0 energies of the plasma for the energy extraction through the magnetic reconnection and demonstrated that the accelerated and decelerated energies per enthalpy grow as a consequence of the combined effect of the MOG parameter and the black hole charge, thus leading to arbitrarily high energy that can be extracted by the magnetic reconnection. We also studied the phase-space region (a, r/M) satisfying the required condition for energy extraction. We found that the combined effect of the black hole charge and the MOG parameter can extend the phase space for the energy extraction condition, which leads to arbitrarily high energy driven out from the Kerr-Newman-MOG black hole via magnetic reconnection.
Further, we studied the power and the energy efficiency of a rapidly spinning Kerr-Newman-MOG black hole in order to understand how efficient the magnetic reconnection is as compared to the Kerr black hole case. We showed that the combined effect of the black hole charge Q and the MOG parameter α leads to higher power through the magnetic reconnection as compared to the Kerr black hole. We also found that the combined effect of the MOG parameter and the black hole charge plays an increasingly important role in approaching high energy efficiency for the energy extraction, i.e., the efficiency of energy extraction reaches its possible maximum due to the fact that the MOG parameter α, as an attractive gravitational charge, physically manifests itself by strengthening the black hole gravity, which can lead to a spin parameter a>1 and to high energy efficiency via the magnetic reconnection.
Also, we estimated the rate of energy extraction under fast magnetic reconnection by comparing the power of the magnetic reconnection and Blandford-Znajek mechanisms. For that, we analyzed the behaviour of P_MR/P_BZ against the plasma magnetization for various possible cases. We showed that the ratio P_MR/P_BZ increases as a consequence of the combined effect of the black hole charge and the MOG parameter. Hence, the magnetic reconnection is significantly more efficient than the BZ mechanism, and the efficiency increases as compared to the Kerr black hole case.
From the present results, one can infer that for the Kerr-Newman-MOG black hole the magnetic reconnection is significantly more efficient at extracting the rotational energy from the black hole. In fact, magnetic reconnection is fueled by magnetic field energy due to the twisting of magnetic field lines around the black hole, which accelerates the plasma. In this regard, the MOG parameter allows even faster spin, which strongly affects the reconfiguration of magnetic field lines through the frame-dragging effect. This is how the combined effect of the black hole charge and the MOG parameter makes the energy extraction considerably more efficient for the Kerr-Newman-MOG black hole as compared to the Kerr black hole.
§ ACKNOWLEDGMENTS
We warmly thank Luca Comisso for valuable comments and discussions that definitely helped to improve the accuracy and quality of the presentation of the manuscript. We also thank Pankaj Sheoran for useful discussions. This work is supported by the National Natural Science Foundation of China under Grants No. 11675143 and No. 11975203, and the National Key Research and Development Program of China under Grant No. 2020YFC2201503. M.A. and B.A. wish to acknowledge the support from Research Grant F-FA-2021-432 of the Uzbekistan Agency for Innovative Development.
|
http://arxiv.org/abs/2307.00588v1
|
20230702150940
|
ChatGPT vs SBST: A Comparative Assessment of Unit Test Suite Generation
|
[
"Yutian Tang",
"Zhijie Liu",
"Zhichao Zhou",
"Xiapu Luo"
] |
cs.SE
|
[
"cs.SE"
] |
Recent advancements in large language models (LLMs) have demonstrated exceptional success in a wide range of general domain tasks, such as question answering and following instructions. Moreover, LLMs have shown potential in various software engineering applications. In this study, we present a systematic comparison of test suites generated by the ChatGPT LLM and the state-of-the-art SBST tool EvoSuite. Our comparison is based on several critical factors, including correctness, readability, code coverage, and bug detection capability. By highlighting the strengths and weaknesses of LLMs (specifically ChatGPT) in generating unit test cases compared to EvoSuite, this work provides valuable insights into the performance of LLMs in solving software engineering problems. Overall, our findings underscore the potential of LLMs in software engineering and pave the way for further research in this area.
ChatGPT, Search-based Software Testing, Large Language Models
ChatGPT vs SBST: A Comparative Assessment of Unit Test Suite Generation
Yutian Tang,
Zhijie Liu,
Zhichao Zhou, and
Xiapu Luo
Yutian Tang is with University of Glasgow, United Kingdom. E-mail: [email protected].
Zhijie Liu is with ShanghaiTech University, Shanghai 201210, China. E-mail: [email protected].
Zhichao Zhou is with ShanghaiTech University, Shanghai 201210, China. E-mail: [email protected].
Xiapu Luo is with the Department of Computing, Hong Kong Polytechnic University, Hong Kong SAR, China. E-mail: [email protected].
Yutian Tang ([email protected]) is the corresponding author.
§ INTRODUCTION
Unit testing is a widely accepted approach to software testing that aims to validate the functionality of individual units within an application. By using unit tests, developers can detect bugs in the code during the early stages of the software development life cycle and prevent changes to the code from breaking existing functionalities, known as regression <cit.>. The primary objective of unit testing is to confirm that each unit of the software application performs as intended. This method of testing helps to improve the quality and reliability of software by identifying and resolving issues early on.
SBST. The importance of unit testing in software development and the software development life cycle cannot be overstated. To generate unit test cases, search-based software testing (SBST) <cit.> techniques are widely employed. SBST is a technique that employs search algorithms such as genetic algorithms and simulated annealing to create test cases. The objective of SBST is to utilize these kinds of algorithms to optimize the test suites, resulting in a set of test cases that provide extensive code coverage and effective detection of program defects. Compared to other testing techniques, SBST exhibits promising results in reducing the number of test cases while maintaining the same level of defect detection capability <cit.>. SBST has emerged as an effective approach to improving the quality and efficiency of software testing, providing a valuable tool for software developers to streamline the testing process.
Large Language Model and ChatGPT. Recently, large language models (LLMs) have exhibited remarkable proficiency in processing and performing everyday tasks such as machine translation, question answering, summarization, and text generation with impressive accuracy <cit.>. These models possess nearly the same capacity as humans for understanding and generating human-like text. One such example of a real-world LLM application is OpenAI's GPT-3 (Generative Pretrained Transformer 3), which has been trained on an extensive amount of text data from the internet. Its practical implementation, ChatGPT [ChatGPT: The version used in this study is GPT-3 instead of GPT-4], is widely employed in various daily activities, including text generation, language translation, question answering, and automated customer support. ChatGPT has become an essential tool for many individuals, simplifying various tasks and improving overall efficiency.
Deep-learning based Test Case Generation. Besides accomplishing daily tasks, such as text generation, language translation, and question answering, large language models have also been adopted to cope with software engineering (SE) tasks, such as code generation <cit.>, code summarization <cit.>, document and comment generation <cit.>, and more. These models can be employed to generate unit test cases for programs with the help of a large number of real-world test cases written by developers/testers. This allows for the validation of the intended functionality of individual units within the software application. The integration of LLMs in SE tasks has demonstrated their versatility and potential for improving software development processes.
Motivation. Although SBST performs well in generating unit tests, there is still a learning cost for test personnel with limited experience. As a result, it can be a barrier to embracing SBST techniques, especially for fresh testers. Applications based on large language models can accomplish the same task (i.e., generating test suites) with nearly no learning cost. However, it is still unknown how unit test suites generated by advanced artificial intelligence models and techniques compare with those generated by SBST. For example, whether the LLM-generated test cases are readable, understandable, reliable, and usable in practice. Here, in this paper, we are interested in understanding the strengths and weaknesses of test suites generated by an LLM. Specifically, we leverage the product of the state-of-the-art GPT-3 model <cit.>, ChatGPT <cit.>, as a representative of LLMs for comparison. More importantly, this paper intends to gain insights from two aspects: (1) we are keen on the knowledge we can learn from large language models to improve state-of-the-art SBST techniques, and (2) we are also interested in uncovering the potential limitations of existing large language models in generating test suites.
Our Study. To cope with the aforementioned challenges and achieve the goals, in this paper, we intend to answer the following research questions (RQ):
∙ RQ1 (Correctness): Are ChatGPT’s unit test suite suggestions correct?
∙ RQ2 (Readability): How understandable is the test suite provided by ChatGPT?
∙ RQ3 (Code Coverage): How does ChatGPT perform with SBST in terms of code coverage?
∙ RQ4 (Bug Detection): How effective are ChatGPT and SBST in generating test suites that detect bugs?
Contribution. In summary, we make the following contributions in this paper:
∙ In this paper, we conduct the first comparative assessment of LLMs and SBST in terms of generating unit test suites for programs in Java programming language;
∙ We systematically evaluate the test suites generated by ChatGPT from various aspects, including correctness, readability, code coverage, bug detection capability; and
∙ Our findings contribute to a better understanding of the potential for LLMs to improve software engineering practices, specifically in the domain of unit test generation.
§ BACKGROUND
SBST and Evosuite.
Search-based software testing (SBST) is a technique that formulates unit test generation as the optimization problem <cit.>. SBST regards code coverage as the test generation's target (e.g., branch coverage) and describes it as a fitness function to guide genetic algorithms <cit.>. The genetic algorithms evolve tests by iterating to (1) apply mutation and crossover operators to existing tests (i.e., the current generation) for new offspring tests and (2) form a new generation by selecting those with better fitness scores from the current generation and offspring. In our work, we choose the most mature SBST tool in Java, Evosuite <cit.>.
LLM and ChatGPT.
An LLM is the largest type of model in terms of parameter count, trained on enormous amounts of text data (e.g., human-like text, code, and so on) <cit.>. It is designed to process and understand input natural language text and to generate text consistent with the input, and it shows a strong ability in natural language processing (NLP) tasks such as machine translation, question answering, text generation, and so on. ChatGPT <cit.> is currently the most capable LLM aligned with human expression (via instruction tuning) <cit.>, implemented atop GPT-3. GPT-3 <cit.> is constructed on multi-layer Transformer decoders <cit.> with 175 billion parameters, using few-shot learning (i.e., multiple examples and a prompt). It shows performance similar to that of state-of-the-art fine-tuned systems in many tasks. One example of using GPT-3 is shown in Fig. <ref>. GPT-3 takes in the input text and infers the answer based on the task description, examples, and prompts in the input. To further align LLMs with users (humans), InstructGPT <cit.> utilizes additional supervised learning and reinforcement learning from human feedback to fine-tune GPT-3. ChatGPT <cit.> uses the same methods as InstructGPT and has the ability to answer follow-up questions.
For generating unit test cases, one can utilize a large language model like GPT-3. To generate new test cases given code snippets as input, the model can be fine-tuned on a dataset of code snippets and their accompanying test cases. One can also take advantage of ChatGPT's answering follow-up questions to generate more diverse test suites for given code snippets.
Using of ChatGPT. ChatGPT <cit.> can be used as follows. The software developer/tester (user) registers an account for ChatGPT. Then, users send a prompt (a text or a question) to ChatGPT. Then, ChatGPT will respond based on the information it has learned from its training data. Also, ChatGPT can be used in most software-engineering related tasks, such as, generating code, generating comments, and generating test cases. For example, as shown in Fig. <ref>, ChatGPT offers a basic user interface like a Chatbot, in which a user can ask any question in a natural language. As shown in Fig. <ref>, we ask ChatGPT how to make an HTTP request in Python, and ChatGPT shows a sample code written in Python with corresponding explanations. If a user is not satisfied with the generated responses, (s)he can ask ChatGPT to regenerate a response by clicking the “Regenerate a response” button at the bottom of the page.
§ COMPARATIVE ASSESSMENT SETUP
§.§ Data Collection
As for RQ1-3, to reduce bias in selecting subject code for generating test cases, we reuse the existing benchmark used in prior studies to evaluate the performance of EvoSuite. Here, we use the benchmark presented in DynaMOSA (a.k.a. Dynamic Many-Objective Sorting Algorithm) <cit.>. The benchmark contains 346 Java classes from 117 projects. The detailed class information can be found in <cit.> and our artifact repository (Sec.<ref>). However, based on facts reported by other works <cit.>, some projects in the SF100 dataset are obsolete and no longer maintained. Some projects cannot be built and compiled because some classes required in the DynaMOSA dataset are missing or not publicly available. As a result, we remove 38 projects and retain 79 projects with 248 Java classes. As for RQ4, we use the state-of-the-art defect database for Java-related research, Defects4J <cit.>. It contains 835 bugs from 17 open-source projects.
§.§ Using ChatGPT to Generate Unit Test Cases
With the help of ChatGPT, we are able to automatically generate unit test cases for programs. Unfortunately, there is no standard or oracle on how to use ChatGPT to automatically generate unit test cases. Therefore, we adopt the following steps to learn a reasonable practice of using ChatGPT to generate unit test cases:
∙ Step 1. Collecting existing tools that leverage LLM (e.g., ChatGPT) to automatically generate unit test cases from various sources, including Google, Google Scholar, GitHub, and technical blogs;
∙ Step 2. Analyzing the phrases and descriptions used in these tools to prompt LLMs to generate test cases. This part involves analyzing source code, reading blogs, and studying technical documents; and
∙ Step 3. Verifying the phrases and descriptions collected in Step 2 with ChatGPT to exclude invalid phrases and descriptions;
Through Steps 1-3, we obtain the following representative expressions that are able to generate unit test cases for a code segment:
∙ Expression 1: “Write a unit test for ${input}” with the code segment under test as the input;
∙ Expression 2: “Can you create unit tests using JUnit for ${input}?” with the code segment under test as the input;
∙ Expression 3: “Create a full test with test cases for the following Java code: ${input}?” with the code segment under test as the input;
Based on the above findings, we summarize our prompt as: "Write a JUnit test case to cover methods in the following code (one test case for each method): ${input}?" with the code segment under test as the input. Note that, to mimic real-world practice, we do not intend to compare and evaluate ChatGPT prompts to build a best-performing prompt. Instead, we only intend to build a reasonable prompt for ChatGPT to simulate how developers use ChatGPT in a real-world environment.
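As a concrete illustration (the class below is an invented toy example, not one of the benchmark classes), the full prompt sent to ChatGPT for a small class would read:

Write a JUnit test case to cover methods in the following code (one test case for each method):

public class Counter {
    private int value;
    public void increment() { value++; }
    public int get() { return value; }
}

ChatGPT then replies with a JUnit class containing one test method per method of Counter, which we compile and evaluate as described below.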
§.§ Other Setups for the Study
∙ Setup for EvoSuite. EvoSuite provides many parameters
(e.g., crossover probability, population size <cit.>) to run the algorithms. In this paper, to evaluate and compare the performance of EvoSuite and ChatGPT, we retain the default settings in EvoSuite. As EvoSuite leverages genetic algorithms in selecting and generating test cases, to reduce the bias introduced by randomness, we run it 30 times for each class.
∙ Long Inputs for ChatGPT.
The maximum input length for ChatGPT is 2,048 tokens, which is roughly equivalent to 340-350 words. If the input submitted is too long, ChatGPT reports an error message and gives no response. In this case, we could try to split the entire class by methods and ask ChatGPT to generate unit test cases for individual methods. However, splitting the entire class by methods to generate test cases is not a good practice, as some information about the entire class cannot be perceived by ChatGPT. As a result, it hurts the quality of the generated test cases. Here, we set the maximum length to 4,096 tokens. That is, if the length of a class is larger than 4,096 tokens, we discard it.
∙ Environment. Experiments on EvoSuite are conducted on a machine with Intel(R) Core(TM) i9-10900 CPU @ 2.80GHz and 128 GB RAM.
§ EXPERIMENT AND EVALUATION
§.§ Correctness
RQ1: Are ChatGPT’s Unit Test Suite Suggestions Correct?
Motivation. The first and foremost thing we need to examine is whether ChatGPT can correctly return the test cases for testing the program/code segment given.
Methodology. To test whether the generated test cases are correct, we evaluate them from three aspects: (1) whether ChatGPT successfully returns a test case for each input under test; (2) whether these test cases can be compiled and executed; and (3) whether these test cases contain potential bugs. Specifically, (2) can be examined with the help of the Java Virtual Machine (JVM): we compile and execute the test cases to see whether the JVM reports errors. For (3), we rely on the state-of-the-art static analyzer, SpotBugs <cit.>, to scan the test cases generated by ChatGPT and find out whether these test cases contain potential bugs or vulnerabilities. SpotBugs <cit.> is the successor of FindBugs <cit.> (an abandoned project) and is an open-source static software analyzer which can be used to capture bugs in a Java program. It supports more than 400 bug patterns and poor programming practices.
Results.
According to the Long-input setting in Sec. <ref>, we remove 41 classes and remain 207 Java classes from 75 projects.
We find that ChatGPT can successfully generate unit test cases for all 207 Java classes without reporting any errors. Among these test cases, 144 (69.6%) can be successfully compiled and executed without extra human effort. Next, we ask two undergraduate students with basic knowledge of Java programming to attempt to repair the errors with the help of the IntelliJ IDE <cit.>. For the remaining 63 test cases, 3 test cases cannot be directly fixed without background knowledge of the target program, and 60 test cases can be repaired with the help of the IDE. Specifically, the errors in the 3 test cases fall into 3 categories: a) failing to implement an interface; b) failing to instantiate an abstract class; c) trying to instantiate an inner class.
The errors in the other 60 test cases fall into 7 categories, as shown in Table. <ref>. Here, invoking undefined methods means invoking a method that is not defined in the target class. Table. <ref> shows some samples of invoking-undefined-method errors. The root cause of invoking undefined methods is that ChatGPT is only given the class under test instead of the entire project. As a result, ChatGPT has to predict the name of a callee when needed. This is especially the case when ChatGPT attempts to generate Assertions. However, the results in Table. <ref> also surprise us: even when ChatGPT fails to call the correct callees, its prediction gives a strong clue for finding the correct callee names. This is why we can fix these errors without domain knowledge of the target projects. Failing to instantiate an interface means that ChatGPT creates an instance of an interface but fails to override its methods, and incorrect types means that the types of arguments at call sites are incorrect.
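For instance, a hypothetical sketch of this error category (with invented class and method names, not an actual case from the benchmark) looks as follows:

@Test
public void testOwnerName() {
    Account account = new Account("Alice");
    // Compile error: the class actually exposes owner(), not getOwner(),
    // so the callee name predicted by ChatGPT does not resolve
    assertEquals("Alice", account.getOwner());
}

Because the predicted name is usually close to the real one, the IDE's quick-fix suggestions typically point directly to the correct member.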
To wrap up, the compilation errors made by ChatGPT are mainly due to the fact that it lacks an overview of the entire project. Thus, ChatGPT attempts to predict the callees' names, parameters, parameter types, and so forth. As a result, compilation errors are introduced.
▹ For (3), we leverage the state-of-art static analyzer, SpotBugs, to scan the test cases generated by ChatGPT. As a result, SpotBugs report 403 potential bugs from 204 test cases (3 test cases fail to compile). The overview distribution is shown in Table. <ref>. On average, each case contains 1.97 bugs.
From the bug priority level perspective, SpotBugs ranks bug priority into Scariest, Scary, Troubling, and Of Concern. The Scariest level represents bugs that are considered the most severe and potentially harmful to the overall functionality and security of the code; the Scary level represents bugs that are considered significant and could lead to issues if not fixed; the Troubling level represents bugs that are categorized as minor but could still cause issues if left unaddressed; and the Of Concern level represents bugs that are considered informational and generally pose minimal to no risk to the code's functionality or security. As shown in Table. <ref>, most bugs (85.11%) are of the Of Concern type. There are only 8 test cases (3.9%) that have Scariest-level bugs.
From the bug pattern perspective, the found bugs fall into 7 categories: (1) Bad Practice; (2) Performance; (3) Correctness; (4) Multi-thread Correctness; (5) Dodgy Code; (6) Internationalization; and (7) Experimental. The detailed descriptions of each bug pattern can be found in the official documentation <cit.>. As shown in Table. <ref>, there are 21 test cases involved either in correctness bugs or multi-thread correctness bugs. These types of bugs represent apparent coding mistakes, which normally belong to the Scariest or Scary priority level. As for the Dodgy Code pattern, which holds the largest proportion, it represents code that is confusing, anomalous, or written in a way that lends itself to errors. Example cases are dead local stores, switch fall-through, and unconfirmed casts. As for correctness/multi-thread correctness bugs, they mostly refer to the following 3 cases based on our results: null dereference, out-of-bounds array access, and unused variables.
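A minimal hypothetical sketch (with invented values, not taken from the scanned test cases) of how these correctness and dodgy-code findings typically surface in a generated test is:

@Test
public void testReportTitle() {
    int[] totals = new int[2];
    int unusedCount = 42;                               // dead local store / unused variable
    String title = System.getProperty("report.title");  // may return null
    assertEquals(0, totals[2]);                         // out-of-bounds array access
    assertEquals(5, title.length());                    // possible null dereference
}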
In summary, from the bug priority levels and bug patterns, we can conclude that most (61.2%) ChatGPT-generated test cases are bug-free. Only 20 (9.8%) test cases are from the Scariest and Scary levels.
[boxrule=1pt,boxsep=1pt,left=2pt,right=2pt,top=2pt,bottom=2pt,title=Answer to RQ1: Correctness]
∙ Of the 207 Java test cases generated, 69.6% were compiled and executed without human intervention. However, 3 test cases were unfixable without understanding the target program and 60 could be fixed with an IDE.
∙ After analyzing the bug priority levels and bug patterns of ChatGPT-generated test cases, it can be inferred that a majority of these cases, specifically 61.2%, are free from any bugs. However, a small proportion of test cases, comprising only 9.8%, have been categorized under the Scariest and Scary levels, indicating the presence of severe issues.
§.§ Readability
RQ2: How Understandable is the Test Suite provided by ChatGPT?
Motivation. Analyzing the readability of ChatGPT-generated code is to make sure that human developers can easily maintain, comprehend, and modify it. This is crucial when ChatGPT-generated code will be maintained and changed over time by other developers or when it will be merged into already-existing codebases.
Methodology. For this RQ, we set up two sub-tasks: (1) code style checking; and (2) code understandability.
▹ To check code styles of generated test cases, we rely on the state-of-art software quality tool which supports Java: Checkstyle <cit.>, which is a development tool to check whether Java code adheres to a coding standard. It automates the process of checking Java code. Here, we leverage two standards (i.e., Sun Code Conventions <cit.>, Google Java Style <cit.>) with Checkstyle to check whether the ChatGPT generated test suite adheres to these standards.
▹ Dantas et al. <cit.> proposed cognitive complexity and cyclomatic complexity metrics for measuring the understandability of a code snippet. Cyclomatic complexity measures program complexity by counting independent paths in source code. It indicates code size, structure, and complexity, and helps find error-prone areas. Cognitive complexity is a metric that evaluates code complexity from a human perspective. It considers factors like code structure, naming, and indentation to determine how hard code is to understand. It helps developers gauge maintainability and modification difficulty and identifies complex or confusing code parts. Cyclomatic and cognitive complexity can be measured with the PMD IntelliJ plugin <cit.>. The details can be found on the project repository (Sec. <ref>).
Results. According to the Long-input setting in Sec. <ref>, we remove 41 classes and remain 204 Java classes from 75 projects.
∙ Code Style Checking Results.
▹ Checkstyle-Google: Fig. <ref> shows the boxplot of Google Codestyle violations for each class. It shows that the dataset has several outliers on the higher side, with a median value of approximately 70. The interquartile range (IQR) falls between around 30 to 175, indicating that most of the data lie within this range. However, the data is highly skewed to the right, with a few extreme data points on the higher side, indicating that the distribution is not normal. The minimum value is 4, and the maximum value is 1260, which shows a wide range of values in the dataset.
Next, the radar plot in Fig. <ref> breaks down the violation issues by type to display the details. As depicted in Fig. <ref>, we can conclude that:
∙ Indentation is the most common code style violation, indicating that ChatGPT may need to work on consistently formatting its code to improve readability and maintainability;
∙ FileTabCharacter and CustomImportOrder also appear to be frequent violations, which highlights the importance of proper configuration and consistency in code structure; and
∙ Violations related to code legibility and ease of reading, such as LineLength and AvoidStarImport should not be ignored to maintain a high standard of code quality.
▹ Checkstyle-SUN: Fig. <ref> shows the boxplot. The median value of the data is around 28, with 25% of the data falling below 15 and 75% falling below 55. There are several values above the upper quartile, indicating potential outliers or extreme values. The minimum value in the data is 3 and the maximum is 297. The IQR for the dataset is 40, indicating that most of the values in the dataset fall within this range.
Next, the radar plot in Fig. <ref> breaks down the violation issues by type to display the details. As depicted in Fig. <ref>, it appears that the two most common types of coding issues are MissingJavadocMethod and MagicNumber, with 2742 and 2498 occurrences respectively. The MissingJavadocMethod issue suggests that more documentation and explanations are required in ChatGPT-generated tests. Furthermore, magic numbers in the test cases generated by ChatGPT are mainly used in the Assertions. Additionally, the figure shows that FinalParameters, RegexpSingleline, and AvoidStarImport also occur frequently, indicating that attention should be paid to these areas as well. Some of the less frequent issues, such as HiddenField and UnusedImports, may be less urgent but are still worth addressing to improve the overall code quality of ChatGPT-generated tests.
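As a hypothetical illustration (Circle is an invented class), the first variant below triggers both MissingJavadocMethod and MagicNumber, while the second variant satisfies the SUN checks:

@Test
public void testArea() {
    assertEquals(78.54, new Circle(5.0).area(), 0.01);  // magic numbers, no Javadoc
}

private static final double RADIUS = 5.0;
private static final double EXPECTED_AREA = 78.54;
private static final double TOLERANCE = 0.01;

/** Verifies the area computation for a circle of radius five. */
@Test
public void testAreaDocumented() {
    assertEquals(EXPECTED_AREA, new Circle(RADIUS).area(), TOLERANCE);
}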
In summary, as an AI language model, ChatGPT may not have a specific code style that it adheres to when generating test cases. However, the code style of the test cases can be influenced by the parameters and rules set for the generation process or the input that is given to the model. It also suggests that programmers should pay attention to the code style when using test cases generated by ChatGPT.
∙ Code Understanding. The default cyclomatic and cognitive complexity thresholds in PMD are 10 and 15, which means that if the cyclomatic and cognitive complexities of a class/method are lower than these values, the system does not report an issue. Thus, we build a series of customized rules to measure complexity. The rule sets can be downloaded from our online repository. Note that the complexity is measured on a per-method basis.
▹ Cognitive Complexity: Based on the technical report from SonarSource <cit.>, Cognitive Complexity can be categorized into four categories: low (<5 cognitive complexity), moderate (6-10), high (11-20), and very high complexity (21+). As the results are shown in Table. <ref>, all methods are with low complexity.
▹ Cyclomatic Complexity: Based on the official documentation from PMD <cit.>, cyclomatic complexity can be categorized into four categories: low (1-4 cyclomatic complexity), moderate (5-7), high (8-10), and very high complexity (11+). As the results in Table. <ref> show, there are 3300 methods from 204 classes with low complexity and 2 methods from 2 classes with moderate complexity.
Therefore, based on the aforementioned results, we can conclude that the ChatGPT-generated test cases are overwhelmingly easy to follow and in low complexity.
[boxrule=1pt,boxsep=1pt,left=2pt,right=2pt,top=2pt,bottom=2pt,title=Answer to RQ2: Readability]
∙ Code Style-Google Rule The median value is approximately 70 violations per class. The interquartile range (IQR) falls between around 30 and 175, indicating that most of the data lie within this range. Furthermore, Indentation is the most common code style violation;
∙ Code Style-SUN Rule The median value of the data is around 28 (violations), with 25% of the data falling below 15 and 75% falling below 55. The two most common types of coding issues are MissingJavadocMethod and MagicNumber, with 2742 and 2498 occurrences respectively; and
∙ Code Understanding From the cognitive complexity perspective, all methods are in low complexity. From the cyclomatic complexity perspective, almost all (3300 out of 3302) methods are in low complexity and the other 2 methods are in moderate complexity. Thus, the ChatGPT-generated test cases are overwhelmingly easy to follow and with low complexity.
§.§ Code Coverage
RQ3: How does ChatGPT perform with SBST in terms of code coverage?
Motivation. While low coverage implies that certain portions of the code have not been checked, high coverage shows that the produced tests have thoroughly evaluated the code. Comparing the code coverage between the test suite generated by ChatGPT and SBST allow us to evaluate and assess the ChatGPT-generated test suite.
Methodology. JaCoCo <cit.> measures instruction and branch coverage. The instruction coverage relates to Java bytecode instructions and is thus analogous to statement coverage on source code. We use only instruction coverage (i.e., statement coverage (SC)) to evaluate code coverage, as JaCoCo's definition of branch coverage counts only the branching of conditional statements, not edges in the control flow graph.
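To make the metric concrete, consider a hypothetical method with a single conditional; a test that exercises only one path still executes most bytecode instructions, so JaCoCo reports partial instruction (statement) coverage:

static int clamp(int value, int max) {
    if (value > max) {
        return max;       // not reached by the test below
    }
    return value;
}

@Test
public void testClampBelowMax() {
    // Executes the comparison and the second return only,
    // so instruction coverage stays below 100%
    assertEquals(3, clamp(3, 10));
}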
Results. According to the Long-input setting in Sec. <ref>, we remove 41 classes and remain 207 Java classes from 75 projects.
▹Statement Coverage (SC) Comparison. As we run 30 times for EvoSuite, we compute the maximum, minimum, average, and average standard deviation. Recall the result in RQ1, for the 3 ChatGPT-generated test cases, which failed to be fixed without the background knowledge, we regard their code coverage as 0 [Different from 204 test cases in other RQs, we have 207 test cases considered in this RQ.].
As shown in Table <ref> and <ref>, for Evosuite, on average, the maximum SC can reach 77.4% for all projects; the minimum SC can reach 70.6% for all projects; and the average SC can reach 74.2% for all projects. In contrast, for ChatGPT, on average, the average SC can reach 55.4% for all projects. In general, Evosuite outperforms ChatGPT 19.1% in regards to SC. Additionally, ChatGPT outperforms Evosuite in 10 out of 75 (13.33%) projects, which are highlighted in Table. <ref> and <ref>. From the class perspective, ChatGPT outperforms EvoSuite in 37 (17.87%) out of 207 classes.
Furthermore, by investigating the 37 cases in which ChatGPT outperforms EvoSuite, we find that ChatGPT is highly adept at generating test cases for the following reasons:
1. ChatGPT can generate different String objects/integer/double values to use (e.g., comparison) with high diversity compared to Evosuite (Ref: guava::Objects, math::SimplexTableu);
2. ChatGPT can generate
an instance of Font for FontChooser, which is not applicable for Evosuite (Ref: 71_film2::FontChooserDialog);
3. ChatGPT can generate more reasonable and useable UI operations (i.e., ActionEvents) for testing UIs compared to Evosuite (Ref: 72_bcry::battlecryGUI);
4. ChatGPT can generate test cases or instances based on the existing information from the classes under test (Ref: 45_lotus::Phase). Fig. <ref> shows a code segment from 45-lotus::Phase.java. This code segment also suggests that some instances (e.g., UpkeepPhase(), DrawPhase(), Main1Phase()) are compatible with the type of Game.currentPhase. Such information can be correctly captured by ChatGPT and used to generate diverse Phase instances. As a result, it can reach a higher coverage than EvoSuite;
5. ChatGPT can generate more complex call chains for testing, based on the semantic information collected from the classes under test, compared to EvoSuite (Ref: guava::Monitor). For example, for the code segment in Fig. <ref>, ChatGPT can generate a more complex call chain rather than invoking a single method once. More importantly, its call chain is logically correct: the method enter must be invoked before leave (a sketch in this spirit is shown after this list). This benefits from the LLM's ability to perceive semantic context from the code and identifiers.
6. ChatGPT can generate test data that is suitable for the target regarding the semantic context. For example, the input parameter for invoking the method setCountry (Ref: 21_geo-google::GeoStatusCode) can be any String. However, a real country name (e.g., United States) can be more suitable for testing the method setCountry compared to a random String.
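For example, a test in the spirit of the ones generated for guava::Monitor (a hedged reconstruction, not the verbatim ChatGPT output) chains the calls in the only logically valid order:

import static org.junit.Assert.assertTrue;
import com.google.common.util.concurrent.Monitor;
import org.junit.Test;

public class MonitorTest {
    @Test
    public void testEnterAndLeave() {
        Monitor monitor = new Monitor();
        monitor.enter();                  // must precede leave()
        try {
            assertTrue(monitor.isOccupied());
        } finally {
            monitor.leave();
        }
    }
}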
Moreover, as the code complexity increases, so does the search space for identifying appropriate test cases, leading to longer execution times and greater computational expenses for SBST techniques. Consequently, this can pose a significant challenge in uncovering effective test cases that can ensure optimal code coverage and expose any defects.
Following previous research works <cit.>, we adopt the Vargha-Delaney Â_ab measure to evaluate whether a particular approach (a) outperforms another (b). According to Vargha and Delaney <cit.>, negligible, small, medium, and large differences are indicated by Â_ab over 0.56, 0.64, 0.71, and 0.8, respectively.
▹All Classes Comparison. As shown in Table. <ref>, 193 test cases fall into the large group and 14 test cases fall into the negligible group. This indicates that EvoSuite is overwhelmingly better than ChatGPT in reaching higher code coverage for most cases. The overall Vargha-Delaney measure for all classes is 0.71 (medium).
▹Small/Big Classes Comparison. Here, small classes are defined as classes with less than 50 branches. Classes with more than 50 branches are considered as big classes.
▹Big Classes Comparison. Table. <ref> shows the comparison for big classes. Here, 121 test cases fall into the large group and 5 test cases fall into the negligible group. This indicates that EvoSuite is overwhelmingly better than ChatGPT in reaching higher code coverage for big classes. The overall Vargha-Delaney measure for big classes is 0.764 (large).
▹Small Classes Comparison. Table. <ref> shows the comparison for small classes. Here, 70 test cases fall into the large group and 11 test cases fall into the negligible group. This indicates that EvoSuite is overwhelmingly better than ChatGPT in reaching higher code coverage for small classes as well. The overall Vargha-Delaney measure for small classes is 0.63 (small).
Unfortunately, we fail to see ChatGPT outperform EvoSuite for either big or small classes. This indicates that, for both big and small classes, developers are advised to turn to EvoSuite in order to obtain higher code coverage. The potential causes may be diverse and varied. Some possible reasons are: (1) incomplete specifications: ChatGPT is only given the classes under test instead of the entire project; thus, without the information from the entire project, it can be hard for ChatGPT to generate more valuable test cases; (2) lack of feedback mechanisms: unlike EvoSuite, which can learn from feedback (i.e., coverage data), ChatGPT relies solely on its training data. This makes it challenging for ChatGPT to comprehend the feedback from test results through an iterative process, leading to low test coverage.
However, the results also suggest two insights:
⋆Insight 1: As an AI-powered assistant, ChatGPT has a strong capability in perceiving semantics and context from the code under test. This means that ChatGPT can assist in generating test data effectively. By embedding an AI model or an NLP (Natural Language Processing) module within an SBST (Search-Based Software Testing) tool, the performance of the SBST tool can be greatly improved. This is because the tool will be able to comprehend and interpret complicated code structures and generate test cases based on them with higher accuracy and efficiency. As a result, developers can benefit from faster, more efficient testing and a more reliable software product; and
⋆Insight 2: Even though it cannot compare with EvoSuite, ChatGPT can still reach a relatively high code coverage (55.4%). Thus, ChatGPT can still serve as an entry-level tool for testing newcomers or as a backup option.
[boxrule=1pt,boxsep=1pt,left=2pt,right=2pt,top=2pt,bottom=2pt,title=Answer to RQ3: Code Coverage]
∙ For Evosuite, on average, the maximum SC can reach 77.4% for all projects; the minimum SC can reach 70.6%; and the average SC can reach 74.2%. In contrast, for ChatGPT, on average, the average SC can reach 55.4%;
∙ After examining 37 cases in which ChatGPT outperformed EvoSuite (in code coverage), our analysis suggests six potential scenarios where ChatGPT may be better suited. These findings contribute to a growing body of research exploring the efficacy of automated testing tools;
∙ The experimental results indicate EvoSuite is overwhelmingly better than ChatGPT in reaching higher code coverage for both big class cases and small class cases; and
∙ Two potential reasons for low code coverage can be: incomplete specifications; and lack of feedback mechanisms.
§.§ Bug Detection
RQ4: How effective are ChatGPT and SBST in generating test suites that detect bugs?
Motivation. The main use of generated test suites is finding buggy code in a program. Therefore, in this RQ, we evaluate the effectiveness of the generated test suites in detecting bugs.
Methodology. To evaluate the effectiveness of generated test suite in terms of detecting bugs, we first generate unit test suites for the target classes and examine whether the test suite can successfully capture the bug in the Defects4J benchmark. Note that, in this RQ, for fairness, we only run EvoSuite once to generate test cases.
Results.
Some bugs in Defects4J are logical bugs, which are triggered by Assertions. Unfortunately, we find that the Assertions generated by ChatGPT are sometimes not reliable. For example, Fig. <ref> illustrates a test case for Period in the Time project. The Assertion statement assertEquals(1000, p.getMillis()); is incorrect: the code segment under test is not buggy and the expected value should be 0 instead of 1000. ChatGPT makes an incorrect Assertion for this case. This means we cannot fully rely on the Assertions in ChatGPT-generated test cases to determine whether bugs are successfully triggered. However, manually checking the Assertions in ChatGPT-generated test cases can be effort-consuming and error-prone <cit.>. Therefore, in this RQ, we focus on bugs associated with Java Exceptions, such as NullPointerException and UnsupportedOperationException.
Table <ref> shows the experimental results. In the table, for each project, the higher values (e.g., higher code coverage) are highlighted in the comparison between the two approaches. Out of 212 bugs, 44 were successfully detected by test cases generated by ChatGPT, with an average statement code coverage of 50%. In contrast, test cases generated by EvoSuite successfully detected 55 bugs, with an average statement code coverage of 67%. From the comparison, we can also see that in some projects EvoSuite detected more bugs than ChatGPT, while in others ChatGPT detected more bugs than EvoSuite. For example, in the Chart project, EvoSuite had a higher coverage rate for bug detection than ChatGPT, yet ChatGPT still detected more bugs in some cases. It is worth noting that the coverage rates for both tools varied greatly across different projects, indicating that the effectiveness of each tool may depend on the specific characteristics of the project being tested. It is interesting to note that ChatGPT was able to detect bugs in some cases where EvoSuite was not, indicating that the two tools may complement each other and could be used together to improve bug detection.
By comparing the test cases generated by ChatGPT and EvoSuite, we find several possible reasons that LLM (e.g., ChatGPT) may not outperform Evosuite:
∙ As the input to ChatGPT can only be the class under test instead of the entire project (e.g., a jar file), it can be hard for ChatGPT to generate complex instances, which makes it difficult for the generated test cases to cover corner cases that expose bugs;
∙ As a large language model, ChatGPT generates/predicts content by taking a prompt or starting text as input and using its learned understanding of language to predict which words or phrases should come next. This prediction is based on the probability that a certain sequence of words would appear in the training data. It is highly possible that a commonly used case (i.e., a test case/datum in our context) holds a higher probability than an edge case; and
∙ By adopting the genetic algorithm to explore potential test suites capable of achieving higher code coverage, Evosuite may theoretically possess a greater probability of discovering bugs. Notably, such a feedback mechanism is presently absent in LLMs, such as ChatGPT, underscoring the potential benefits of combining SBST techniques with LLMs for program testing and bug detection.
It is also worth mentioning that the results presented do not reflect the capability of ChatGPT in finding or locating bugs. They only indicate the bug detection capability of ChatGPT-generated test cases.
[boxrule=1pt,boxsep=1pt,left=2pt,right=2pt,top=2pt,bottom=2pt,title=Answer to RQ4: Defects and Bug Detection]
∙ The test cases generated by ChatGPT can be misleading in finding logical-related bugs, as the Assertions generated can be incorrect and unreliable;
∙ Out of 212 bugs, 44 were successfully detected by test cases generated by ChatGPT, with an average statement code coverage of 50%. In contrast, test cases generated by EvoSuite successfully detected 55 bugs, with an average statement code coverage of 67%;
∙ Evosuite integrates a genetic algorithm to find test cases that can provide better code coverage and increase the chances of finding bugs. LLM tools like ChatGPT do not have this feedback mechanism. Thus, combining the SBST technique and LLM can improve software testing accuracy and bug detection.
§ LIMITATIONS, AND THREATS TO VALIDITY
§.§ Limitations
The results and experiments of this study are limited in two respects: (1) Given the need to manually query ChatGPT, our study is limited to the queries made for the study. As ChatGPT is closed-source, we cannot map our results to the details or characteristics of ChatGPT's internal model. We also do not know ChatGPT's exact training data, which means we cannot determine whether the exact responses to our queries are members of the training data; and (2) As ChatGPT is continuously being updated and trained, its responses can only reflect the performance of ChatGPT at the time we conducted our work (i.e., the ChatGPT Jan 30 (2023) Version).
§.§ Threats to Validity
To reduce the bias of manually selecting subject programs for testing, we reuse benchmarks (i.e., Defects4J and the DynaMOSA dataset) that have been used and studied in existing research. Furthermore, we also reuse the metrics presented in existing research works to calculate code coverage, code readability, and so forth. Another threat to internal validity comes from the randomness of the genetic algorithms. To reduce this risk, we repeat EvoSuite 30 times for every class. As for external validity, due to the size of the benchmarks, we do not attempt to generalize our results and conclusions.
§ RELATED WORK
Language Models. Language models are used in NLP for many tasks, such as, machine translation, question answering, summarization, text generation and so on <cit.>. To better understand language, models with massive parameters are trained on an extremely large corpus (i.e., LLM). Transformer <cit.> is constructed on stacked encoders and decoders. It leverages self-attention mechanism to weigh the importance of words in the input text, capturing long-range dependencies and relationships between words in the input. It is the base for many LLMs. ELMo <cit.> utilizes multi-layer bidirectional LSTM and provides high-quality word representations. GPT <cit.> and BERT <cit.> are built on the decoders (unidirectional) and encoders (bidirectional) of Transformer, respectively, using pre-training and fine-tuning techniques. GPT-2 <cit.> and GPT-3 <cit.> are the descendants of GPT. GPT-2 has a larger model size than GPT, and GPT-3 is larger than GPT-2. Moreover, with larger corpus, GPT-2 and GPT-3 introduce zero-shot and few-shot learning to make models adapt to Multitask. Codex <cit.> is obtained by training GPT-3 using Github code data. It is the model that powers GitHub Copilot <cit.>, a tool generating computer code automatically. InstructGPT <cit.> utilizes additional supervised learning and reinforcement learning from human feedback to fine-tune GPT-3, aligning LLM with users. ChatGPT <cit.> uses the same methods as InstructGPT and has the ability to answer follow-up questions.
Search-based Software Testing. SBST approaches test case generation as an optimization problem. The first SBST method to produce test data for functions with float-type inputs was put out by Miller et al. <cit.>. Many software testing methods <cit.> have made extensive use of SBST approaches. Most studies concentrate on (1) Search algorithms: Tonella <cit.> suggested iterating to generate one test case for each branch. A test suite for all branches was suggested by Fraser et al. <cit.>. Many-objective optimization techniques were presented by Panichella et al. <cit.>. To lower the expenses of computing, Grano et al. <cit.> developed a variation of DynaMOSA; (2) Enhancing fitness gradients: Arcuri et al. introduced testability transformations into API tests <cit.> For programs with complicated inputs. Lin et al. <cit.> suggested an approach to deal with the inter-procedural flag issue. A test seed synthesis method was suggested by Lin et al. to produce complicated test inputs <cit.>. Braione et al. <cit.> coupled symbolic execution and SBST; (3) Design of the fitness function: Xu et al. <cit.> suggested an adaptive fitness function for enhancing SBST; Rojas et al. <cit.> suggested combining multiple coverage criteria for fulfilling more requirements from developers. Gregory Gay experimented with various criterion combinations <cit.> to compare the usefulness of multi-criteria suites for spotting practical flaws. Zhou et al. <cit.> proposed a method to select coverage goals from multiple criteria instead of combining all goals; (4) Readability of created tests: Daka et al. <cit.> suggested naming tests by stating covered goals. Deep learning techniques were presented by Roy et al. <cit.>; (5) Applying SBST to more software fields such as Machine Learning libraries <cit.>, Android applications <cit.>, Web APIs <cit.>, and Deep Neural Networks <cit.>.
§ CONCLUSION
In this article, we present a systematic assessment of unit test suites generated by two state-of-the-art techniques: ChatGPT and SBST. We comprehensively evaluate test suites generated by ChatGPT from multiple critical perspectives, including correctness, readability, code coverage, and bug detection capability. Our experimental results demonstrate that (1) 69.6% of the ChatGPT-generated test cases can be successfully compiled and executed; (2) the most common code-style violations in the generated code were Indentation (for the Google style) and MissingJavadocMethod (for the SUN style), while the majority of the test cases exhibited low complexity; (3) EvoSuite outperforms ChatGPT in terms of code coverage by 19%; and (4) EvoSuite outperforms ChatGPT in terms of bug detection by 5%.
§ DATA AVAILABILITY
The experimental results and raw data are available at: <https://sites.google.com/view/chatgpt-sbst>
IEEEtran
10
url@samestyle
Zhu:1997
H. Zhu, P. A. V. Hall, and J. H. R. May, “Software unit test coverage and
adequacy,” ACM Comput. Surv., vol. 29, no. 4, p. 366–427, 1997.
Mark:12
M. Harman, S. A. Mansouri, and Y. Zhang, “Search-based software engineering:
Trends, techniques and applications,” ACM Computing Surveys (CSUR),
vol. 45, no. 1, pp. 1–61, 2012.
Fraser:13
G. Fraser and A. Arcuri, “Whole test suite generation,” IEEE
Transactions on Software Engineering, vol. 39, no. 2, pp. 276–291, 2013.
Zhou:23
Z. Zhou, Y. Zhou, C. Fang, Z. Chen, and Y. Tang, “Selectively combining
multiple coverage goals in search-based unit test generation,” in 37th
IEEE/ACM International Conference on Automated Software Engineering, 2022,
pp. 1–12.
Carlini:21
N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee,
A. Roberts, T. B. Brown, D. Song, U. Erlingsson et al., “Extracting
training data from large language models.” in USENIX Security
Symposium, vol. 6, 2021.
Brants:07
T. Brants, A. C. Popat, P. Xu, F. J. Och, and J. Dean, “Large language models
in machine translation,” 2007.
Raffel:22
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou,
W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a
unified text-to-text transformer,” J. Mach. Learn. Res., vol. 21,
no. 1, 2022.
Svyatkovskiy:20
A. Svyatkovskiy, S. K. Deng, S. Fu, and N. Sundaresan, “Intellicode compose:
Code generation using transformer,” in Proc. of ESEC/FSE, 2020, p.
1433–1443.
Alon:20
U. Alon, R. Sadaka, O. Levy, and E. Yahav, “Structural language models for
any-code generation,” 2019.
Poesia:22
G. Poesia, A. Polozov, V. Le, A. Tiwari, G. Soares, C. Meek, and S. Gulwani,
“Synchromesh: Reliable code generation from pre-trained language models,”
in Proc. of ICLR, 2022.
McBurney:16
P. W. McBurney and C. McMillan, “Automatic source code summarization of
context for java methods,” IEEE Transactions on Software Engineering,
vol. 42, no. 2, pp. 103–119, 2016.
Haiduc:10
S. Haiduc, J. Aponte, and A. Marcus, “Supporting program comprehension with
source code summarization,” in Proc. of ICSE, 2010, p. 223–226.
Zhang:20ICSE
J. Zhang, X. Wang, H. Zhang, H. Sun, and X. Liu, “Retrieval-based neural
source code summarization,” in Proc. of ICSE, 2020, p. 1385–1397.
McBurney:14
P. W. McBurney and C. McMillan, “Automatic documentation generation via source
code summarization of method context,” in Proc. of ICPC, 2014, p.
279–290.
Hu:18
X. Hu, G. Li, X. Xia, D. Lo, and Z. Jin, “Deep code comment generation,” in
Proc. of ICPC, 2018, p. 200–210.
Brown:20
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal,
A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss,
G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter,
C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark,
C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, “Language
models are few-shot learners,” in Advances in Neural Information
Processing Systems, vol. 33, 2020, pp. 1877–1901.
ChatGPT
OpenAI, “Chatgpt: Optimizing language models for dialogue,” 2023,
https://openai.com/blog/chatgpt/.
Tonella:04
P. Tonella, “Evolutionary testing of classes,” in Proc. of ISSTA,
2004, p. 119–128.
Panichella:15
A. Panichella, F. M. Kifetew, and P. Tonella, “Reformulating branch coverage
as a many-objective optimization problem,” in Proc. of ICST, 2015,
pp. 1–10.
Panichella:18
——, “Automated test case generation as a many-objective optimisation
problem with dynamic selection of the targets,” IEEE Transactions on
Software Engineering, vol. 44, no. 2, pp. 122–158, 2018.
Fraser:11
G. Fraser and A. Arcuri, “Evosuite: Automatic test suite generation for
object-oriented software,” in Proc. of ESEC/FSE, 2011, p. 416–419.
Vaswani:17
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez,
Ł. Kaiser, and I. Polosukhin, “Attention is all you need,”
Advances in neural information processing systems, vol. 30, 2017.
Devlin:18
J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep
bidirectional transformers for language understanding,” arXiv preprint
arXiv:1810.04805, 2018.
Raffel:20
C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou,
W. Li, and P. J. Liu, “Exploring the limits of transfer learning with a
unified text-to-text transformer,” The Journal of Machine Learning
Research, vol. 21, no. 1, pp. 5485–5551, 2020.
Ouyang:22
L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang,
S. Agarwal, K. Slama, A. Ray et al., “Training language models to
follow instructions with human feedback,” arXiv preprint
arXiv:2203.02155, 2022.
Artetxe:22
M. Artetxe, J. Du, N. Goyal, L. Zettlemoyer, and V. Stoyanov, “On the role of
bidirectionality in language model pre-training,” arXiv preprint
arXiv:2205.11726, 2022.
radford:19
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al.,
“Language models are unsupervised multitask learners,” OpenAI blog,
vol. 1, no. 8, p. 9, 2019.
radford:18
A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., “Improving
language understanding by generative pre-training,” 2018.
Lin:21
Y. Lin, Y. S. Ong, J. Sun, G. Fraser, and J. S. Dong, “Graph-based seed object
synthesis for search-based unit testing,” in Proc. of ESEC/FSE, 2021,
p. 1068–1080.
Defects4J
Defects4J, “Defects4j: A database of real faults and an experimental
infrastructure to enable controlled experiments in software engineering
research,” 2023, https://github.com/rjust/defects4j.
Evosuite
Evosuite, “Evosuite: Automatic test suite generation for java,” 2023,
https://www.evosuite.org/.
SpotBugs
SpotBugs, “Spotbugs,” 2023, https://spotbugs.github.io/index.html.
Findbugs
B. Pugh and D. Hovemeyer, “Findbugs,” 2023,
https://findbugs.sourceforge.net/.
Ayewah:08
N. Ayewah, W. Pugh, D. Hovemeyer, J. D. Morgenthaler, and J. Penix, “Using
static analysis to find bugs,” IEEE Software, vol. 25, no. 5, pp.
22–29, 2008.
IntelliJ
JetBrain, “Intellij idea – the leading java and kotlin ide,” 2023,
https://www.jetbrains.com/idea/.
SpotBugsDesc
Spotbugs, “Spotbug bug descriptions,” 2023,
https://spotbugs.readthedocs.io/en/stable/bugDescriptions.html.
CheckStyle
CheckStyle, “Checkstyle,” 2023, https://checkstyle.sourceforge.io/.
SUNJavaStyle
Oracle, “Code conventions for the java programming language,” 1999,
https://www.oracle.com/java/technologies/javase/codeconventions-contents.html.
GoogleJavaStyle
Google, “Google java style guide,” 2023,
https://google.github.io/styleguide/javaguide.html.
Dantas:21
C. E. C. Dantas and M. A. Maia, “Readability and understandability scores for
snippet assessment: an exploratory study,” arXiv preprint
arXiv:2108.09181, 2021.
PMD
P. S. C. Analyzer, “Pmd,” 2023, https://pmd.github.io/.
CognitiveComputing
sonarsource, “Cognitive computing: A new way of measuring understandability,”
2021, https://www.sonarsource.com/docs/CognitiveComplexity.pdf.
JaCoCo
M. G. . C. KG, “Jacoco java code coverage library,” 2023,
https://www.jacoco.org/jacoco/.
Vargha:20
A. Vargha and H. D. Delaney, “A critique and improvement of the cl common
language effect size statistics of mcgraw and wong,” Journal of
Educational and Behavioral Statistics, vol. 25, no. 2, pp. 101–132, 2000.
Rojas:2015
J. M. Rojas, J. Campos, M. Vivanti, G. Fraser, and A. Arcuri, “Combining
multiple coverage criteria in search-based unit test generation,” in
Search-Based Software Engineering: 7th International Symposium, 2015,
pp. 93–108.
Kavir:2011
K. Shrestha and M. J. Rutherford, “An empirical evaluation of assertions as
oracles,” in 2011 Fourth IEEE International Conference on Software
Testing, Verification and Validation, 2011, pp. 110–119.
Gunel:21
G. Jahangirova, D. Clark, M. Harman, and P. Tonella, “An empirical validation
of oracle improvement,” IEEE Transactions on Software Engineering,
vol. 47, no. 8, pp. 1708–1728, 2021.
Valerio:21
V. Terragni, G. Jahangirova, P. Tonella, and M. Pezzè, “Gassert: A fully
automated tool to improve assertion oracles,” in 2021 IEEE/ACM 43rd
International Conference on Software Engineering: Companion Proceedings
(ICSE-Companion), 2021, pp. 85–88.
Lan:19
Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, “Albert: A
lite bert for self-supervised learning of language representations,”
arXiv preprint arXiv:1909.11942, 2019.
zhang:20
Y. Zhang, S. Sun, M. Galley, Y.-C. Chen, C. Brockett, X. Gao, J. Gao, J. Liu,
and B. Dolan, “DIALOGPT : Large-scale generative pre-training for
conversational response generation,” in Proc. of ACL, 2020.
Pilault:20
J. Pilault, R. Li, S. Subramanian, and C. Pal, “On extractive and abstractive
neural document summarization with transformer language models,” in
Proc. of EMNLP, 2020, pp. 9308–9319.
Cai:21
X. Cai, S. Liu, J. Han, L. Yang, Z. Liu, and T. Liu, “Chestxraybert: A
pretrained language model for chest radiology report summarization,”
IEEE Transactions on Multimedia, pp. 845 – 855, 2021.
Khashabi:20
D. Khashabi, S. Min, T. Khot, A. Sabharwal, O. Tafjord, P. Clark, and
H. Hajishirzi, “Unifiedqa: Crossing format boundaries with a single qa
system,” arXiv preprint arXiv:2005.00700, 2020.
Cho:14
K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares,
H. Schwenk, and Y. Bengio, “Learning phrase representations using rnn
encoder-decoder for statistical machine translation,” arXiv preprint
arXiv:1406.1078, 2014.
Chen:21
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards,
Y. Burda, N. Joseph, G. Brockman et al., “Evaluating large language
models trained on code,” arXiv preprint arXiv:2107.03374, 2021.
Bui:21
N. D. Bui, Y. Yu, and L. Jiang, “Infercode: Self-supervised learning of code
representations by predicting subtrees,” in Proc. of ICSE. IEEE, 2021, pp. 1186–1197.
Peters:18
M. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and
L. Zettlemoyer, “Deep contextualized word representations. arxiv 2018,”
arXiv preprint arXiv:1802.05365, vol. 12, 2018.
Copilot
G. Copilot, “Your ai pair programmer,” 2023,
https://github.com/features/copilot/.
Miller:76
W. Miller and D. L. Spooner, “Automatic generation of floating-point test
data,” IEEE Transactions on Software Engineering, no. 3, pp.
223–226, 1976.
Li:07
Z. Li, M. Harman, and R. M. Hierons, “Search algorithms for regression test
case prioritization,” IEEE Transactions on Software Engineering,
vol. 33, no. 4, pp. 225–237, 2007.
Silva:17
R. A. Silva, S. d. R. S. de Souza, and P. S. L. de Souza, “A systematic review
on search based mutation testing,” Information and Software
Technology, vol. 81, pp. 19–35, 2017.
Walcott:06
K. R. Walcott, M. L. Soffa, G. M. Kapfhammer, and R. S. Roos, “Time-aware test
suite prioritization,” in Proc. of ISSTA, 2006, pp. 1–12.
Grano:19
G. Grano, C. Laaber, A. Panichella, and S. Panichella, “Testing with fewer
resources: An adaptive approach to performance-aware test case generation,”
IEEE Transactions on Software Engineering, vol. 47, no. 11, pp.
2332–2347, 2019.
Arcuri:21
A. Arcuri and J. P. Galeotti, “Enhancing search-based testing with testability
transformations for existing apis,” ACM Transactions on Software
Engineering and Methodology, vol. 31, no. 1, pp. 1–34, 2021.
Lin:20
Y. Lin, J. Sun, G. Fraser, Z. Xiu, T. Liu, and J. S. Dong, “Recovering fitness
gradients for interprocedural boolean flags in search-based testing,” in
Proc. of ISSTA, 2020, pp. 440–451.
Pietro:17
P. Braione, G. Denaro, A. Mattavelli, and M. Pezzè, “Combining symbolic
execution and search-based testing for programs with complex heap inputs,”
in Proc. of ISSTA, 2017, pp. 90–101.
Xu:17
X. Xu, Z. Zhu, and L. Jiao, “An adaptive fitness function based on branch
hardness for search based testing,” in Proc. of GECCO, 2017, pp.
1335–1342.
Miguel:15
J. M. Rojas, J. Campos, M. Vivanti, G. Fraser, and A. Arcuri, “Combining
multiple coverage criteria in search-based unit test generation,” in
Search-Based Software Engineering, M. Barros and Y. Labiche, Eds.,
2015, pp. 93–108.
Gay:17
G. Gay, “Generating effective test suites by combining coverage criteria,” in
Search Based Software Engineering, 2017, pp. 65–82.
Daka:17
E. Daka, J. M. Rojas, and G. Fraser, “Generating unit tests with descriptive
names or: Would you name your children thing1 and thing2?” in Proc. of
ISSTA, 2017, pp. 57–67.
Roy:20
D. Roy, Z. Zhang, M. Ma, V. Arnaoudova, A. Panichella, S. Panichella,
D. Gonzalez, and M. Mirakhorli, “Deeptc-enhancer: Improving the readability
of automatically generated tests,” in Proc. of ASE, 2020, pp.
287–298.
Wang:21
S. Wang, N. Shrestha, A. K. Subburaman, J. Wang, M. Wei, and N. Nagappan,
“Automatic unit test generation for machine learning libraries: How far are
we?” in Proc. of ICSE, 2021, pp. 1548–1560.
Dong:2020
Z. Dong, M. Böhme, L. Cojocaru, and A. Roychoudhury, “Time-travel testing
of android apps,” in Proc. of ICSE, 2020, pp. 481–492.
Martin:2021
A. Martin-Lopez, S. Segura, and A. Ruiz-Cortés, “Restest: automated
black-box testing of restful web apis,” in Proc. of ISSTA, 2021, pp.
682–685.
Haq:2021
F. U. Haq, D. Shin, L. C. Briand, T. Stifter, and J. Wang, “Automatic test
suite generation for key-points detection dnns using many-objective search
(experience paper),” in Proc. of ISSTA, 2021, pp. 91–102.
|
http://arxiv.org/abs/2307.01880v1
|
20230704185307
|
A note on inner amenability for FLC point sets
|
[
"Gabriel Favre"
] |
math.OA
|
[
"math.OA",
"43A07, 37A55"
] |
Inner amenability is a bridge between amenability of an object and amenability of its operator algebras. It is an open problem of Anantharaman-Delaroche to decide whether all étale groupoids are inner amenable. Approximate lattices and their dynamics have recently attracted increased attention and have been studied using groupoid methods. In this note, we prove that groupoids associated with approximate lattices in second countable locally compact groups are inner amenable. In fact, we show that this result holds more generally for point sets of finite local complexity in such groups.
MSC classification: 43A07;37A55.
Keywords: étale groupoids, inner amenability, irregular point sets.
§ INTRODUCTION
For locally compact groups, inner amenability takes its root in the work of Losert and Rindler <cit.> and was defined by Paterson in <cit.> as the existence of a Haar continuous conjugation invariant mean. It was shown by Lau-Paterson in <cit.> that a locally compact group is amenable if and only if it is both inner amenable and its von Neumann algebra is injective. Losert and Rindler showed in <cit.> that the class of inner amenable groups contains [IN]-groups and that inner amenability is in fact equivalent to having a conjugation invariant neighborhood of the identity for connected groups. More generally, locally compact groups which have a nuclear reduced C^*-algebra are inner amenable if and only if they are amenable. This follows from a result of Lau-Paterson (see <cit.>). In particular, inner amenable type I groups or almost connected groups are amenable. This follows from Connes in <cit.> for the almost connected case.
It has to be said that a different notion of inner amenability has been introduced for discrete groups by Effros in <cit.> requiring atomlessness of means. It was motivated by its connections[See also <cit.> for the comparison between property Γ and Effros' notion of inner amenability] with property Γ of the group's von Neumann algebra. In the present note, we do not require atomlessness of means. It makes discrete groups automatically inner amenable, as the Dirac measure at the identity is then a Haar continuous conjugation invariant mean.
The natural step of investigating generalizations of inner amenability beyond locally compact groups has been overtaken by Anantharaman-Delaroche. She has defined inner amenability for transformation groupoids in <cit.> and more generally for locally compact groupoids in <cit.>. Inner amenability of a locally compact groupoid can be roughly defined as the existence of a net of properly supported, continuous, positive type functions on the square of the groupoid which uniformly converges to 1 on compact subsets of the diagonal. Her notion of inner amenability has been shown to generalize the classical one for locally compact groups by Crann and Tanko in <cit.>. As in the group case, inner amenability provides a strong connection between topological amenability of a groupoid and amenability of its operator algebras. Indeed, Anantharaman-Delaroche showed in <cit.> that amenable transformation groupoids are precisely inner amenable transformation groupoids whose underlying action is also amenable. Another major result highlighting the role of inner amenability for étale groupoids is the equivalence between topological amenability, amenability at infinity and nuclearity of the reduced C^*-algebra for inner amenable étale groupoids.
The class of inner amenable groupoids has been shown to contain all transformation groupoids arising from discrete group actions. This led Anantharaman-Delaroche to ask the following natural question.
<cit.>
Are all étale groupoids inner amenable?
It needs to be emphasized that, outside of amenable groupoids and transformation groupoids coming from discrete group actions, very little is known about which étale groupoids are inner amenable. Our main result provides a class of examples of inner amenable groupoids which are not constructed as transformation groupoids.
The class of examples of étale groupoids which we study in this note comes from the study of point sets in locally compact second countable groups. In <cit.>, Björklund-Hartnick have developed the notion of approximate lattices and initiated their study. Approximate lattices in locally compact groups simultaneously generalize lattices in locally compact groups and Meyer sets in abelian locally compact groups. Analytical properties of approximate lattices and their consequences have recently received attention, see <cit.> and <cit.>.
Approximate lattices are the main motivating example of point sets in locally compact groups which satisfy a regularity condition known as finite local complexity. They provide a natural source of étale groupoids whose construction can be sketched the following way. Given a point set in a second countable locally compact group, consider the action of the group on the Chabauty-Fell closure of the orbit of the point set. Restricting the obtained transformation groupoid to the elements of the Chabauty-Fell closure of the orbit containing the identity of the group yields an étale groupoid when the point set is uniformly discrete. The groupoid obtained through this construction is called the groupoid of the point set. The main result of this paper shows that this groupoid is inner amenable, provided the point set is regular enough.
thm:intro
Let Λ⊆ G be a point set of finite local complexity in a locally compact group. Then the groupoid of Λ is inner amenable.
These groupoids appeared first in <cit.> for point sets in ℝ^n where the authors use this groupoid description and K-theoretic information to extract information on the underlying dynamical system. The groupoids associated to point sets in general locally compact second countable groups have been defined in greater generality in <cit.> where they have been used in order to obtain results in frame theory. As a consequence of thm:intro, all approximate lattices, even in non-amenable groups, give rise to inner amenable groupoids.
§ POINT SETS AND THEIR ASSOCIATED GROUPOIDS
In this section, we recall the necessary notions on point sets in locally compact groups and topological groupoids. The groupoid associated to point sets in locally compact groups can be found in <cit.>.
§.§ Point sets and their discrete hull
In this section, we introduce point sets in locally compact groups and an important dynamical object associated to each point set: its discrete hull.
Let G be a second countable locally compact group and consider the space 𝒞(G) of all closed subsets of G with the Chabauty-Fell topology given by the following subbasis of topology
O_K = {C ∈ 𝒞(G) | C ∩ K = ∅},
O_U = {C ∈ 𝒞(G) | C ∩ U ≠ ∅},
where U ranges over all the open subsets of G and K ranges over all the compact subsets of G. It is a fact that the space 𝒞(G) is compact and that it is second countable if G is second countable. The topology of 𝒞(G) is conveniently described in terms of the following convergence.
<cit.>
Let P_n, P ∈ 𝒞(G) for each n ∈ ℕ. Then (P_n)_n converges to P if and only if both of the
following statements hold:
* For all x∈ P there exists x_n∈ P_n such that (x_n)_n converges to x;
* For all increasing sequences of integers (n_k)_k and x_n_k∈ P_n_k such that (x_n_k)_k converges to some x∈ G, we have x∈ P.
The group G acts continuously on 𝒞(G) by left translation. One defines for Λ ∈ 𝒞(G) the set
Ω(Λ) := {gΛ | g ∈ G} ⊆ 𝒞(G),
the closure of all the translates of Λ in 𝒞(G), which is called the hull of Λ. As a closed subspace of 𝒞(G), it is compact. Consider the subset
Ω_0(Λ) := {P ∈ Ω(Λ) | e ∈ P}
of elements of the hull containing the identity, called the discrete hull of Λ. It is also compact, since it is closed in Ω(Λ). Before introducing the groupoid associated to point sets, let us recall the necessary groupoid terminology.
§.§ The groupoid of a point set
In this section, we first fix some general notations for topological groupoids and then explain how to associate groupoids to point sets. See <cit.> for a comprehensive treatment on general groupoid terminology.
A (topological) groupoid 𝒢 is a topological space together with a distinguished set of units 𝒢^(0) ⊆ 𝒢, continuous range and source maps r, s: 𝒢 → 𝒢^(0), a continuous inversion map γ ↦ γ^-1 from 𝒢 to itself and a multiplication map (γ,η) ↦ γη from
𝒢^(2) := {(γ,η) ∈ 𝒢^2 | s(γ) = r(η)}
to 𝒢 satisfying the following formulas.
* r(γ) = γγ^-1, s(γ) = γ^-1γ for all γ ∈ 𝒢;
* (γη)μ = γ(ημ) for all (γ,η), (η,μ) ∈ 𝒢^(2);
* (γ^-1)^-1 = γ for all γ ∈ 𝒢;
* r(γ)γ = γ s(γ) = γ for all γ ∈ 𝒢.
In this note, groupoids will always be locally compact and Hausdorff. A bisection of a groupoid 𝒢 is a subset U ⊆ 𝒢 such that the restrictions s|_U and r|_U are injective. A groupoid is étale if its topology has a basis consisting of open bisections. It is called ample if its topology has a basis consisting of compact open bisections. Equivalently, an étale groupoid is ample if (and only if) its unit space is totally disconnected.
If 𝒢 is a groupoid and A ⊆ 𝒢^(0) is a set of units, one obtains a groupoid by restricting the unit space of 𝒢 to A, denoted by 𝒢|_A = {g ∈ 𝒢 | s(g), r(g) ∈ A}.
We now introduce a groupoid construction associated to a point set in a locally compact group. We use the group action on the hull to get a transformation groupoid, and then restrict its unit space to the discrete hull. A definition and study of groupoids associated with general point sets in locally compact groups can be found in <cit.>.
For a point set Λ ⊆ G, the group G acts on the hull Ω(Λ) and one can form a transformation groupoid G ⋉ Ω(Λ). By restricting the unit space of G ⋉ Ω(Λ) to Ω_0(Λ), one obtains a groupoid which will be denoted by 𝒢(Λ) = G ⋉ Ω(Λ)|_Ω_0(Λ). The unit space of 𝒢(Λ) is Ω_0(Λ). For γ = (x,P) ∈ 𝒢(Λ), the source, range and inverse are given by s(x,P) = P, r(x,P) = xP and (x,P)^-1 = (x^-1, xP). The multiplication in 𝒢(Λ) is given by (y,xP)(x,P) = (xy,P). The groupoid 𝒢(Λ) has the following description which will be repeatedly used in the sequel.
𝒢(Λ) = {(x,P) ∈ G × Ω_0(Λ) | x^-1 ∈ P}.
Note that the topology of 𝒢(Λ) can be described by the convergence of sequences, since 𝒢(Λ) is second countable.
§.§ Regularity notions for point sets
In this section, we recall the notions of uniform discreteness and finite local complexity of point sets and describe the impact either property has on the associated groupoid.
A point set Λ in a locally compact group G is called uniformly discrete if there exists a symmetric open neighborhood U of the identity such that for all P∈Ω(Λ), we have
| P ∩ U|≤ 1.
If such a symmetric open neighborhood U of the identity exists, we say that Λ is U-discrete.
It is not hard to show that it is enough to check the condition (<ref>) on translates of the point set (see <cit.>). For uniformly discrete point sets, there is an explicit basis for the topology of 𝒢(Λ) consisting of open bisections, described by the following proposition.
prop:udiscrete-implies-etale<cit.>
Let Λ ⊆ G be U_0-discrete. Let V be a symmetric open neighborhood of the identity with VV ⊂ U_0, W an open subset of Ω_0(Λ) and x ∈ G. The set
U_x,V,W = ((xV ∩ Vx) × W) ∩ 𝒢(Λ)
is an open bisection. Further, the collection of all such sets forms a basis of the topology of 𝒢(Λ) and, in particular, 𝒢(Λ) is an étale groupoid.
In fact, uniformly discrete point sets are precisely the point sets Λ whose associated groupoid is étale (see <cit.>). Now we introduce another standard regularity notion.
A point set Λ ⊆ G is said to have finite local complexity (FLC) if for any compact K ⊆ G, there exists a finite collection ℱ ⊆ 2^G of finite subsets such that for any P ∈ Ω(Λ), we have
K ∩ P = hF,
for some h ∈ G and F ∈ ℱ.
It is enough to check the condition (<ref>) on translates of the point set. It is a classical fact that a point set has finite local complexity if and only if Λ^-1Λ is uniformly discrete, which is in turn equivalent to the existence of a finite set F ⊆ G such that Λ^-1Λ ⊆ FΛ (see <cit.>). Applying the condition (<ref>) to a small compact neighborhood of the identity, it is not hard to see that the FLC condition implies uniform discreteness.
Let Λ be an FLC point set in a second countable locally compact group. Then the discrete hull Ω_0(Λ) is totally disconnected.
Fix a compact set K ⊆ G and denote by ℱ ⊆ 2^G the finite collection of finite subsets of G given by the FLC condition applied to K. For any P ∈ Ω_0(Λ), we have K ∩ P = hF, for some h ∈ G and F ∈ ℱ. The sets
A_F,K := {P ∈ Ω_0(Λ) | ∃ h ∈ G, K ∩ P = hF}
are clearly disjoint and closed for every F ∈ ℱ. Since the collection {A_F,K}_F∈ℱ covers Ω_0(Λ), each A_F,K is also open. To finish the proof, we argue that for any P, Q ∈ Ω_0(Λ) with P ≠ Q, we can find a compact set K ⊆ G and F ∈ ℱ such that P ∈ A_F,K and Q ∉ A_F,K.
Choose two such sets P, Q ∈ Ω_0(Λ) and without loss of generality let x ∈ P ∖ Q. Pick a compact set K ⊆ G such that K ∩ P = {e,x} and x^-1 ∉ K. By applying the FLC condition to K we find a finite family ℱ of finite subsets of G and two elements g, h ∈ G such that K ∩ P = gF and K ∩ Q = hF' for some F, F' ∈ ℱ. Since e ∈ Q and x ∉ Q, there is an element y ∈ Q such that K ∩ Q = {e,y}. Assume towards a contradiction that F = F'. Then, there is an equality of sets
F= {g^-1,g^-1x}={h^-1,h^-1y}.
Since x∉Q, then x≠ y which forces g^-1=h^-1y and h^-1=g^-1x. But then g^-1=h^-1y=g^-1xy. This implies x=y^-1, which is a contradiction since y=x^-1∉K by assumption on K. Hence P∈ A_F,K and Q∉A_F,K, as desired.
Note also that the groupoid 𝒢(Λ) is ample when Λ is FLC, since it is both étale and has a totally disconnected unit space.
ex:lattice
Let Λ be a lattice in a locally compact group G. Then Λ is FLC since Λ^-1Λ = Λ is uniformly discrete. The groupoid 𝒢(Λ) can be described in the following way. Let P ∈ Ω_0(Λ). There exists a sequence {λ_n}_n∈ℕ ⊆ Λ such that P = lim λ_n^-1Λ. Since λ_n^-1Λ = Λ for all n ∈ ℕ, we conclude that P = lim_n Λ = Λ. This shows that Ω_0(Λ) = {Λ}. One can further identify Ω(Λ) ∖ {∅} with G/Λ as a G-space. Putting everything together, one obtains:
𝒢(Λ) = {(x,P) ∈ G × Ω(Λ) | x^-1 ∈ P and xP, P ∈ Ω_0(Λ)}
= {(x,gΛ) ∈ G × G/Λ | gΛ = xgΛ = Λ}
= {(x,Λ) ∈ G × {Λ} | x ∈ Λ} = Λ.
Hence, in the case where Λ is a lattice, the groupoid construction recovers Λ. For general point sets, the G-space Ω(Λ) ∖ {∅} appears to be a good replacement for G/Λ. Recall that a lattice Λ in a group G is a discrete subgroup such that the G-space G/Λ has a finite invariant measure. By relaxing the subgroup condition and replacing G/Λ by Ω(Λ), one obtains the notion of strong approximate lattice (see <cit.>).
ex:Meyer-set
Let G and H be second countable locally compact groups and call π_G and π_H the projections from G× H onto G and H respectively. Let Γ⊆ G× H be a lattice such that π_G is injective when restricted to Γ and such that π_H(Γ) is dense in H. Let W be a symmetric compact neighborhood of the identity in H. The set
Λ=Λ(Γ,W) = π_G((G× W)∩Γ)⊆ G
is called a model set. Note that
Λ^-1Λ⊆π_G((G× W^-1W)∩Γ),
and the latter is uniformly discrete. Hence Λ is FLC.
§ INNER AMENABILITY OF GROUPOIDS ASSOCIATED WITH POINT SETS
In this section, we introduce inner amenability and prove the main result of this note (see thm:inner-amenability-FLC), namely that groupoids associated with FLC point sets in second countable locally compact groups are inner amenable.
§.§ Inner amenable groupoids
We will summarize <cit.>, which presents the current state of the art concerning inner amenability of locally compact second countable groupoids. The original definition and idea of inner amenability come from the following equivalence for locally compact groups in <cit.>. A locally compact group G admits a conjugation invariant mean on L^∞(G) if and only if there is a net ξ_i of positive functions in C_0(G), lying in the unit ball of L^2(G), such that the matrix coefficients ⟨ξ_i,λ_sρ_sξ_i⟩ converge uniformly on compact subsets of G to 1. A locally compact group satisfying this property is called inner amenable. Denoting by f_i the matrix coefficients
f_i: G × G → ℂ: (s,t) ↦ ⟨ξ_i, λ_sρ_tξ_i⟩,
one sees that f_i is a net of continuous, positive definite functions converging uniformly to 1 on the diagonal of G × G such that supp(f_i) ∩ (G × K) and supp(f_i) ∩ (K × G) are compact for every i and every compact K ⊆ G. In fact (see <cit.>) the existence of such a net of continuous functions is equivalent to the existence of a conjugation invariant mean on L^∞(G). Note that the existence of Dirac measures at the identity witnesses inner amenability for discrete groups. This definition of inner amenability in terms of positive type functions has been generalized to the framework of groupoids. The study of inner amenability for groupoids associated with point sets in locally compact groups is the content of this piece.
We now introduce the appropriate notion of inner amenability for groupoids (see <cit.>). For a groupoid 𝒢, a function f on 𝒢 × 𝒢 is properly supported if for all compact K ⊆ 𝒢, the sets supp(f) ∩ (K × 𝒢) and supp(f) ∩ (𝒢 × K) are compact. A function f is said to be of positive type if for all units x, y ∈ 𝒢^(0), and γ_1, …, γ_n ∈ 𝒢^x, η_1, …, η_n ∈ 𝒢^y, the matrix (f(γ_i^-1γ_j, η_i^-1η_j))_i,j is positive definite.
<cit.>
A groupoid 𝒢 is inner amenable if for all compact K ⊆ 𝒢 and ϵ > 0, one can find a positive type, continuous, properly supported function f on 𝒢 × 𝒢 with |f(γ,γ) - 1| < ϵ for all γ ∈ K.
The existence of such functions for amenable locally compact groupoids is classical, see <cit.>. So amenable groupoids are inner amenable. When the groupoid is a discrete group Γ, and (λ×ρ, ℓ^2(Γ)) denotes the product of the left and right regular representations on ℓ^2(Γ), then the matrix coefficient associated with δ_e ∈ ℓ^2(Γ) in the representation λ×ρ witnesses inner amenability of Γ. Recall that a continuous map between groupoids which respects the source, target and multiplication is called a groupoid morphism. A groupoid morphism ρ: ℋ → 𝒢 is locally proper if the map
ψ = ρ × s × r : ℋ → 𝒢 × ℋ^(0) × ℋ^(0)
g ↦ (ρ(g), s(g), r(g)),
is proper. An example of such a map is the inclusion map of a closed subgroupoid (see <cit.>). The main tool to find examples of inner amenable groupoids is the following proposition.
<cit.>prop:locally-proper
Let 𝒢 and ℋ be locally compact groupoids. Assume there exists a locally proper groupoid morphism ρ: ℋ → 𝒢. If 𝒢 is inner amenable, then so is ℋ.
Using the inclusion map of closed subgroupoids, the fact that closed subgroupoids of inner amenable groupoids are inner amenable can be easily deduced. Another consequence of prop:locally-proper is that étale transformation groupoids are inner amenable. Indeed, if a discrete group Γ acts on a space X, then the projection map X⋊Γ→Γ is locally proper and Γ is inner amenable because it is a discrete group, hence the inner amenability of X⋊Γ. After establishing this list of properties, Anantharaman-Delaroche asks whether all étale groupoids are inner amenable (see <cit.>). The groupoid attached to a point set in a locally compact group is an example of an inner amenable étale groupoid, which is not a transformation groupoid known to be inner amenable, unless the point set is a lattice or the group is inner amenable.
§.§ Inner amenability for FLC point sets
In this section, we prove that groupoids attached to point sets with finite local complexity in second countable locally compact groups are inner amenable. In what follows, we fix a locally compact second countable topological group G and an FLC point set Λ⊆ G. Recall that Λ is in particular uniformly discrete and we will denote by U⊆ G an open set with respect to which Λ is U-discrete.
The following lemma will be central to the construction of continuous functions from groupoids associated with FLC point sets.
lem:FLC-discretehull
Let Λ ⊆ G be an FLC point set in a locally compact group. Then the set X = ⋃_{P ∈ Ω_0(Λ)} P is discrete.
It is enough to show that for every compact set K ⊆ G, K ∩ X is finite. Let K be such a set. We may assume that the identity element e of G is in K; otherwise we may replace K by a bigger compact set K' in G containing K and e. Let ℱ ⊆ 2^G be a finite collection of finite subsets of G given by the FLC condition applied to K. In other words, for every P ∈ Ω_0(Λ), there exist h ∈ G and F ∈ ℱ such that P ∩ K = hF. We claim that
X ∩ K ⊆ ⋃_{F ∈ ℱ, h ∈ F^-1} hF,
which is finite. Let x ∈ X ∩ K. Then there exists P ∈ Ω_0(Λ) such that x ∈ P ∩ K. Let h ∈ G and F ∈ ℱ be such that P ∩ K = hF. Since e ∈ P ∩ K, one has h^-1 ∈ F, hence h ∈ F^-1. As a consequence,
x ∈ hF ⊆ F^-1F ⊆ ⋃_{F ∈ ℱ, h ∈ F^-1} hF,
as desired.
We state and prove the main result of this note now.
thm:inner-amenability-FLC
Let Λ ⊆ G be an FLC point set in a locally compact second countable group G. Then the groupoid 𝒢(Λ) is inner amenable.
Let δ_e: G → {0,1} denote the Dirac function at e ∈ G defined by δ_e(g) = 1 if and only if g = e, and 0 otherwise. We claim that the function
ψ: 𝒢(Λ) × 𝒢(Λ) → ℝ
((x,P),(y,Q)) ↦ δ_e(x^-1y)
is continuous, positive type, properly supported and satisfies ψ(γ,γ) = 1 for any γ ∈ 𝒢(Λ). In particular, ψ witnesses the inner amenability of 𝒢(Λ).
We first check that ψ is properly supported. Let K ⊆ 𝒢(Λ) be compact. Since every compact set in 𝒢(Λ) is a closed subset of {(x,P) ∈ 𝒢(Λ) | x ∈ C} for some compact C ⊆ G, we may assume that K is of this form.
Then
supp(ψ) ∩ (K × 𝒢(Λ)) ⊆ ((C × Ω_0(Λ)) ∩ 𝒢(Λ)) × ((C × Ω_0(Λ)) ∩ 𝒢(Λ)),
which is compact. A similar argument also shows that supp(ψ) ∩ (𝒢(Λ) × K) is compact.
Further, observe that
ψ(γ,γ)=δ_e(x^-1x)=δ_e(e)=1,
for any γ = (x,P) ∈ 𝒢(Λ).
Now, we show that ψ is of positive type. Let P, Q ∈ Ω_0(Λ) = 𝒢(Λ)^(0) be units, γ_1, …, γ_n ∈ 𝒢(Λ)^P and η_1, …, η_n ∈ 𝒢(Λ)^Q. Say γ_1 = (x_1, x_1^-1P), …, γ_n = (x_n, x_n^-1P) and η_1 = (y_1, y_1^-1Q), …, η_n = (y_n, y_n^-1Q) for some x_1, …, x_n ∈ P and y_1, …, y_n ∈ Q. Let M be the following matrix.
(M_ij)_ij=(ψ(γ_i^-1γ_j,η_i^-1η_j))_i,j=(δ_e(x_j^-1x_iy_i^-1y_j))_i,j∈ M_n({0,1}).
The matrix M is symmetric since M_ij = 1 if and only if x_j^-1x_i = y_j^-1y_i, which is equivalent to x_i^-1x_j = y_i^-1y_j, which is in turn equivalent to M_ji = 1. Note also that M_ik = 1 if M_ij = M_jk = 1. Indeed, the latter condition implies that both x_j^-1x_i = y_j^-1y_i and x_k^-1x_j = y_k^-1y_j hold. Multiplying the two equations, we obtain x_k^-1x_i = y_k^-1y_i, which implies M_ik = 1. Since moreover M_ii = 1 for every i, the relation M_ij = 1 is an equivalence relation on the indices. As a consequence, up to reordering the columns and rows of M, the matrix M is block diagonal, M = diag(M_1, …, M_k), where each block M_j has all entries equal to 1 and is of size l_j × l_j with l_1 + ⋯ + l_k = n. In other words, the matrix M is a sum of (multiples of) orthogonal projections, hence it is positive.
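For illustration (this concrete instance is ours, not part of the original argument), a 3×3 matrix with the block structure just described decomposes as a positive combination of orthogonal projections:

```latex
% Hypothetical example: indices 1 and 2 are related (M_{12}=1), index 3 only to itself.
\[
M=\begin{pmatrix}1&1&0\\ 1&1&0\\ 0&0&1\end{pmatrix}
 =2P_{1}+P_{2},\qquad
P_{1}=\tfrac{1}{2}\begin{pmatrix}1&1&0\\ 1&1&0\\ 0&0&0\end{pmatrix},\quad
P_{2}=\begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix}.
\]
% P_1 and P_2 are orthogonal projections, so M is positive semidefinite.
```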
Lastly, we check the continuity of ψ. Denote by X the space ⋃_{P ∈ Ω_0(Λ)} P. Denoting by π: 𝒢(Λ) → X the projection on the first coordinate, we get that ψ = ψ̃ ∘ (π × π) for some map ψ̃: X × X → ℝ. Since X is discrete by lem:FLC-discretehull, the map ψ̃ is continuous. Hence ψ is continuous.
The groupoids 𝒢(Λ) and G ⋉ Ω^×(Λ) are equivalent (see <cit.>). Hence, we obtain the following consequence as an application of thm:inner-amenability-FLC together with <cit.>.
cor:inner-amenability
Let Λ be an FLC point set in a locally compact second countable group G. The following are equivalent.
* The groupoid 𝒢(Λ) is (topologically) amenable;
* the action G↷Ω(Λ) is amenable;
One of the applications of thm:inner-amenability-FLC is establishing that the groupoids associated with approximate lattices are inner amenable, even in non-amenable groups. In particular, the groupoids associated to model sets (see ex:Meyer-set) are inner amenable. An interesting class of examples of model sets arise from lattices in arithmetic groups.
Consider the embedding
ι: ℤ[√2] → ℝ × ℝ
a + b√2 ↦ (a + b√2, a - b√2).
Using this embedding, we view SL_n(ℤ[√2]) as an irreducible lattice in SL_n(ℝ) × SL_n(ℝ). If W is any symmetric compact neighborhood of the identity in SL_n(ℝ) and π_1: SL_n(ℝ) × SL_n(ℝ) → SL_n(ℝ) denotes the projection on the first coordinate, then the model set
Λ = π_1(SL_n(ℤ[√2]) ∩ (SL_n(ℝ) × W))
is FLC (see Example <ref>). As a consequence of thm:inner-amenability-FLC, the groupoid 𝒢(Λ) is inner amenable. Observe also that there is an SL_n(ℝ)-equivariant inclusion ι: L^∞(SL_n(ℝ)) → L^∞(SL_n(ℝ) × Ω(Λ)) induced by the projection SL_n(ℝ) × Ω(Λ) → SL_n(ℝ). If 𝒢(Λ) were amenable, then by cor:inner-amenability there would exist a G-invariant mean on L^∞(SL_n(ℝ) × Ω(Λ)). Precomposing that mean with the inclusion ι would give an SL_n(ℝ)-invariant (for the left translation action) mean on L^∞(SL_n(ℝ)), which would imply the amenability of SL_n(ℝ). Hence the groupoid 𝒢(Λ) is not amenable.
One can replace the number 2 by any square-free positive integer equal to 2 or 3 modulo 4 in Example <ref>. More generally, one can consider any totally real Galois extension F of degree n over ℚ. The analogue of Example <ref> holds provided ℤ[√2] is replaced by the ring of integers 𝒪_F of F and the embedding ι is replaced by
ι: 𝒪_F → ℝ^n
z ↦ (σ_1(z), …, σ_n(z)),
where σ_1, …, σ_n are the real embeddings of F over ℚ.
|
http://arxiv.org/abs/2307.00314v1
|
20230701120317
|
Detection of River Sandbank for Sand Mining with the Presence of Other High Mineral Content Regions Using Multi-spectral Images
|
[
"Jit Mukherjee"
] |
cs.CV
|
[
"cs.CV"
] |
Sand mining is a booming industry. The river sandbank is one of the primary sources of sand mining.
Detection of potential river sandbank regions for sand mining directly impacts the economy, society, and environment.
In the past, semi-supervised and supervised techniques have been used to detect mining regions including sand mining.
A few techniques employ multi-modal analysis combining different modalities such as multi-spectral imaging, synthetic aperture radar (SAR) imaging, aerial images, and point cloud data.
However, the distinguishing spectral characteristics of river sandbank regions are yet to be fully explored.
This paper provides a novel method to detect river sandbank regions for sand mining using multi-spectral images without any labeled data over the seasons.
Association with a river stream and the abundance of minerals are the most prominent features of such a region.
The proposed work uses these distinguishing features to determine the spectral signature of a river sandbank region, which is robust to other high mineral abundance regions.
It follows a two-step approach, where first, potential high mineral regions are detected and next, they are segregated using the presence of a river stream.
The proposed technique provides average accuracy, precision, and recall of 90.75%, 85.47%, and 73.5%, respectively over the seasons from Landsat 8 images without using any labeled dataset.
*Jit Mukherjee, [email protected]
§ INTRODUCTION
River sand is a widely used construction material with high economic and ecological value.
Rapid urbanization in several developing countries such as India, has increased its requirement leading to uncontrolled and illicit mining impacting the environment.
Excavation of sand from river sandbank has a few advantages as it can avoid flood inundation to a certain degree.
However, the abundance of sand mining and illicit, unmonitored sand quarries create disastrous circumstances due to their effects on river morphology, riverbed stability, change in the water table, erosion, surrounding eco-system, and habitation <cit.>.
As sand formation takes a few hundred years in the natural process, monitoring and detection of potential sand mining regions has a high impact on the environment.
Earlier, geodetic surveys, which are tedious, were used for monitoring and measurement of excavated sand.
Satellite imaging has been found to be instrumental for detecting and monitoring land classes remotely.
In recent advances, different modalities of remote sensing are used in various aspects of mining including sand mining <cit.>.
§.§ Related Works
Remote sensing applications in sand mining have multifold research challenges such as detection of illicit mining, monitoring, volumetric control, etc. <cit.>.
In the literature,
multimodal analysis is employed for such applications mostly using labelled data.
In <cit.>, a multi-modal analysis composed of multi-spectral and interferometric synthetic aperture radar (InSAR) images is used to quantify ex-sand mining areas and related land deformation.
The study is divided into two parts viz. detection of sand mining regions
by the top of atmosphere (TOA) reflectance
using Landsat 5 and 8 images and thereafter, computation of the digital elevation model for quantification of land deformation.
Delineation of the Tasseled cap transformation's scatter plot is studied
for the identification of ex-sand mine areas
<cit.>.
Integration of differential global positioning systems and unmanned aerial vehicles (UAV) has been considered to monitor and quantify sand quarry volumes <cit.>.
Mining regions are separated through coordinate points and images using a supervised deep learning technique in <cit.>.
These techniques focus more on detecting mining regions using a marked dataset rather than defining the distinguishing characteristics of a sand mining region.
There is a significant research gap in identifying river trails and sandbank regions using multi-spectral images exclusively.
Moreover, most of these techniques do not focus on the surrounding regions of a sand mine.
Detection of river trails can be instrumental in detecting such regions.
Digital elevation model (DEM) data has been studied to detect drainage and different topographic patterns <cit.>.
Accurate DEM generation requires a short temporal baseline, a suitable perpendicular baseline, and suitable atmospheric conditions.
Processed DEM products, such as those from the advanced spaceborne thermal emission and reflection radiometer (ASTER) and the shuttle radar topography mission (SRTM), may not be acquired at regular intervals in most cases.
Multi-spectral images, e.g., Landsat 8, are obtained at regular intervals, which is crucial for monitoring such a dynamic land class.
Spectral indexes are widely used in remote sensing such as normalized difference vegetation index (NDVI), normalized difference water index (NDWI) <cit.>, bare soil index (BI), etc.
In a few works, characteristics of spectral indexes such as bare soil index (BI) have been exploited to identify high mineral content regions such as river sandbanks <cit.>.
Hydrothermally altered mineral assemblages near mine water bodies have been mapped using clay mineral and iron oxide ratio <cit.>.
However, the performances of these indexes in sand mining regions are yet to be fully explored.
§.§ Objectives
Semi-supervised or supervised detection of mining land classes has the prerequisite of a labeled dataset.
Moreover, a few of them employ multi-modal analysis and DEM data, which can be difficult to avail in certain scenarios.
Distinguishing spectral characteristics for the detection of river sandbank regions through geophysical and spectral indexes are yet to be explored.
In the literature, the detection of river sandbank regions with the presence of other regions with high mineral abundance has not been thoroughly discussed.
Thus, the objective of this paper is to detect river sandbank regions without any labeled dataset by investigating the spectral characteristics of river sandbank and their surroundings using multi-spectral images exclusively.
Furthermore, it aims to provide a technique robust to seasonal variation and other land classes with high mineral abundance.
The proposed technique can be found beneficial in different aspects of the environmental monitoring and sand mining industry.
§ BACKGROUND TECHNIQUES
Background techniques, which are used in the proposed method, are described below.
§.§ Modified Normalized Difference Water Index
Many techniques and indexes are proposed to detect water bodies, such as Normalized Difference Water Index (NDWI), modified normalized difference water index (MNDWI), Automated Water Extraction Index (AWEI) <cit.>, etc.
NDWI is defined as (λ_NIR-λ_SWIR-I)/(λ_NIR+λ_SWIR-I).
Another variant of NDWI is also proposed as (λ_Green-λ_NIR)/(λ_Green+λ_NIR) <cit.>.
In this work, the modified normalized difference water index (MNDWI) has been used because it enhances open water features and removes built-up noise <cit.>.
MNDWI is proposed as a spectral index of the green and short wave infra-red bands, i.e., (λ_Green-λ_SWIR-I)/(λ_Green+λ_SWIR-I).
Higher values of MNDWI enhance water body regions.
Here, λ_Green, λ_NIR, and λ_SWIR-I are defined as the spectral reflectance values of green, near infra-red, and short wave infra-red one bands, respectively.
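As a minimal illustration (not the authors' released code), MNDWI can be computed from Landsat 8 reflectance arrays as below; the band arrays and the water threshold are placeholders, not values fixed by this paper.

```python
import numpy as np

def mndwi(green, swir1):
    """Modified Normalized Difference Water Index (higher values enhance open water)."""
    return (green - swir1) / (green + swir1 + 1e-12)  # epsilon guards against division by zero

# Hypothetical reflectance patches (Landsat 8: green = band 3, SWIR-I = band 6).
green = np.array([[0.10, 0.08], [0.07, 0.12]])
swir1 = np.array([[0.04, 0.15], [0.16, 0.03]])

water_mask = mndwi(green, swir1) > 0.0  # illustrative threshold for water pixels
```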
§.§ Coal Mine Index (CMI)
Coal mine index (CMI) is defined as a spectral ratio of short wave infra-red band one and short wave infra-red band two, i.e., (λ_SWIR-I-λ_SWIR-II)/(λ_SWIR-I+λ_SWIR-II).
Here, λ_SWIR-I, and λ_SWIR-II are defined as the reflectance of short wave infra-red bands one and two, respectively.
Lower values of CMI have been used to detect surface coal mining regions in <cit.>.
It has been observed that regions of high mineral contents can be detected using a threshold over CMI <cit.>.
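The two indexes above reduce to simple band arithmetic. The following sketch, assuming the Green, SWIR-I, and SWIR-II reflectance bands have already been loaded as NumPy arrays of equal shape, illustrates how MNDWI and CMI can be computed; the band variable names, the `eps` guard, and the example thresholds are illustrative assumptions rather than part of the original method.

```python
import numpy as np

def normalized_difference(band_a, band_b, eps=1e-12):
    """Generic normalized difference (a - b) / (a + b) with a small guard
    against division by zero."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    return (a - b) / (a + b + eps)

def mndwi(green, swir1):
    """Modified normalized difference water index: higher values enhance water."""
    return normalized_difference(green, swir1)

def cmi(swir1, swir2):
    """Coal mine index: lower values indicate high mineral content."""
    return normalized_difference(swir1, swir2)

# Example with synthetic reflectance arrays standing in for Landsat 8 bands.
rng = np.random.default_rng(0)
green, swir1, swir2 = (rng.uniform(0.0, 0.6, (100, 100)) for _ in range(3))
water_mask = mndwi(green, swir1) > 0.0          # illustrative threshold
cmi_values = cmi(swir1, swir2)
mineral_mask = (cmi_values > 0.0) & (cmi_values < 0.1)
```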
§.§ Connected Component Labeling
Connected component labeling or analysis is an image processing technique to detect connected regions mostly in binary images.
It can also be applied to higher-dimensional data.
In connected component analysis, the whole image is scanned and each unmarked pixel is visited.
During this scan, all similar pixels are grouped together based on pixel connectivity.
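As a minimal sketch of how such labeling is typically performed on a binary mask, the snippet below uses scipy.ndimage.label; the 8-connected structuring element and the optional size filtering are illustrative choices, not prescriptions of the original work.

```python
import numpy as np
from scipy import ndimage

def connected_components(binary_mask, min_size=0):
    """Label connected regions of a binary mask (8-connectivity) and
    optionally discard components smaller than min_size pixels."""
    structure = np.ones((3, 3), dtype=int)   # 8-connected neighbourhood
    labels, n_components = ndimage.label(binary_mask, structure=structure)
    if min_size > 0:
        sizes = ndimage.sum(binary_mask, labels, range(1, n_components + 1))
        keep = [i + 1 for i, s in enumerate(sizes) if s >= min_size]
        labels = np.where(np.isin(labels, keep), labels, 0)
    return labels, n_components
```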
§.§ Morphological Dilation & Image Thinning
Morphological dilation and erosion are the two basic morphological operations in image processing.
In dilation, a structural element is used to probe and expand the shapes in an image.
It increases the size of an object and fills holes.
Areas separated by gaps smaller than the structural element become connected through this operation.
Morphological dilation is represented as A⊕ B, where A is an image object and B is a structuring element.
Image thinning, a morphological operation, is applied on binary images to
obtain the skeleton structures of different shapes by removing black foreground pixels layer by layer.
Image thinning is used here for the skeletonization of detected water bodies.
Skeletonization is the process of creating a thinned version of a shape, which is equidistant to the boundaries.
As image thinning uses a hit-or-miss transform, it preserves the shape topology.
The algorithm proposed in <cit.> has been used here for thinning the outcome of the dilation.
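A compact sketch of these two morphological steps with standard libraries is given below; the 3×3 square structuring element, the number of dilation iterations, and the use of skimage.morphology.thin are assumptions made for illustration, not the exact settings of the original work.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import thin

def dilate_and_thin(binary_mask, dilation_iters=2):
    """Dilate a binary mask to close small gaps, then thin the result to a
    one-pixel-wide skeleton that preserves the shape topology."""
    structure = np.ones((3, 3), dtype=bool)   # square structuring element
    dilated = ndimage.binary_dilation(binary_mask, structure=structure,
                                      iterations=dilation_iters)
    skeleton = thin(dilated)                  # hit-or-miss based thinning
    return dilated, skeleton
```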
§ METHODOLOGY
Sand mining on river sandbanks occurs within the proximity of a river.
This characteristic is thematically used in the proposed work to detect probable river sandbank regions with high mineral abundance.
The proposed technique consists of two segments as shown in Fig. <ref>.
First, the probable river regions are detected.
Next, regions with high mineral quantity including river sandbank regions are marked.
Further, the regions with high mineral abundance, which are close to a river, are separated from other high mineral abundance regions [A version of code and a few associated data can be accessed at https://github.com/MitJukherjee/River_Sandbank].
§.§ Detection of Probable River Regions
In this work, the modified normalized difference water index (MNDWI, (λ_Green - λ_SWIR-I)/(λ_Green + λ_SWIR-I))
is used to detect water bodies as it enhances open water features and suppresses noise from built-up areas <cit.>.
As a river region with a prominent river sandbank can become narrow, spectral indexes may fail to detect it in mid-resolution satellite images.
Hence, a morphological dilation is applied to fill such gaps along the trail of rivers over the outcome of MNDWI.
In dilation, a structural element is used to probe and expand the shapes in an image such that
areas, which are separated by space smaller than the structural element, get connected.
MNDWI detects different classes of water bodies including rivers and lakes.
Water bodies such as lakes and swamps rarely have high mineral content regions.
However, the surroundings of mine swamps contain different minerals.
Therefore, rivers need to be separated from other water bodies.
To separate rivers from other water bodies, the shapes of these water bodies are studied next.
Image thinning <cit.>, which is exerted to obtain the skeleton structures of different shapes by removing black foreground pixels layer by layer, is applied to the dilated outcome.
The thinned versions of rivers differ from those of other water bodies because rivers stretch over a much larger region.
To separate them, a connected component analysis is applied to the thinned image.
The larger connected components are detected as probable river regions.
Let these regions be denoted as R_w.
This segment is shown in Fig. <ref> using the grey boxes.
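A hedged sketch of this first segment, assembled from the operations described in the background section, is shown below. The MNDWI threshold, the structuring element, the number of dilation iterations, and the minimum skeleton size are illustrative values only, not the tuned parameters of the proposed method.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import thin

def probable_river_regions(green, swir1, mndwi_threshold=0.0,
                           dilation_iters=2, min_skeleton_pixels=500):
    """Return a binary mask R_w of probable river regions."""
    mndwi = (green - swir1) / (green + swir1 + 1e-12)
    water = mndwi > mndwi_threshold
    dilated = ndimage.binary_dilation(water, structure=np.ones((3, 3), bool),
                                      iterations=dilation_iters)
    skeleton = thin(dilated)
    labels, n = ndimage.label(skeleton, structure=np.ones((3, 3), int))
    sizes = ndimage.sum(skeleton, labels, range(1, n + 1))
    # Rivers stretch over large areas, so only long skeletons are kept.
    river_ids = [i + 1 for i, s in enumerate(sizes) if s >= min_skeleton_pixels]
    return np.isin(labels, river_ids)
```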
§.§ Detection of River Sandbank Regions
The coal mine index (CMI) enhances the concept of the clay mineral ratio, which detects regions with high mineral assemblages.
Lower values of CMI detect different land classes of surface coal mine regions <cit.>.
Other high mineral content regions, such as river sandbanks, can also be detected using a threshold over CMI <cit.>.
CMI < 0 has a higher probability of detecting surface coal mine regions <cit.>.
Hence, a thresholding operation is used
to detect probable river sandbank regions
as shown in Eqn <ref>.
f(ϕ) = Probable River Sandbank, if τ_2 < CMI < τ_1; Non-River Sandbank, otherwise.
Here, τ_1 is chosen empirically within [0.08, 0.11] by observing the distribution of CMI values in river sandbank regions over the seasons, and τ_2 is chosen as 0.
The detected regions contain river sandbanks along with a few other high mineral content regions that have similar CMI values.
Let these regions be denoted as R_m.
The regions of R_m are analyzed individually using connected component analysis as
shown in Fig. <ref> (blue boxes).
Over each connected component, a bounding box is computed with five pixels padding on each side to study the surrounding areas.
The surrounding area of a river sandbank region is a river, unlike other detected regions.
Therefore, each bounding box is checked for the presence of any region belonging to R_w.
The components whose bounding boxes contain such a region are considered the detected river sandbank regions.
The proposed technique can detect river sandbank regions using only the Green, SWIR-I, and SWIR-II bands and is therefore applicable to any satellite product that provides these three bands.
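The second segment can be sketched as follows: CMI is thresholded to obtain R_m, each connected component is enclosed in a bounding box padded by five pixels, and a component is accepted only if its padded box intersects the probable river mask R_w. The default τ_1 value lies within the range quoted above; the remaining details are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def river_sandbank_regions(swir1, swir2, r_w, tau1=0.1, tau2=0.0, pad=5):
    """Return a binary mask of detected river sandbank regions."""
    cmi = (swir1 - swir2) / (swir1 + swir2 + 1e-12)
    r_m = (cmi > tau2) & (cmi < tau1)          # high mineral content regions
    labels, n = ndimage.label(r_m, structure=np.ones((3, 3), int))
    detected = np.zeros_like(r_m, dtype=bool)
    for comp_id, comp_slice in zip(range(1, n + 1), ndimage.find_objects(labels)):
        rows, cols = comp_slice
        r0, r1 = max(rows.start - pad, 0), min(rows.stop + pad, r_m.shape[0])
        c0, c1 = max(cols.start - pad, 0), min(cols.stop + pad, r_m.shape[1])
        # Keep the component only if a probable river lies inside the padded box.
        if r_w[r0:r1, c0:c1].any():
            detected |= labels == comp_id
    return detected
```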
§ DATA AND STUDY AREA
In this work, the river sandbanks of the Damodar river beside the Jharia Coal Field (JCF) region are chosen as the study area as shown in Fig. <ref>.
The JCF is in the Dhanbad district of Jharkhand state in India between latitudes
23^∘38' N and 23^∘50' N and longitudes 86^∘07'E and 86^∘30'E.
This region has a humid subtropical climate with monsoon rain and
a vast variety of land classes such as reserve forests, dry croplands, freshwater bodies, riverbanks, rivers, grassland, and urban areas.
Land classes associated with coal mining regions, particularly coal overburden dumps have similar spectral properties with river sandbanks <cit.>.
They both have an abundance of minerals
and are difficult to separate using geophysical indexes.
Hence, this region has been chosen as the study area because it contains both prominent river sandbanks and coal mining regions.
Landsat 8 L1 images (Path 140, Rows 43 and 44 as per Landsat Reference System), which produce top of atmosphere (TOA) reflectance, in 2017 are used here.
Landsat 8 provides nine multi-spectral bands and two thermal bands.
These bands except the panchromatic band have 30 meter spatial resolution.
These images are radiometrically corrected, and orthorectified.
As multi-spectral images are unusable in the presence of clouds, images with less than 10% cloud cover are chosen.
Most of the rainfall in JCF is concentrated from June to the end of September.
Thus, the proposed work has considered the months from November to May for experimentation.
High-resolution Google Earth images are used here for validation.
Various land classes are marked as ground truth from these images by visual observation.
They are further co-registered with the Landsat 8 images using geo-referencing for accuracy computation.
A contrast-enhanced false-color image composed of near infra-red (NIR), red, and green bands of the study area is shown in Fig. <ref> (C).
Prominent river sandbanks can be visible on the Damodar river at the white bounding box of Fig. <ref> (C).
The scattered darker region above the white bounding box which forms a sickle-like shape is the JCF.
§ RESULTS AND DISCUSSION
Landsat 8 images of different months in 2017 are used here for generating the results and accuracy computation.
As multispectral images cannot penetrate clouds, months with heavy rain and more than 10% cloud cover over land are not considered here.
A consolidated sample outcome of different stages of the proposed algorithm is shown in Fig. <ref>.
Fig. <ref> (A) shows a high-resolution Google Earth image of the region of interest.
The proposed technique is an amalgamation of two separate processes.
First, probable river regions are isolated using the morphological patterns of the detected water bodies.
Thereafter, regions with high mineral contents are detected through the CMI values.
As MNDWI has been found beneficial for enhancing open water features, which dominate the study area, it is preferred here for water body detection <cit.>.
Fig. <ref> (B) shows the outcome of MNDWI over the region of interest.
Due to prominent river sandbanks, river trails may become so narrow that they cannot be resolved in mid-resolution satellite images.
The detected water regions then appear as isolated segments scattered along the river trails.
Detecting river sandbanks from the surroundings of such fragmented river paths becomes erroneous.
Hence, morphological dilation is applied over the outcome of MNDWI.
Mine swamps and rivers both have high mineral content in the surroundings.
Thus, image thinning is performed over the dilated outcome to study the shapes of different water bodies (Fig. <ref> (C)).
Zoomed versions of MNDWI and thinned output are also shown in Fig. <ref>.
All the water bodies are treated separately using connected component analysis.
The shape of a thinned river path is found to be larger than the shapes of smaller water bodies, lakes, and mine swamps.
Hence, larger thinned areas are considered as probable river regions (Fig. <ref> (D)).
Thinned structures of large lakes, dams, and reservoirs can be large enough to get falsely detected as a probable river by this approach.
However, they rarely have an abundance of mineral-enriched sand in their surroundings for sand mining.
Therefore, it is observed that such misclassified regions are isolated in the latter part of the proposed technique.
CMI values of the region of interest are shown in Fig. <ref> (E).
As discussed in <cit.>, river sandbank regions are observed along with different land classes, such as coal overburden dump regions at a certain stage of the hierarchical clustering tree.
Additionally, <cit.> has noticed a similar spectral pattern of river sandbank and coal overburden dump regions while detecting coal overburden dump.
<cit.> detects surface coal mine regions across the seasons where CMI values are below a threshold in the range [0, 0.02].
It has been observed that the range of CMI value in river sandbank regions is similar to a few land classes such as coal overburden dump regions as shown in Fig. <ref> (F).
However, these regions may or may not have water bodies in surrounding regions.
It can be observed from Fig. <ref> (C) and (F) that there are few water body regions whose surroundings have high mineral content and similar CMI values.
The distinguishing factor between these regions and prominent river sandbanks is a river trail.
All the detected regions of Fig. <ref> (F) are treated individually using connected component analysis.
A bounding box padded with five pixels on each side over each connected component is considered to check whether it contains a probable river region.
The final outcome of the proposed technique is shown in Fig. <ref> (G).
As observed from Fig. <ref>, the proposed technique can detect river sandbank regions with high mineral abundance.
Fig. <ref> (H) shows the outcome with Sentinel 2A images which have 20m resolution.
It is observed that the river sandbank regions can also be detected from Sentinel 2A images.
There are a few regions in the proximity of a river, which have high mineral quantity and are not river sandbanks.
The proposed technique can falsely detect them due to the use of a bounding box with five pixels padding (Fig <ref> (G), (H)).
It can be addressed with smaller padding in the future.
Soil has several lithological sub-classes.
In typical cases, the proportion of minerals available in sand is higher than in other lithological classes present near rivers, such as clay and silt.
CMI preserves the high mineral quantity regions.
A few works study the spectral reflectance properties of different soil classes.
As an example, spectral reflectances of different soil and soil mixtures in different spectrum are computed to check soil fertility in <cit.>.
SWIR-I and SWIR-II have wavelengths of 1.57-1.67μ m and 2.11-2.29μ m, respectively.
It can be observed from <cit.> that the difference in reflectance values between the SWIR-I and SWIR-II bands for different soils and soil mixtures is higher than for sandy soils.
Furthermore, it has been observed that near infra-red regions (0.7 - 2.5μ m) can differentiate between physical and chemical properties of soils <cit.>.
Hence, CMI is likely to separate high mineral content sand regions from other lithological classes of soil found near rivers.
More experimentation in such directions is treated as a future work.
The proposed algorithm uses different threshold values in different phases of its process flow.
Fig. <ref> shows the outcome of the proposed algorithm with varying threshold values at different phases.
Fig. <ref> (A) shows the outcome of the proposed algorithm in December 2017.
The probable river regions are detected after morphological operations and connected component analysis.
A typical river has a longer stretch than other water bodies.
Thus, connected components covering larger regions are selected as the probable river regions because of their longer lengths.
However, if connected components covering smaller regions are also preserved, shorter water bodies can be falsely detected as probable rivers.
As such components preserve smaller water bodies, different regions of R_m are falsely detected as river sandbank regions.
Hence, the number of false positive river sandbank regions increases, as shown in Fig. <ref> (B).
Higher values of MNDWI preserve water bodies.
Lower threshold values of MNDWI detect various regions that are not water bodies.
These falsely detected regions close to a water body may yield a large connected component.
This directly affects the proposed algorithm, as shown in Fig. <ref> (C).
Furthermore, if higher threshold values of MNDWI are considered, various portions of a river with river sandbanks are not detected.
Hence, a river appears as isolated segments of trails rather than a continuous one.
The connected component analysis discards many such isolated segments.
In such scenarios, several true river sandbank regions are not detected, as shown in Fig. <ref> (D).
Connected component analysis is also applied over the CMI detected regions.
These regions are detected using two threshold values τ_1 and τ_2.
Here, τ_1 is chosen empirically within [0.08-0.11] and τ_2 is chosen as 0.
As it has been observed that CMI<0 has higher probability of detecting mining regions <cit.>, τ_2 is chosen as 0.
The value of τ_1 is chosen empirically by observing the distribution of river sandbank regions over the season.
If τ_1>0.11 is selected, other regions are also preserved.
These regions may get connected with each other and form a larger connected component region.
If τ_2 < 0 is selected, various other regions with high mineral assemblages are detected and form a larger connected component.
Such larger connected components have bigger bounding boxes, which cover substantial regions.
These bounding boxes may contain a probable river even when the river is not in close proximity to the high mineral content region, and the proposed algorithm then reports a detection.
Hence, as shown in Fig. <ref> (E) and (F), different false positive regions are also detected.
§.§ Validation
Different land classes are marked using visual interpretation from high-resolution Google Earth images.
These images are used for validation and accuracy computation.
Primary land classes of the study area, which are considered, are vegetation, water body, bare land, land classes related to surface coal mining, urban regions, and river sandbank.
These regions are marked and extracted from Landsat 8 and Google Earth images using the open source QGIS software.
Water bodies that have nearby high mineral content regions are considered.
Mean reflectance values of these ground truth regions over the different spectrum of Landsat 8 are shown in Fig. <ref>.
100 random samples of these ground truth images are analyzed using a t-test with the null hypothesis of μ_RiverSandbank = μ_OtherLandClass.
Here, μ_OtherLandClass, and μ_RiverSandbank are defined as the mean of CMI values of different other ground truth land classes and river sandbank regions, respectively.
Vegetation, water bodies, urban regions, coal mining regions, and coal overburden dump regions are considered as other land classes.
S_RiverSandbank ≠ S_OtherLandClass is considered, where S denotes the variance, i.e., the variances of the two classes are not assumed equal (Welch's t-test).
The alternative hypothesis is defined as μ_RiverSandbank≠μ_OtherLandClass.
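A t-test with unequal variances corresponds to Welch's t-test and can be reproduced with scipy.stats.ttest_ind, as sketched below; the synthetic arrays merely stand in for the 100 CMI samples per land class and do not reflect the actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-ins for CMI samples of two land classes (100 samples each).
cmi_river_sandbank = rng.normal(loc=0.06, scale=0.02, size=100)
cmi_other_class = rng.normal(loc=0.02, scale=0.03, size=100)

# Two-sided t-test without assuming equal variances (Welch's t-test).
t_stat, p_value = stats.ttest_ind(cmi_river_sandbank, cmi_other_class,
                                  equal_var=False)
reject_null = p_value < 0.05   # illustrative significance level
```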
The t statistics, degree of freedom, and corresponding p values over the seasons are shown in Table <ref>.
As observed in Table <ref>, the null hypothesis can be rejected for all the land classes over the season except coal overburden regions.
Hence, CMI can differentiate river sandbank regions from other regions, except coal overburden dumps, across the seasons.
Similar observations can be found in Fig. <ref>.
The horizontal, and vertical axes of Fig. <ref> represent band numbers of Landsat 8, and reflectance values, respectively.
It can be observed that coal overburden dumps have similar spectral characteristics with river sandbank regions (Fig. <ref>).
They show near similar values in the Green band (band 3).
However, coal overburden dumps provide slightly lower values in NIR (band 5), SWIR-I (band 6), and SWIR-II (band 7) bands than river sandbank (Fig. <ref>).
The difference in the NIR band is higher than the other two bands (Fig. <ref>).
This corroborates the findings in <cit.>.
Therefore, as observed in Fig. <ref>, although water indexes can separate coal overburden dumps from river sandbank regions, other isolated land classes get detected along with river sandbank regions.
Hence, in this paper, the presence of rivers in the nearby regions is preferred rather than using water indexes exclusively.
Most of the roads found in this region are insignificant as per the spatial resolution of Landsat 8.
The proposed technique may be found erroneous when there are prominent roads and MNDWI fails to separate them from water bodies.
Thus, the spectral responses of roads are computed (Fig. <ref>).
It has been observed that water bodies have significantly lower SWIR-I values than NIR, whereas roads may not follow this trend.
This characteristic is also observed in <cit.> while detecting roads.
The normalized difference moisture index (NDMI) is defined as (λ_NIR - λ_SWIR-I)/(λ_NIR + λ_SWIR-I).
Next, the mean NDMI value (μ_NDMI) of each connected component is checked.
If μ_NDMI is not greater than ξ, that connected component is discarded.
Here, ξ is chosen empirically as 0.15 as shown in Fig. <ref>.
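A minimal sketch of this moisture-based filtering is given below; ξ = 0.15 follows the empirical choice above, while the per-component handling mirrors the connected component analysis used earlier and the variable names are assumptions.

```python
import numpy as np

def filter_by_ndmi(component_labels, nir, swir1, xi=0.15):
    """Discard connected components whose mean NDMI does not exceed xi,
    which suppresses roads misclassified as water bodies."""
    ndmi = (nir - swir1) / (nir + swir1 + 1e-12)
    kept = np.zeros_like(component_labels, dtype=bool)
    for comp_id in range(1, int(component_labels.max()) + 1):
        comp = component_labels == comp_id
        if comp.any() and ndmi[comp].mean() > xi:
            kept |= comp
    return kept
```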
Fig. <ref> shows outcome of NDMI with different threshold values at the region of interest.
As shown in Fig. <ref> (C), all the water bodies are primarily detected when the threshold value is 0.2.
However, a few portions of the river network remain undetected.
When the threshold value is chosen as 0.1, various other regions, including a few road networks, are also detected, as shown in Fig. <ref> (A).
Although different small regions are detected along with the water bodies, most of the river network is detected when the threshold value is chosen as 0.15 (Fig. <ref> (B)).
Thus, here, ξ is chosen empirically as 0.15.
Thorough experimentation regarding this is treated as a future work.
The separation of water bodies and road network can be improved by any robust technique to detect roads from satellite images.
Different supervised machine learning based techniques can be a probable solution.
§.§ Performance Analysis
These ground truth regions are utilised for the accuracy computation.
The proposed technique has average precision, recall, F_1 score, and accuracy of 85.47%, 73.5%, 78.91%, and 90.75%, respectively over the seasons as shown in Table <ref>.
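For reference, the reported metrics follow the standard confusion-matrix definitions; a small sketch for binary prediction and ground truth masks is shown below.

```python
import numpy as np

def classification_metrics(prediction, ground_truth):
    """Pixel-wise precision, recall, F1 score, and accuracy for binary masks."""
    tp = np.sum(prediction & ground_truth)
    fp = np.sum(prediction & ~ground_truth)
    fn = np.sum(~prediction & ground_truth)
    tn = np.sum(~prediction & ~ground_truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy
```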
Fig. <ref> shows an outcome of the proposed technique in a smaller region.
Fig. <ref> (A), and (B) show the google earth image and the outcome of the proposed method, respectively.
Fig. <ref> (C) shows the regions which are falsely detected as river sandbank regions.
It can be observed from Fig. <ref> that the proposed technique can detect the river sandbank regions.
The increased water content in rainy seasons directly affects the river sandbank regions as shown in Fig. <ref> (D-F).
Fig. <ref> (D), (E), and (F) show the outcomes of the proposed technique before, just after, and after rainy seasons, respectively, in 2021.
Due to the high water content of the river, it can be observed from Fig. <ref> (D-F) that the shape of the river sandbank regions has been changed and the proposed technique can detect them.
As observed in Fig. <ref>, and Table <ref>, the proposed technique can identify the river sandbank regions with good precision.
The segments of outcomes in Sentinel 2A, and Landsat 8 data are shown in Fig. <ref>(G), and (H), respectively.
The proposed technique shows a higher precision of 88.79% and a higher recall of 78.68% when Sentinel 2A is used.
As the boundary of the detected regions is smoother in Sentinel images (Fig. <ref> (G)), the accuracy of the proposed technique may improve if finer spatial resolution is used.
It can be observed from Table <ref> that the proposed technique has comparatively low accuracy and F_1 score in March and May than other months.
CMI has been used here to detect high mineral content regions.
It has been observed that CMI shows comparatively lower accuracy in summer than in other seasons <cit.>.
In this work, CMI and MNDWI have been used.
Both of them use short wave infra-red bands, which have also been used to compute moisture content in the past.
As summer months are dry, the variance in soil moisture may have affected the performance <cit.>.
Thus, it may have an adverse effect on the accuracy of the proposed algorithm in summer.
Further experimentation in these regards is treated as a future work.
The proposed technique has been applied in different other regions.
Fig. <ref> shows the outcome of the proposed technique over the Brahmaputra river close to Guwahati, Assam, India.
A false color representation of the region of interest, detected water bodies, and detected river sandbank regions are shown in Fig. <ref> (A), (B), and (C), respectively.
The Brahmaputra river shows significant sediment discharge and has an average width of 5.46 km <cit.>.
The region has a tropical monsoon rainforest climate.
It can be observed from Fig. <ref> that the proposed technique can detect multiple, complex and variable length river sandbank regions in a large river system.
The proposed technique has also been tested over the Damodar and Dwarakeswar rivers near Burdwan, West Bengal, India, as shown in Fig. <ref>.
Damodar and Dwarakeswar basins are widely used for sand mining.
A false color representation of the region of interest, detected water bodies, and detected river sandbank regions are shown in Fig. <ref> (A), (B), and (C), respectively.
It can be seen from Fig. <ref> that the proposed technique can detect river sandbank regions on both rivers.
Therefore, the proposed technique can detect river sandbank regions on different rivers together.
However, a few regions in the Dwarakeswar basin are falsely detected by the proposed technique.
As the Dwarakeswar river has a narrow width and high sinuosity, a few nearby regions may be falsely detected due to the bounding box computed over each connected component.
More investigation with such a narrow river is considered as a future work.
A multi-modal analysis composed of multi-spectral and InSAR images is employed to detect ex-sand mining area <cit.>.
Integration of differential global positioning systems and UAV data is used to quantify sand quarry volumes <cit.>.
As InSAR and DEM data require multiple stages of pre-processing and are not acquired in a regular interval, in most cases,
Landsat 8 has been used here to detect river sandbank regions which are highly dynamic and need to be observed in regular intervals.
Additionally, different multimodal techniques require the availability of such satellite commodities, which is difficult for several regions.
In semi-supervised or supervised techniques, a labeled dataset is required, which may be challenging to obtain for such small land classes.
As an example, deep learning based techniques are employed in <cit.> to detect mining and tailing dams.
In <cit.>, different land classes in mining regions are detected using a supervised support vector machine.
Such techniques have the prerequisite of a labeled dataset for training, which may be difficult to prepare for a finer land class.
Moreover, such techniques do not emphasize the distinguishing spectral characteristics of a land class, viz. river sandbanks.
The proposed method provides a novel technique to detect river sandbank
regions in the presence of other regions containing ample amounts of minerals using multi-spectral images without any labeled dataset.
Although some of the river sandbanks can be found visually separable, the abundance of minerals is difficult to comprehend using Red-Green-Blue bands, which the proposed technique addresses.
Landsat 8 provides mid-resolution satellite images with a spatial resolution of 30m.
It is difficult to detect those river sandbank regions, which are smaller than the spatial resolution of Landsat 8 satellite images.
However, Landsat images have been used in the literature to detect river sandbank regions.
As an example, in <cit.>, deformation of ex-mining areas of sand is detected using Landsat 5 images (spatial resolution of 30m) and DEM data derived from phased array type L-band synthetic aperture radar (PALSAR) of advanced land observing satellite (ALOS).
Green and Red channels of Landsat 8 images are used to quantify suspended
sediment concentration in Red River <cit.>.
The proposed technique has also been found applicable to Sentinel 2A images.
Detection of sandy regions, which have ample amounts of minerals and are suitable for sand mining, is challenging.
Landsat 8 provides various bands that are sensitive to high mineral assemblages.
Hence, in this work, Landsat 8 images are considered.
River sandbank regions are finer land classes and hence, they are difficult to detect.
Generating manually labeled dataset of such a finer land class is laborious and error-prone.
These land classes are dynamic and their spectral trends can be affected by the season.
However, the proposed work provides a novel technique to detect river sandbank regions over the seasons without a labeled dataset.
Thus, the proposed technique can have wider applications irrespective of season and the availability of a labeled dataset.
A time-series analysis using the proposed technique can be used to quantify the changes of river sandbank and their probable effect on the river morphology.
The proposed technique can be useful for identifying illegal sand mining, which has drastic effects on river health.
The inclusion of different characteristics of river trails, such as sinuosity, and DEM can boost the performance of river trail detection, which is treated as one of the future directions of this work.
Here, results are generated using TOA reflectance from L1 data.
CMI and MNDWI, both are found to be robust to surface reflectance <cit.>.
Further experimentation with L2 data and other climatic regions is considered as future work.
§ CONCLUSION
River sandbanks are one of the primary sources for sand mining and have near similar characteristics to
subclasses of bare soil regions, which makes them difficult to separate.
The proposed work presents a novel technique to detect river sandbank regions for sand mining using the presence of a high quantity of minerals and association to rivers, without any labeled dataset, and using multi-spectral images exclusively.
Initially, probable river regions are detected.
Next, high mineral content regions are considered.
Regions, which are close to the proximity of rivers and have a high abundance of minerals are detected as river sandbank regions.
The technique may be misled by the presence of large water bodies with high mineral content, which is treated as a future direction of this work.
However, such regions typically contain a smaller amount of minerals than rivers.
Therefore, the final outcome is only mildly affected by these erroneous regions.
It provides average accuracy, precision, and recall of 90.75%, 85.47%, and 73.5%, respectively over the seasons.
As the proposed technique is dependent on the accuracy of CMI and MNDWI, it can be further improved by enhancing their performances, which is considered a future work.
The proposed method has significant potential in different future applications, including the detection, classification, and monitoring of river sandbanks and river morphology, monitoring of riverbed stability, early prediction of river course change, and others.
It has also several social and industrial applications in the mining industry, resource management, and detection of illegal sand mining.
First Author is currently an assistant professor in the Computer Science and Engineering department of Birla Institute of Technology, Mesra, India. He received his B.Tech degree in Computer Science and Engineering from West Bengal University of Technology. He received his MS degree from the School of Information Technology of the Indian Institute of Technology Kharagpur, India, in 2014. He completed his doctoral study in remote sensing at the Indian Institute of Technology Kharagpur in 2020. He has authored more than fifteen journal and conference papers. His current research interests include multi-spectral imaging, image processing, and machine learning.
Biographies and photographs of the other authors are not available.
|
http://arxiv.org/abs/2307.00446v1
|
20230702004820
|
Spin-selective magneto-conductivity in WSe$_2$
|
[
"En-Min Shih",
"Qianhui Shi",
"Daniel Rhodes",
"Bumho Kim",
"Kenji Watanabe",
"Takashi Taniguchi",
"Kun Yang",
"James Hone",
"Cory R. Dean"
] |
cond-mat.mes-hall
|
[
"cond-mat.mes-hall"
] |
^1Department of Physics, Columbia University, New York, NY, USA
^2Department of Mechanical Engineering, Columbia University, New York, NY, USA
^3Research Center for Electronic and Optical Materials, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
^4Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
^5Department of Physics and National High Magnetic Field Laboratory, Florida State University, Tallahassee, Florida 32306, USA
^αPresent address: Physical Measurement Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
^βPresent address: Department of Chemistry and Biochemistry, University of Maryland, College Park, MD 10 20742, USA
^γPresent address: Department of Physics and Astronomy, University of California, Los Angeles, CA, USA
Spin-selective magneto-conductivity in WSe_2
Cory R. Dean^1
August 1, 2023
============================================
Material systems that exhibit tunable spin-selective conductivity are key components of spintronic technologies. Here we demonstrate a novel type of spin-selective transport, based on the unusual Landau level (LL) sequence observed in bilayer WSe_2 under large applied magnetic fields. We find that the conductivity depends strongly on the relative iso-spin ordering between conducting electrons in a partially filled LL and the localized electrons of lower energy filled LLs, with conductivity observed to be almost completely suppressed when the spin-ratio and field-tuned Coulomb energy exceed a critical threshold. Switching between “on/off” states is achievable through either modulation of the external magnetic or electric fields, with many-body interaction driving a collective switching mechanism. In contrast to magnetoresistive heterostructures, this system achieves electrically tunable spin filtering within a single material, driven by interaction between free and localized spins residing in energy-separated spin/valley polarized bands. Similar spin-selective conductivity may be realizable in multi-flat band systems at zero magnetic field.
Spin-dependent transport effects, such as giant magnetoresistance (GMR), play a fundamental role in spintronics. In GMR structures<cit.>, electrons in a non-magnetic conduction layer scatter off moments in the surrounding magnetic layers due to exchange interaction. The scattering rate is typically enhanced when the conducting electron and the moment have different spins, so that a change in the relative polarization under applied fields causes a large change in resistance<cit.>. This effect is exploited in functional devices such as spin valves and magnetic field sensors, and plays an important role in current data storage technologies <cit.>.
Evidence of a homologous effect has been observed in the quantum Hall regime of certain large-mass two-dimensional quantum wells, but remains far less explored.
In both AlAs and ZnO, a combination of perpendicular and parallel magnetic fields can induce a spin splitting that exceeds the cyclotron gap, E_z/E_cylc.>1. This results in spin-polarized bands, with multiple lower-energy, filled, Landau levels (LLs) polarized with one spin and the LL at the Fermi energy carrying a different spin. It appears that the conducting electrons interact with the localized spins of the filled states such that the longitudinal conductivity is diminished (enhanced) when the Fermi energy is in a spin minority (majority) LL<cit.>. However, the observed contrast between spin-minority and majority conductivity has been comparatively modest and the precise mechanism has remained unresolved. Nonetheless, these results hint at the possibility of realizing a novel kind of spin-selective transport within a single material with energy-separated spin-polarized bands, rather than a heterostructure with spatially separated magnetic domains.
Here we investigate hole transport in high mobility bilayer WSe_2 in the quantum Hall regime. WSe_2 is a van der Waals semiconductor that, like graphene, can be exfoliated to the atomically thin limit.
Unlike graphene, WSe_2 possesses an exceptionally large spin susceptibility<cit.> that leads to an atypical quantum Hall scenario where the Zeeman energy exceeds the cyclotron energy under purely perpendicular fields<cit.>.
Together with spin-valley locking, this gives rise again to a highly polarized LL sequence, where a large number of the lowest energy bands all share the same iso-spin polarization <cit.> (Fig. 1a, b). We find that WSe_2 exhibits a spin-dependent conductance similar to that in polarized AlAs and ZnO, but with a much more dramatic effect. The conductance within partially filled majority- and minority-spin LLs shows opposite temperature dependence, with the minority-spin LL conductance approaching zero at low temperature and high magnetic field. This observation identifies a regime of essentially perfect spin selectivity arising from a complete localization behaviour for the minority carriers.
We suggest that the spin dependent localization may be understood from a polaron-like model, in which a minority spin carrier displaces the background incompressible majority spin carriers and becomes self-localized. By studying this effect in bilayer WSe_2, we show that the spin-dependent transport can be generalized to real spin or valley pseudospin. Finally, we demonstrate that switching between the high/low conductance states can be initiated within a single, partially filled, LL when Coulomb interaction drives a collective spin flip. These results provide a prototypical demonstration of electric-field tunable spin-dependent transport in an all-electron system, where Coulomb interactions and exchange effects play central roles.
We measured magneto-transport in dual-gated bilayer WSe_2 fabricated with pre-patterned Pt contact electrodes<cit.> (Fig. 1c).
The bilayer WSe_2 was exfoliated from a flux-grown single crystal, with charged defect density lower than 10^11 cm^-2<cit.>.
Using the van der Waals stacking approach<cit.> together with standard nanofabrication techniques, we implemented a novel device geometry that supports measurements of both edge and bulk transport in the same device (Fig. 1d). Six contacts arranged in a circular pattern enable four-terminal longitudinal (R_xx) and Hall resistance (R_xy) measurements in a “van der Pauw geometry” (Fig. 1d bottom). Additionally, a center contact enables measurement of the two-terminal conductance between the center and edge contacts, which probes the bulk response in the Corbino geometry (Fig. 1d top). Dual gates provide independent control of the carrier density of the two layers <cit.>, allowing us to study the device response with either a single layer or both layers populated. We restrict our measurement here to the hole band response only.
We first focus on the regime with only one layer populated, where the device effectively behaves like an isolated monolayer. Figure 1e shows R_xx and R_xy, measured in the van der Pauw geometry as a function of applied perpendicular field B, with hole density tuned to n_hole=3.64×10^12 cm^-2, and transverse displacement field, D=1.14 V/nm. The Hall mobility, measured in the low field regime, is μ_H≈ 30,000 cm^2/Vs, which is the highest reported for any semiconductor TMD<cit.>.
Quantum oscillations appear below 2 T, giving an approximate estimate for the Landau level broadening of 5.92 K, which is less than has been reported for high mobility graphene<cit.>. In the high field regime, fully quantized Hall plateaus concomitant with zero longitudinal resistance are observed at integer fillings, further confirming the high quality of the device.
The most dramatic feature seen in the magnetoresistance (Fig. 1e) is an alternating high/low longitudinal resistivity between neighbouring LLs. Previous studies of WSe_2 established the LL spin splitting to be density- and field-dependent, giving rise to the approximate ladder of states shown in Fig. 1a<cit.>. At magnetic fields on the order of 15 T, the lowest 6 LLs are fully iso-spin polarized, with higher fillings corresponding to alternating majority/minority iso-spin. We find that the resistance modulation correlates with the relative spin order at the Fermi energy, with high (low) R_xx coinciding with the spin majority (minority) LLs. For example, the peak resistance at ν=6.8, which corresponds to 6 filled polarized levels and one partially filled anti-polarized LL (Fig. 1b), is less than 1/5 the value at partial filling of neighbouring, spin-majority, LLs. The general influence of the spin state on transport is unambiguously demonstrated in the plot of R_xx versus field and density shown in Fig. 1f. The R_xx value periodically switches between high value (red) and low value (blue), matching the spin transition map shown in Fig. 1g.
In the high-field, low-density regime, the longitudinal conductivity, σ_xx, is approximately proportional to the longitudinal resistivity, ρ_xx, such that low resistance indicates low conductance. The Corbino geometry is used to probe the bulk conductance directly, i.e. independent of any potential edge effects. Figure 2a shows the two-terminal conductance in the Corbino geometry, acquired at B = 15 T and varying temperature.
At T = 0.8K (black line), majority-spin LLs exhibit high conductance and minority spins exhibit low conductance, consistent with the van der Pauw resistivity measurements. The temperature dependence likewise shows two behaviours (Fig. 2b). For majority-spin fillings, the conductance exhibits metallic-like response, i.e. increases with decreasing temperature, as is typical for normal QH systems at partial filling. For the minority spins, the conductance decreases with decreasing temperature, indicating an insulating-like bulk response consistent with localization of the carriers.
The localization strength varies with both total filling fraction and magnetic field. In the Corbino measurement at B=15 T (Fig. 2a), the spin-minority conductance is minimal at filling fraction 6.5, corresponding to a single anti-polarized LL in a background of fully polarized LLs, and then increases monotonically with increasing LL index. Figure 2c shows the effect of varying the magnetic field, measured in the van der Pauw geometry. As the magnetic field is increased from 24 T, the state at filling fraction 5.5 undergoes a transition from spin majority to spin minority, evidenced by a transition of R_xx from a large value (dashed black line) to a small value (solid black line). By around 28 T, R_xx completely vanishes across the full LL, within measurement resolution. Simultaneously, the ν=5 Hall plateau extends across this same filling range. This remarkable observation suggests that the spin-minority LL is fully localized within the entire LL.
Both the filling fraction and magnetic field dependence of the minority carrier localization can be understood from the spin-dependent exchange interaction between the mobile and localized carriers. Mobile carriers interact weakly (strongly) with localized carriers that share the same (opposite) iso-spin polarization. The effect on transport is illustrated schematically in Fig. 2d, e. Mobile carriers in a partially-filled spin-majority LL scatter only weakly from the localized carriers (Fig. 2d). For spin-minority carriers, the strong Coulomb interaction can distort the immobile background charges, such that the free carrier becomes effectively more massive and less conductive (Fig. 2e).
This is analogous to a polaron, but in an all-electron system with the immobile carriers in the filled LLs playing the equivalent role of a crystal lattice. Since in the QHE regime the Coulomb energy scales with the magnetic field, E_c=e^2/(ϵ l_B)∝√(B_⊥), the minority carriers should become increasingly localized with increasing field, consistent with our observation. On the other hand, as the filling fraction is increased at fixed magnetic field, the net polarization of the carriers reduces towards zero, reducing the difference between majority and minority spin mobilities, also consistent with observation. The picture identified here may also explain why the observed effect is magnified in WSe_2 compared with previous studies. The influence of the filled LLs can be characterized by the LL mixing parameter<cit.>, κ=E_Coulomb/E_cycl., i.e. the ratio of the Coulomb energy to the cyclotron gap. For comparison we note that κ=α/√(B), with α=2.6, 16.7, 22.5, 31 in GaAs, ZnO, AlAs and WSe_2, respectively. The unusually large κ, together with the large ratio of E_z/E_cycl., creates a unique scenario in WSe_2 with simultaneously larger polarization and larger mixing than in previously studied systems.
The screening capability depends on the partial filling factor in this strongly-interacting system.
The polarizability Π of carriers in the filled LLs peaks near a momentum q≈1/l, where l is the magnetic length<cit.>.
Therefore, the interaction between minority carriers and the background majority carriers is optimally screened when the minority-spin charges are localized at a length scale l. As the filling of minority spin carriers increases, their wavefunctions overlap and the potential from them becomes more uniform and no longer localizes around l. Therefore, to minimize the interaction energy, the minority-spin carriers further localize themselves by mixing with higher LLs.
The LL mixing in turn pushes the extended states of the valence LL higher in energy <cit.>; this process stops only when the majority spin electrons start to enter the valence LL. This phenomenon can explain our observation in Fig. 2(c) that conduction due to extended states is missing in a large range of filling factor 5 < ν < 6, with R_xx remaining at zero and R_xy at the plateau value.
Both the real spin and the valley pseudospin determine the exchange interaction. In bilayer WSe_2, the strong spin-orbit coupling and the stacking order result in coupled spin, valley and layer degrees of freedom. A total of four flavors are relevant: (↑, K), (↓, K') localized in the top layer, and (↑,K'), (↓,K) localized in the bottom layer <cit.>.
Therefore, the valley degree of freedom can be controlled by a displacement field, D, which moves carriers from one layer to another.
In Fig. 3a we plot R_xx measured at 15 T versus D=(V_T-V_B)/2ϵ_0 and the total filling factor ν_tot=hC(V_T+V_B)/(e^2B), where V_T and V_B are the top gate and bottom gate voltage biases, and C is the geometric capacitance.
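For orientation, the filling factor formula can be evaluated directly from the gate voltages; the sketch below uses physical constants from scipy and treats the geometric capacitance per unit area and the example numbers as purely illustrative inputs, not the parameters of the measured device.

```python
from scipy.constants import h, e

def total_filling_factor(v_top, v_bottom, capacitance_per_area, b_field):
    """nu_tot = h * C * (V_T + V_B) / (e^2 * B), with C the geometric
    capacitance per unit area (F/m^2) and B the perpendicular field (T)."""
    return h * capacitance_per_area * (v_top + v_bottom) / (e**2 * b_field)

# Illustrative numbers only: C ~ 1.2e-3 F/m^2, B = 15 T.
nu_tot = total_filling_factor(v_top=2.0, v_bottom=2.0,
                              capacitance_per_area=1.2e-3, b_field=15.0)
```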
The resistance shows a checkerboard-like pattern as charge is transferred between different LLs localized in the two layers <cit.>.
For ν < 6, all LLs have the same spin, but charges in the top layer reside in the K valley and those in the bottom layer are in the K' valley.
Examples of the change in LL configuration versus D for fixed filling fraction, ν=5.5, are shown in Fig. 3c-e. At large negative D, the free and localized electrons share the same spin and valley. As |D| is decreased from the fully valley-polarized regime, the Fermi level switches between valley K and K'.
Figure 3f shows a plot of R_xx versus D at ν = 5.5, corresponding to a linecut along the dashed line in Fig. 3a. R_xx oscillates as the Fermi level switches between majority (large R_xx) and minority (small R_xx) valley spin. The resistance is approximately symmetric about D=0, confirming that the value of R_xx is dominated by the relative net iso-spin order, rather than other features such as the spin or valley flavour. To quantify how R_xx varies with the relative iso-spins we introduce the parameter,
p=N_F/N_tot, where N_tot is the total number of filled LLs, and N_F is the number of filled LLs that have the same flavor as the LL at the Fermi energy. We label the peaks and minima in Fig. 3f by these p values and see that R_xx is indeed maximized (minimized) when p is large (small).
Figure 3g plots the evolution of the normalized longitudinal resistance R_xx/R_xx(p=1) versus p for three filling factors 3.5, 4.5 and 5.5. R_xx/R_xx(p=1) shows an approximately linear increase with p confirming the longitudinal resistance is proportional to the net population difference between the free carrier flavor and the localized carrier flavors. In Fig. 3b, we colour distinct regions in the D -ν phase space by the magnitude of p.
For ν > 6, p is calculated as the population of the valence flavor among all four flavors.
The consistency of the polarization color map with the color map in Fig. 3a again indicates correlation between p and the resistance.
These observations are consistent with our picture that localization of the mobile carriers at the Fermi energy is driven by interaction with incompressible carriers of a different flavor: the greater the population of different-flavored incompressible carriers, the lower the conductance.
Finally, we discuss the switching between majority/minority flavors within a single partially filled LL. Figure 4a plots the conductance versus magnetic field and filling factor around an iso-spin transition due to a LL crossing. As before, high (low) conductance correlates with iso-spin majority (minority) free carriers. In detail we see that the conductance transition from high to low value onsets at partial fillings across a well defined and narrow boundary. This suggests a collective transition that reorders the spin at the Fermi level (Fig. 4b)<cit.>. Figure 4c plots the conductance at B = 13.4 T versus the filling factor (white dashed line in Fig. 4a) and temperature. The temperature dependence of the conductance at two filling factors (ν = 6.55 and 6.77) is compared in Fig. 4d. The temperature dependence is effectively identical down to approximately 5 K, below which the trends suddenly diverge, with ν = 6.55 dropping sharply towards zero (consistent with minority carrier localization) while ν = 6.77 shows a metallic-like response. The sudden change in behaviour below an apparent critical temperature further supports the notion of a many-body driven spin transition when the temperature drops below a characteristic interaction energy scale.
The collective flip of many spins with increasing filling factor is evocative of LL levitation driven by Coulomb interactions<cit.>. With increasing partial filling factor in a minority-spin LL, the strong Coulomb interaction between the minority-spin carriers at the Fermi level and the majority-spin carriers in the background (and the urge to optimally screen this interactions) effectively pushes the valence LL up. This LL levitation with increasing partial filling explains the missing extended-states at high magnetic field (Fig. 2c).
Finally, when the filling factor is tuned by sweeping the magnetic field at a fixed density, a large conductance spike is observed at the transition. This typically suggests the formation of domain walls <cit.> between the two different spin states indicating a first-order transition (Fig. 4e).
In summary, we have shown strong spin-dependent transport in bilayer WSe_2. We demonstrate that a spin/valley-polarized band arrangement in energy space can give strong spin/valley-selective transport, achieving perfect selectivity in the extreme limit. The ability to engineer polarized flat bands in van der Waals heterostructures, such as through moiré patterning, might enable a similar electric-field driven iso-spin selectivity under zero magnetic fields. This type of bandstructure driven GMR could pave the way for next-generation spintronic devices.
We thank Luis Balicas, William Coniglio and Bobby Pullum for help with experiments at the National High Magnetic Field Lab. This research is primarily supported by US Department of Energy (DE-SC0016703). Synthesis of 2 (D.R.,B.K.,K.B.) was supported by the Columbia University Materials Science and Engineering Research Center (MRSEC), through NSF grants DMR-1420634 and DMR-2011738. The work of KY was supported by the National Science Foundation Grant No. DMR-1932796. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1644779 and the State of Florida. K.W. and T.T. acknowledge support from the JSPS KAKENHI (Grant Numbers 20H00354, 21H05233 and 23H02052) and World Premier International Research Center Initiative (WPI), MEXT, Japan.
§ SUPPLEMENTARY MATERIALS
Spin-selective magneto-conductivity in WSe_2
E. Shih, Q. Shi, D. Rhodes, B. Kim, K. Watanabe, T. Taniguchi, K. Yang, J. Hone, C. R. Dean
This PDF file includes:
Supplementary Text
Materials and Methods
§ S1. DEVICE FABRICATION AND GEOMETRY
The cross-section of the bilayer WSe_2 device described in the main text is shown in Fig. S1a. The full stack is made of two pieces: the bottom stack (Fig. S1a blue dashed block and photo in Fig. S1c) is made of BN-encapsulated graphite followed by deposition of prepatterned Cr/Pt contacts (2nm/20nm), and the top stack (Fig. S1a red dashed block and photo in Fig. S1b) is made by picking up layers in the order BN/Graphite/BN/2L WSe_2. The top stack is then put on the bottom stack with the 2L WSe_2 contacting the Pt (Fig. S1d). Next, a partial etching step shapes the top-gate, which defines the channel area (Fig. S1e). Then, another BN is put on top of the stack to avoid shorting the top-gate to the later evaporated leads (Fig. S1f). In this design, leads can be placed across the channel, making the Corbino geometry possible. Finally, holes are etched down to the Pt prepattern (Fig. S1g), followed by deposition of the channel leads and the contact-gate (Fig. S1h). The device is designed so that the contact-gate can turn on the contact region (with a large negative voltage), and the top/bottom gate voltages can be positive or negative; therefore the full density and displacement field phase space can be mapped out. In device a (Fig. S1h), because the contact gate shorts all the outer leads together, we have to use a negative top-gate voltage to turn on the contact region, which leaves the top layer always on. In device b (Fig. S2), the contact channel does not short the leads; therefore, we can map out the full phase space as shown in main text Fig. 3.
§ S2. LANDAU LEVEL STRUCTURE IN BILAYER WSE_2
The four iso-spin flavors of bilayer WSe_2 are protected by the spin-orbit coupling and inversion symmetry of the crystal (for more discussion see the SI of Ref. <cit.>). Under magnetic field, each flavor band splits into Landau levels, and their relative ordering depends on the Zeeman energy (E_z) and the electric field energy across the layers (E_L), as schematically shown in Fig. S3.
§ S3. COMPARING DIFFERENT 2D CHARGE CARRIER SYSTEMS
§ S4. LANDAU LEVEL LEVITATION
The screening capability depends on the partial filling factor in this strongly-interacting system.
The polarizability Π of carriers in the filled LLs peaks near a momentum q≈1/l, where l is the magnetic length<cit.>.
Therefore, the interaction between minority carriers and the background majority carriers is optimally screened when the minority-spin charges are localized at a length scale l. As the filling of minority spin carriers increases, their wavefunctions overlap and the potential from them becomes more uniform and no longer localizes around l. Therefore, to minimize the interaction energy, the minority-spin carriers further localize themselves by mixing with higher LLs.
The LL mixing in turn pushes the extended states of the valence LL higher in energy <cit.>; this process stops only when the majority spin electrons start to enter the valence LL. This phenomenon can explain our observation in main text Fig. 2(c) that conduction due to extended states is missing in a large range of filling factor 5 < ν < 6, with R_xx remaining at zero and R_xy at the plateau value.
§ S5. ELECTRICAL CHARACTERIZATION
|
http://arxiv.org/abs/2307.01484v1
|
20230704052506
|
Robust finite element methods and solvers for the Biot--Brinkman equations in vorticity form
|
[
"Ruben Caraballo",
"Chansophea Wathanak In",
"Alberto F. Martín",
"Ricardo Ruiz-Baier"
] |
math.NA
|
[
"math.NA",
"cs.NA",
"65N30, 65N15, 76S05, 35Q74"
] |
Robust finite element methods and solvers for the Biot–Brinkman equations in vorticity form
Ruben Caraballo, Chansophea Wathanak In, Alberto F. Martín, Ricardo Ruiz-Baier
August 1, 2023
============================================================================================
In this paper, we propose a new formulation and a suitable finite element method for the steady coupling of viscous flow in deformable porous media using divergence-conforming filtration fluxes. The proposed method is based on the use of parameter-weighted spaces, which allows for a more accurate and robust analysis of the continuous and discrete problems. Furthermore, we conduct a solvability analysis of the proposed method and derive optimal error estimates in appropriate norms. These error estimates are shown to be robust in the case of large Lamé parameters and small permeability and storativity coefficients. To illustrate the effectiveness of the proposed method, we provide a few representative numerical examples, including convergence verification and poroelastic channel flow simulation, and we test the robustness of block-diagonal preconditioners with respect to model parameters.
Biot–Brinkman coupled problem; deformable porous media; vorticity-based formulation; mixed finite element methods.
65N30, 65N15, 76S05, 35Q74.
§ INTRODUCTION
We address the analysis of the Biot–Brinkman equations, which serve as a model for filtration of viscous flow in deformable porous media <cit.>. The system has been recently analysed in <cit.> for the case of multiple network poroelasticity, using H(div,Ω)-conforming displacements and filtration fluxes (or seepage velocities) for each compartment, also designing robust preconditioners. Here we propose a reformulation for only one fluid compartment but using the vorticity field (defined as the curl of the filtration velocity) as an additional unknown in the system, and we also include the total pressure, following <cit.>. Such an approach enables us to avoid the notorious problem of locking or non-physical pressure oscillations when approximating poroelastic models, and it has led to a number of developments including extensions to multiple network models, interfacial free-flow and poromechanics coupling, nonlinear interaction with species transport, reformulations into four- and more-field systems, and the use of other discretisations such as discontinuous Galerkin, nonconforming FEM, weak Galerkin, and virtual elements. See, e.g., <cit.>.
The formulation of viscous flow equations using vorticity, velocity and pressure has been used and analysed (in terms of solvability of the continuous and discrete formulations and deriving error estimates) extensively in, e.g., <cit.>.
Methods based on vorticity formulations are useful for visualisation of rotational flows and they are convenient when dealing with rotation-based boundary conditions.
The coupling with other effects such as mass and energy transport has also been addressed, see for example <cit.>.
These contributions include fully mixed finite elements, augmented forms, spectral methods, and Galerkin least-squares stabilised types of discretisations. At the continuous level, one appealing property of some of these vorticity formulations is that the velocity is sought in H(div,Ω) and the vorticity is sought in either H(curl,Ω) or L²(Ω). In contrast to those works, in the case of the Biot–Brinkman problem the divergence of the fluid velocity is not zero (or a prescribed fluid source), but depends on the velocity of the solid and on the rate of change of the fluid pressure. In addition, a term of grad–div type appears in the momentum balance for the fluid.
In this paper, we prove the well-posedness of the continuous and discrete formulations for the coupling of mechanics and fluid flow in fluid-saturated deformable porous media using Banach–Nečas–Babuška theory in weighted spaces. The appropriate choice of weighting parameters yields automatically a framework for robust operator preconditioning in the Biot equations, following the approach from <cit.>. This operator scaling yields robustness with respect to the elastic parameters, storativity, Biot–Willis coefficient, and with respect to permeability. For the Brinkman component, our present formulation is such that the filtration flux terms have a different weight in their ^2(Ω) and (,Ω) contributions, which requires a different treatment for the analysis of the pore pressure terms. To address this issue it suffices to appeal to the recent theory in <cit.> (see also <cit.>), which was developed for Darcy equations using non-standard sum spaces, and we appropriately modify the scalings in the momentum equation. This modification entails the use of (discontinuous) Laplacian operators in the fluid pressure preconditioning.
Our proposed approach also offers a novel contribution to the field of operator preconditioning for the interaction of mechanics and fluid flow in fluid-saturated deformable porous media, which are challenging to solve, and the design of efficient preconditioners is highly problem-dependent <cit.>. Previous works have explored the use of block-diagonal preconditioners, Schur complements, and pressure-correction methods, which have improved the convergence rate and computational efficiency of numerical solutions for poromechanics problems <cit.>. In this work, we also derive parameter-robust solvers, but following <cit.> and also <cit.>. Our results confirm that, additionally to the Laplacian contribution needed in the Riesz preconditioner associated with the fluid pressure mentioned above, we also require off-diagonal contributions in the total pressure and fluid pressure coupling terms (as employed in <cit.>). The overall parameter scalings that we propose are motivated by the stability analysis and we verify computationally that robustness holds for this particular choice. Note that we only discuss one type of boundary conditions, but the extension to other forms can be adapted accordingly.
Research on preconditioning techniques for advanced discretizations of block multiphysics systems has also crystallized in a number of high quality open source software packages. One of the earliest efforts was the BKPIT C++ package <cit.> which, following an object-oriented approach, provides an extensible framework for the implementation of algebraic block preconditioners, such as block Jacobi or block Gauss–Seidel. More recently, as the field of physics-based and discretization tailored preconditioners has evolved with breakthrough inventions in approximate block factorization and Schur-complement methods towards ever faster and scalable iterative solvers for large-scale systems, more sophisticated block preconditioning software is available. The widely-used PETSc package offers the PCFIELDSPLIT subsystem <cit.> to design and compose complex block preconditioners. The Firedrike library extends PETSc block preconditioning capabilities and algebraic composability further <cit.>. Another tightly-related effort is the Teko Trilinos package <cit.>, which provides a high-level interface to compose block preconditioners using a functional programming style in C++. The authors in <cit.> present a generic software framework in object-oriented Fortran to build block recursive algebraic factorization preconditioners for double saddle-point systems, as those arising in MagnetoHydroDynamics (MHD).
In this paper, we build upon the high momentum gained in the last years by the Julia programming language for scientific and numerical computing. In particular, the realisation of the numerical discretization and preconditioning algorithms is conducted with the finite element software package <cit.>. We leverage the flexibility of this framework, and its composability with others in the Julia package ecosystem, such as <cit.>, to prototype natural Riesz map preconditioners in the sum spaces described above, leading to complex multiphysics coupling solvers that are robust with respect to physical parameters variations and mesh resolution. For the sake of reproducibility, the Julia software used in this paper is available publicly/openly at <cit.>.
The remainder of this article is organised as follows. The presentation of the new form of Biot–Brinkman equations and its weak formulation
are given in Section <ref>. The modification of the functional structure to include parameter weights and the unique solvability analysis for the continuous problem are addressed in Section <ref>. The definition of the finite element discretisation and the specification of the well-posedness theory for the discrete problem is carried out in Section <ref>. The error analysis (tailored for a specific family of finite elements but applicable to other combinations of discrete spaces as well) is detailed also in that section.
Numerical experiments are collected in Section <ref> and we close in Section <ref> with a brief summary and a discussion on possible extensions.
§ MODEL PROBLEM AND ITS WEAK FORMULATION
§.§ Preliminaries
Let us consider a simply connected, bounded Lipschitz domain Ω⊂ℝ^d, d ∈{2,3}, occupied by a poroelastic medium with one incompressible fluid network for which viscosity is taken into account.
The domain boundary is denoted as Γ:=∂Ω.
Throughout the text, given a normed space S, by and 𝕊 we will denote the vector and tensor extensions, S^d and S^d × d, respectively. In addition, by ^2(Ω) we will denote
the usual Lebesgue space of square integrable functions and ^m(Ω) denotes the usual Sobolev space with weak derivatives of order up to m ≥ 0 in ^2(Ω), and use the convention that ^0(Ω) = ^2(Ω).
Next, we recall the definition of the following Hilbert spaces
^1+m(Ω) := {∈^m(Ω): ∈ℍ^m(Ω)},
^m(,Ω):={∈^m(Ω): ∈^m(Ω) }, ^m(,Ω):={∈^m(Ω): ∈^m(Ω) },
and in the case that m=0 the latter two spaces are denoted (,Ω) and (,Ω), and
we use the following notation
for the typical norms associated with such spaces
_1,Ω^2:=^2_0,Ω+^2_0,Ω,
_,Ω^2:=^2_0,Ω+^2_0,Ω, _,Ω^2:=^2_0,Ω+^2_0,Ω,
respectively. Furthermore, in view of the boundary conditions on Γ (to be made precise below), we also use the following notation for relevant subspaces
^1_0(Ω) :={∈^1(Ω): = on Γ},
_0(,Ω) := {∈(,Ω): · = 0 on Γ},
_0(,Ω) := {∈(,Ω): × = on Γ},
^2_0(Ω) :={ζ∈^2(Ω): ∫_Ωζ = 0}.
For a generic functional space X and a scalar η>0, the weighted space η X refers to the same X but endowed with
the norm η·_X. In addition, we recall the definition of the norm of intersection X∩ Y and sum X+Y of Hilbert spaces X,Y
z ^2_X ∩ Y = z^2_X +z^2_Y, z ^2_X + Y = inf_[ z=x+y; x∈ X,y∈ Y ]x^2_X +y^2_Y.
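When the spaces are finite dimensional and the X- and Y-inner products are represented by symmetric positive definite matrices, the infimum defining the sum norm has a closed form: the optimal splitting solves (X+Y)x = Yz and the value equals zᵀ(X⁻¹+Y⁻¹)⁻¹z. The following NumPy check is purely illustrative (random placeholder matrices, not part of the paper's Julia implementation) and verifies both the minimiser and the closed form.

import numpy as np

rng = np.random.default_rng(0)
n = 6

def spd(n):
    # random symmetric positive definite matrix standing in for a Gram matrix
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

X, Y = spd(n), spd(n)
z = rng.standard_normal(n)

# intersection norm: ||z||_{X cap Y}^2 = ||z||_X^2 + ||z||_Y^2
norm_intersection_sq = z @ X @ z + z @ Y @ z

# sum norm: minimise ||x||_X^2 + ||z - x||_Y^2 over x; the minimiser solves (X + Y) x = Y z
x_opt = np.linalg.solve(X + Y, Y @ z)
norm_sum_sq = x_opt @ X @ x_opt + (z - x_opt) @ Y @ (z - x_opt)

# closed form of the sum norm: z^T (X^{-1} + Y^{-1})^{-1} z
closed_form = z @ np.linalg.inv(np.linalg.inv(X) + np.linalg.inv(Y)) @ z

print(norm_intersection_sq, norm_sum_sq, closed_form)
assert np.isclose(norm_sum_sq, closed_form)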
§.§ The governing equations
The viscous filtration flow through the deformed porous skeleton can be described by the following form of the Biot–Brinkman equations in steady form, representing the mixture momentum, fluid momentum, and mixture mass balance, respectively
-(2μ() + [λ -α p]) = ,
- ν + κ∇ p = ,
- c_0 p - α - = g,
where is the displacement of the skeleton, () = 1/2 ( + ^ t) is the tensor of infinitesimal strains, is the filtration flux, and p is the pressure head. The parameters are the external body load , the external force applied on the fluid , the kinematic viscosity of the interstitial fluid ν, the hydraulic conductance (permeability field, here assumed a positive constant) κ, the Lamé coefficients of the solid structure λ,μ, the storativity c_0, and the Biot–Willis modulus α. Equations (<ref>) are equipped with
boundary conditions of clamped boundary for the solid phase and slip filtration velocity
u = 0 and v·n = 0 on Γ.
Next we introduce the rescaled filtration vorticity vector
ω := √(ν/κ) ∇×v,
which has a different weight than that used in <cit.> for Brinkman flows. We also define, following the developments in <cit.>, the additional total pressure field
φ := -λ ∇·u + α p.
In order to rewrite the fluid momentum balance in terms of the rescaled filtration vorticity (<ref>), we employ the following vector identity, valid for a generic vector field v:
∇×(∇×v) = -Δv + ∇(∇·v).
These steps, together with a rescaling of the external force = ν/κ,
lead to the following equations (mixture momentum, constitutive equation for total pressure, fluid momentum, constitutive equation for filtration vorticity, and mixture mass balance)
-(2μ() - φ) = ,
-1/λφ + α/λ p - = 0,
1/κ + √(ν/κ) - ν/κ∇() + ∇ p = ,
- +√(ν/κ) = ,
- (c_0 + α^2/λ) p + α/λφ - = g;
and the boundary conditions
now read
u = 0, v·n = 0, and ω×n = 0 on Γ.
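The reformulation above relies on the identity ∇×(∇×v) = -Δv + ∇(∇·v). As a quick sanity check (an illustrative verification, not part of the paper), the following SymPy snippet confirms the identity componentwise for a generic smooth vector field.

import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# a generic smooth (but otherwise arbitrary) vector field
comps = [sp.sin(x*y*z), sp.exp(x)*sp.cos(y) + z**2, x**3 - y*z]
v = comps[0]*N.i + comps[1]*N.j + comps[2]*N.k

def lap(f):
    # scalar Laplacian, applied componentwise below to form the vector Laplacian
    return sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2)

lhs = curl(curl(v))
rhs = -(lap(comps[0])*N.i + lap(comps[1])*N.j + lap(comps[2])*N.k) + gradient(divergence(v))

assert sp.simplify((lhs - rhs).to_matrix(N)) == sp.zeros(3, 1)
print("curl(curl v) = -Lap(v) + grad(div v) verified")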
§.§ Weak formulation
Let us assume that ,∈^2(Ω), g ∈^2(Ω)
and that all other model coefficients are positive constants.
We proceed to multiply the governing equations by suitable test functions and to integrate by parts over the domain.
Note that for the divergence-based terms we appeal to the usual form of the Gauss formula, whereas for curl-based terms we use the following result from, e.g., <cit.>:
∫_Ω· = ∫_Ω· + ∫_∂Ω (×)· ∀∈(,Ω), ∈^1(Ω),
together with the scalar triple product identity
(×)· = ·(×).
We observe in advance that equations (<ref>),(<ref>) suggest that, in the limit of c_0→ 0, λ→∞, both pressures are not uniquely defined and so we require to add a constraint on their mean value (to be zero, for example). We then arrive at the following weak formulation for (<ref>): Find (,,,φ,p)∈^1_0(Ω)×_0(,Ω)×_0(,Ω) ×^2_0(Ω) ×^2_0(Ω)
such that
2μ∫_Ω(): () - ∫_Ωφ =
∫_Ω· ∀∈^1_0(Ω),
1/κ∫_Ω· + √(ν/κ)∫_Ω·
+ ν/κ∫_Ω -∫_Ω p = ∫_Ω· ∀∈_0(,Ω),
-∫_Ω· + √(ν/κ)∫_Ω· = 0 ∀∈_0(,Ω),
-1/λ∫_Ωφ ψ + α/λ∫_Ω pψ - ∫_Ωψ = 0 ∀ψ∈_0^2(Ω),
- (c_0+ α^2/λ) ∫_Ω p q+ α/λ∫_Ωφ q - ∫_Ω q = ∫_Ω g q ∀ q∈_0^2(Ω),
where we have also used the boundary conditions (<ref>).
Let us now define the following bilinear forms and linear functionals
a_1 (,) := 2μ∫_Ω(): (),
b_1(,ψ) : = - ∫_Ωψ, b̂_1(,q) : = - ∫_Ω q,
a_2(,):= 1/κ∫_Ω· +ν/κ∫_Ω , b_2(,) : = √(ν/κ)∫_Ω·,
a_3(,):= ∫_Ω·,
a_4(φ,ψ):= 1/λ∫_Ωφ ψ,
b_3(q,ψ):= α/λ∫_Ω q ψ,
a_5(p,q):= (c_0+ α^2/λ) ∫_Ω p q,
B():= ∫_Ω·,
F():= ∫_Ω·, G(q):= ∫_Ω g q,
with which (<ref>) is rewritten as follows:
Find (,,,φ,p)∈^1_0(Ω)×_0(,Ω)×_0(,Ω) ×_0^2(Ω) ×_0^2(Ω)
such that
a_1(,) + b_1(,φ) = B() ∀∈^1_0(Ω),
a_2(,) + b_2(,) + b̂_1(,p) = F() ∀∈_0(,Ω),
b_2(,) - a_3(,) = 0 ∀∈_0(,Ω),
b_1(,ψ) - a_4(φ,ψ) + b_3(p,ψ) = 0 ∀ψ∈_0^2(Ω),
b̂_1(,q) + b_3(q,φ) - a_5(p,q) = G (q) ∀q∈_0^2(Ω).
§ SOLVABILITY ANALYSIS
§.§ Preliminaries
The well-posedness analysis for (<ref>) will be put in the framework of the abstract Banach–Nečas–Babuška theory, which we state next (see, e.g., <cit.>).
Let (E_1, ·_E_1) be a reflexive Banach space, (E_2,·_E_2) a Banach space, and T:E_1→ E_2' a bounded, linear form satisfying the followings conditions:
(BNB1) For each y∈ E_2∖{0}, there exists x∈ E_1 such that
⟨ T(x), y⟩_E_2',E_2≠ 0.
(BNB2) There exists c>0 such that
T(x) _E_2'≥ cx_E_1 for all x∈ E_1.
Then, for every x^*∈ E_2' there exists a unique x∈ E_1 such that
T(x)=x^*.
Let us first consider the product space
:= ^1_0(Ω)×_0(,Ω) ×_0(,Ω) ×_0^2(Ω) ×_0^2(Ω),
and, using the notation :=(,,,φ,p)∈, we proceed to equip this space with the norm
_^2 := 2μ()^2_0,Ω+ 1/κ^2_0,Ω + ν/κ^2_0,Ω +^2_0,Ω +ν^2_0,Ω
+ c_0 p^2_0,Ω + 1/λφ + α p^2_0,Ω.
Let us also introduce the bilinear form
⟨𝒜_ϵ(x⃗),y⃗⟩ :=a_1(,)+b_1(,φ)+a_2(,)+b_2(,)+b̂_1(,p)+b_2(,)-a_3(,)
+b_1(,ψ)-a_4(φ,ψ)+b_3(p,ψ)+b̂_1(,q)+b_3(q,φ)-a_5(p,q),
induced by the operator 𝒜_ϵ:→'
(where the subscript ϵ indicates dependence with respect to the model parameters κ,α,μ,ν,c_0,λ), and again we emphasise that we have different scalings than those used in <cit.>.
From the Cauchy–Schwarz inequality we readily have the following bounds for the bilinear forms in (<ref>)
a_1(,)≤ 2μ ||_1,Ω||_1,Ω, b_1(,φ) ≤‖φ‖_0,Ω ||_1,Ω, b̂_1(,q)≤‖ q‖_0,Ω‖‖_0,Ω,
a_2(,)≤1κ‖‖_0,Ω‖‖_0,Ω+νκ‖‖_0,Ω‖‖_0,Ω,
b_2(,)≤√(ν/κ)‖‖_0,Ω,‖‖_0,Ω,
a_3(,)≤‖‖_0,Ω‖‖_0,Ω,
a_4(φ,ψ)≤1λ‖φ‖_0,Ω‖ψ‖_0,Ω,
b_3(q,ψ)≤α/λ‖ q‖_0,Ω‖ψ‖,
a_5(p,q)≤( c_0+α^2λ)‖ p‖_0,Ω‖ q‖_0,Ω,
for all ,∈^1(Ω), ,∈(,Ω), ,∈(,Ω), φ,ψ,p,q ∈^2(Ω).
Consider the bilinear form defined in (<ref>).
For all x⃗∈, there exists y⃗∈ such that
⟨𝒜_ϵ(x⃗),y⃗⟩≳x⃗^2_ and y⃗_≤√(2)x⃗_.
Using first and second Young's inequalities it is not difficult to assert that
(,)_0,Ω≤1√(κ)‖‖_0,Ω^2+√(κ)4‖‖_0,Ω^2 and (q,ψ)_0,Ω≤ε2‖ q‖_0,Ω^2+12ε‖ψ‖_0,Ω^2.
Next, for a given z⃗:=(,,,ψ,q) we can construct
y⃗:=(,+1/2√(κν),-,-ψ,-q ).
We then invoke (<ref>), so that we can ensure that
⟨𝒜_ϵ(z⃗),y⃗⟩ =2μ ||_1,Ω^2+1κ‖‖_0,Ω^2+√(ν)2√(κ)(,)_0,Ω+νκ‖‖_0,Ω^2+ν2‖‖_0,Ω^2
+‖‖_0,Ω^2+1λ‖ψ‖_0,Ω^2-αλ(q,ψ)_0,Ω-αλ(q,ψ)+( c_0+α^2λ) ‖ q‖_0,Ω^2
≥ 2μ ||_1,Ω^2+1κ‖‖_0,Ω^2+√(ν)2√(κ)(,)_0,Ω+νκ‖‖_0,Ω^2+ν2‖‖_0,Ω^2+‖‖_0,Ω^2
+( c_0+α^2λ-εα2λ) ‖ q ‖_0,Ω^2+(1λ-α2λε)‖ψ‖_0,Ω^2.
Now, taking ε:=5α4+λ c_0α we can deduce that
c_0+α^2λ-εα2λ
≥38( c_0+α^2λ),
1λ-α2λε
=14+λ c_0α^254+λ c_0α^21λ
≥151λ,
and using (<ref>) and (<ref>) we readily obtain the bound
⟨𝒜_ϵ(z⃗),y⃗⟩≥110_^2.
Finally, the definition of the preliminary -norm (<ref>) and triangle inequality yield the estimates
_^2 =2μ ||_1,Ω^2+1/κ‖+√(κν)2‖_0,Ω^2+ν^2κ‖‖_0,Ω^2+ ν‖‖_0,Ω^2+‖‖_0,Ω^2
+( c_0+α^2λ)‖ q ‖_0,Ω^2+1λ‖ψ‖_0,Ω^2
≤ 2μ ||_1,Ω^2+2νκ‖‖_0,Ω^2+ν^2κ‖‖_0,Ω^2+ 3ν2‖‖_0,Ω^2+‖‖_0,Ω^2
+( c_0+α^2λ)‖ q ‖_0,Ω^2+1λ‖ψ‖_0,Ω^2
≤ 2_^2,
which completes the proof.
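One admissible choice of the Young parameter used above is ε = 5α/4 + λc₀/α. The short SymPy check below (our own verification, not taken from the paper) confirms that, with this choice, the two lower bounds invoked in the proof, with factors 3/8 and 1/5 respectively, hold for all positive α, λ and c₀.

import sympy as sp
import random

alpha, lam, c0 = sp.symbols('alpha lambda c_0', positive=True)
eps = sp.Rational(5, 4)*alpha + lam*c0/alpha          # admissible choice of epsilon

# c0 + alpha^2/lam - eps*alpha/(2*lam) >= (3/8)*(c0 + alpha^2/lam)
d1 = c0 + alpha**2/lam - eps*alpha/(2*lam) - sp.Rational(3, 8)*(c0 + alpha**2/lam)
# 1/lam - alpha/(2*lam*eps) >= (1/5)*(1/lam)
d2 = 1/lam - alpha/(2*lam*eps) - sp.Rational(1, 5)/lam

print(sp.simplify(d1))   # reduces to c0/8
print(sp.simplify(d2))   # nonnegative for positive parameters

# exact (rational) spot checks over random positive parameter values
for _ in range(200):
    vals = {alpha: sp.Rational(random.randint(1, 10**4), random.randint(1, 10**4)),
            lam:   sp.Rational(random.randint(1, 10**4), random.randint(1, 10**4)),
            c0:    sp.Rational(random.randint(1, 10**4), random.randint(1, 10**4))}
    assert d1.subs(vals) >= 0 and d2.subs(vals) >= 0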
Using the Banach–Nečas–Babuška result, from (<ref>) and (<ref>) we immediately conclude that problem (<ref>) has a unique solution (u, v, ω, φ, p) in the product space defined above. However, we note that the bound on this solution in the above norm degenerates as the model parameters in ϵ tend either to zero or to infinity.
As a preliminary result required in the sequel, we recall the following classical inf-sup condition for the bilinear form b̂_1 (which coincides with that of the Stokes problem, see, e.g., <cit.>).
There exists β_1>0 such that
sup_∈^1_0(Ω)∖{0}b̂_1(ψ, )||_1,Ω= sup_∈^1_0(Ω)∖{0}-(ψ,)_0,Ω||_1,Ω≥β_1 ‖ψ‖_0,Ω for all ψ∈_0^2(Ω).
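At the discrete level, inf-sup constants of this type can be evaluated directly: representing the two norms by SPD Gram matrices A and M and the bilinear form by a matrix B, one has β² = λ_min of the generalised eigenvalue problem B A⁻¹ Bᵀ q = λ M q. The toy NumPy computation below (random placeholder matrices rather than actual finite element matrices) is only meant to show the mechanics of such a computation.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n, m = 40, 15                       # "velocity"-like and "pressure"-like dimensions (toy)

def spd(k):
    X = rng.standard_normal((k, k))
    return X @ X.T + k * np.eye(k)

A = spd(n)                          # Gram matrix of the norm on the first space
M = spd(m)                          # Gram matrix of the norm on the second space
B = rng.standard_normal((m, n))     # matrix of the coupling bilinear form (full rank a.s.)

# beta^2 equals the smallest eigenvalue of  B A^{-1} B^T q = lambda M q
S = B @ np.linalg.solve(A, B.T)
beta = np.sqrt(eigh(S, M, eigvals_only=True)[0])
print("discrete inf-sup constant:", beta)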
§.§ Parameter-robust well-posedness
Before addressing the well-posedness of (<ref>) robustly with respect to ε, we first note that the bilinear forms defining the solution operator suggest to modify the metric in and include the following particular parameter-weighting of the functional spaces
_ϵ := 2μ^1_0(Ω)×[κ^-1/2^2(Ω)∩√(ν/κ)_0(,Ω)]×[^2(Ω)∩√(ν)_0(,Ω)]
×[ 1/λ^2(Ω)∩1/2μ_0^2(Ω)] ×[
√(c_0)^2(Ω)∩(√(κ)^1_0(Ω) + √(κ/ν)_0^2(Ω))
].
An important observation is that _ϵ contains the same elements as before, but now measured in the norm ·_ϵ to be defined below.
Note also that, proceeding similarly as in <cit.> (see also <cit.>), we have decomposed the space for fluid pressure as the sum
_0^2(Ω)=√(κν)_0^2(Ω) + √(κ)^1_Γ(Ω)∩_0^2(Ω),
and have endowed it with the norm ·_r defined, thanks to (<ref>), as follows
‖ q‖^2_r:=inf_s∈√(κ)^1_0(Ω)∩_0^2(Ω){(κν+c_0) q-s_0,Ω^2+c_0 s_0,Ω^2+‖√(κ)∇ s‖_0,Ω^2 }.
With this norm we see that, for example, the boundedness of the bilinear form b̂_1(·,·) can be written as follows
(q, )_0,Ω =(q-s+s, )_0,Ω=-(∇ s, )_0,Ω+(q-s, )_0,Ω
≤{κ|s|_1,Ω^2+κν‖ q-s‖_0,Ω^2 }^1/2{1κ_0,Ω^2+νκ‖‖_0,Ω^2 }^1/2,
for all s ∈_0^1(Ω), so from (<ref>) we have
(q, )_0,Ω≤‖ q‖_r{1κ_0,Ω^2+νκ‖‖_0,Ω^2 }^1/2.
On the other hand, we have the following robust-in-ϵ inf-sup condition for b̂_1(·,·).
There exists β_0>0 independent of the parameters in ϵ, such that
sup_∈_0(, Ω)∖{0}-(q,){1κ_0,Ω^2+νκ‖‖_0,Ω^2 }^1/2≥β_0 ‖ q‖_r for all q∈_0^2(Ω).
Owing to <cit.>, we know that there exists β_1>0 (independent of the parameters) such that
‖∇ q‖_-1,Ω≥β_1 ‖ q ‖_0,Ω for all q∈_0^2(Ω).
Then, for the operator ∇^-1:∇_0^2(Ω)→_0^2(Ω) (where ∇_0^2(Ω) is a closed subspace of
^-1(Ω)), we can deduce that
‖∇^-1‖_ℒ(∇_0^2(Ω),_0^2(Ω))≤β_1^-1.
Using the Poincaré inequality we can find a positive constant c:=c(Ω) such that
‖ q‖_0,Ω≤ c|q|_1,Ω for all q∈ H^1(Ω)∩_0^2(Ω),
or, equivalently,
‖∇^-1‖_ℒ(∇ (H^1(Ω)∩_0^2(Ω)),H^1(Ω)∩_0^2(Ω))≤ c.
Then, we have that
‖∇ q ‖_^2(Ω)+ν^-1/2^-1(Ω)≥max(c,β_1^-1)inf_q=q_1+q_2
q_1∈_0^1(Ω)∩_0^2(Ω),
q_2∈_0^2(Ω){ |q_1|_1,Ω^2+1ν‖ q_2‖_0,Ω^2 }^1/2.
Multiplying (<ref>) by √(κ) and applying algebraic manipulations, we can conclude that the inf-sup condition (<ref>) holds with β_0:=max(c,β_1^-1).
As done in the previous subsection, the unique solvability analysis will also follow from the Banach–Nečas–Babuška theory, but now using the space _ϵ endowed with the new norm
_ϵ^2:=_^2+p_r^2+12μφ_0,Ω^2.
Problem (<ref>) is written as: Find ∈_ϵ such that
⟨_ϵ(), ⟩ = () for all ∈_ϵ,
or in operator form as follows
_ϵ() = in '_ϵ,
and the norm of the solution operator is defined as
_ϵ_ℒ(_ϵ,_ϵ') :=
sup_, ∈_ϵ∖{}|⟨(),⟩|/_ϵ_ϵ.
Translating Theorem <ref> to the present context, we aim to prove that the operator _ϵ is continuous, that is
⟨_ϵ(),⟩≲_ϵ_ϵ ∀,∈_ϵ,
and that the following global inf-sup condition is satisfied
sup_∈_ϵ∖{}⟨_ϵ(),⟩/_ϵ≳_ϵ ∀∈_ϵ.
Let ‖·‖_ϵ be defined as in (<ref>).
Then the bilinear form induced by 𝒜_ϵ (cf. (<ref>)) is continuous and inf-sup stable under the norm
‖·‖_ϵ, i.e., the conditions (<ref>) and (<ref>) are satisfied.
For the continuity of 𝒜_ϵ, it suffices to use (<ref>), the norm definition (<ref>),
Cauchy–Schwarz inequality,
the definition of 𝒜_ϵ, and (<ref>), to arrive at
⟨𝒜_ϵ( ),⟩≤ 2 x⃗_ϵy⃗_ϵ.
For the global inf-sup condition, we take a given ∈_ϵ, and by Lemma <ref> there exists _1∈ such that
⟨𝒜_ϵ(x⃗),y⃗_1 ⟩≥110x⃗^2_ and _1 _≤√(2)_.
Using Lemma <ref> and Lemma <ref> we can find _2, _3 and constants C_1, C_2, Ĉ_1, Ĉ_2 such that
-(p,_2)≥ C_1 ‖ p ‖_r^2 and {1κ‖_2 ‖_0,Ω+νκ‖_2 ‖_0,Ω^2 }^1/2≤ C_2 ‖ p ‖_r.
and
-(φ,)≥Ĉ_1 1/2μ‖φ‖_0,Ω^2 and √(2μ)|_3 |_1,Ω≤Ĉ_21/√(2μ)‖φ‖_0,Ω.
Taking _2:=(0,_2,0,0,0), _3:=(_3,0,0,0,0), δ>0 and δ̂>0 we have that
⟨𝒜_ϵ(x⃗),10y⃗_1+δ_2+δ̂_3 ⟩
≥x⃗^2_+δ(a_2(,_2)+b_2(,_2)+b̂_1(_2,p))+δ̂(a_1(,_3)+b_1(_3,φ))
≥x⃗^2_-δ{1κ‖‖_0,Ω+νκ‖‖_0,Ω^2 }^1/2{1κ‖_2 ‖_0,Ω+νκ‖_2 ‖_0,Ω^2 }^1/2
-δν√(κ)‖‖_0,Ω‖_2 ‖_0,Ω+δ C_1 ‖ p ‖_r^2-2μδ̂ ||_1,Ω|_3|_1,Ω+δ̂Ĉ_1/2μ‖φ‖_0,Ω^2
≥x⃗^2_-12(1κ‖‖_0,Ω+νκ‖‖_0,Ω^2)-ν2‖‖_0,Ω^2 -μ ||_1,Ω^2 +δ C_1 ‖ p‖_r^2
+δ̂Ĉ_1/2μ‖φ‖_0,Ω^2-δ^21κ‖_2 ‖_0,Ω-δ^2ν2κ‖_2 ‖_0,Ω^2-2μδ̂^2|_3|_1,Ω^2
≥12x⃗^2_+δ( C_1-δ C_2^2) ‖ p‖_r^2+δ̂1/2μ(Ĉ_1-δ̂Ĉ_̂2̂^2 )‖φ‖_0,Ω^2.
Then, choosing δ:=C_12C_2^2 and δ̂:=Ĉ_12Ĉ_2^2, we can deduce the estimates
⟨𝒜_ϵ(x⃗),10y⃗_1+δ_2+ δ̂_3 ⟩≥12x⃗^2_+C_1^24C_2^2‖ p‖_r^2+Ĉ_1^24Ĉ_2^21/2μ‖φ‖_0,Ω^2≥12min{1,C_1^22C_2^2,Ĉ_1^22Ĉ_2^2}x⃗^2_ϵ,
‖ 10y⃗_1+δ_2+δ̂_3 ‖_ϵ ≤ 10√(2)‖‖_ϵ+δ C_2 ‖ p ‖_r+δ̂Ĉ_21/√(2μ)‖φ‖_0,Ω≤max{10 √(2),C_12C_2,Ĉ_12Ĉ_2}‖‖_ϵ.
And from these relations we can conclude that:
sup_∈_ϵ∖{}⟨_ϵ(),⟩/_ϵ≥⟨_ϵ(),10y⃗_1+δ_2+δ̂_3 ⟩/10y⃗_1+δ_2+δ̂_3 _ϵ≥12min{1,C_1^22C_2^2,Ĉ_1^22Ĉ_2^2}x⃗^2_ϵmax{10 √(2),C_12C_2,Ĉ_12Ĉ_2}‖‖_ϵ≳‖‖_ϵ.
§.§ Operator preconditioning
We recall from, e.g., <cit.>,
that since _ϵ maps _ϵ to its dual, when solving the discrete version of (<ref>), iterative methods are not directly applicable unless a modified problem is considered, for example
_ϵ() = in _ϵ,
where :_ϵ'→_ϵ is an appropriately defined isomorphism. As usual, one can take as the Riesz map (self-adjoint and positive definite) whose inverse defines a scalar product (·,·)__ϵ on _ϵ, and the operator _ϵ is also self-adjoint with respect to this inner product. Then
⟨_ϵ(),⟩ = (_ϵ (), )__ϵ and _ϵ ()__ϵ =_ϵ () __ϵ',
and therefore, using the definition of the operator norms, it is readily deduced that
_ϵ_(_ϵ,_ϵ)= _ϵ_(_ϵ,_ϵ') and (_ϵ)^-1_(_ϵ,_ϵ)= _ϵ^-1_(_ϵ,_ϵ').
Then, if an appropriate metric is chosen such that the norms of _ϵ and of _ϵ^-1 are bounded by constants independent of the model parameters ϵ, then the condition number of the preconditioned system will also be independent of the model parameters.
Proceeding similarly as in <cit.>, for example, in our case we consider the following block-diagonal preconditioners (focusing on the case of mixed boundary conditions)
ℬ_1 = [ (-(2μ))^-1 0 0 0 0; 0 (κ^-1 +κ^-1∇)^-1 0 0 0; 0 0 ( + ν)^-1 0 0; 0 0 0 ((1/λ+1/2μ) I)^-1 0; 0 0 0 0 ((c_0 + α^2/λ+κ) I )^-1 ],
ℬ_2 = [ (-(2μ))^-1 0 0 0 0; 0 (κ^-1 + ν/κ∇)^-1 0 0 0; 0 0 ( + ν)^-1 0 0; 0 0 0 ((1/λ+1/2μ) I )^-1 0; 0 0 0 0 ((c_0 + α^2/λ
) I - κΔ)^-1 ],
ℬ_3 = [ (-(2μ))^-1 0 0 0 0; 0 (κ^-1 + (1+ν/κ)∇)^-1 0 0 0; 0 0 ( + ν)^-1 0 0; 0 0 0 2c2*ℬ; 0 0 0 ],
with
ℬ = [ (1/λ+1/2μ) I α/λ I; α/λI (1 + c_0 + α^2/λ) I ]^-1
+ [ (1/λ+1/2μ) I α/λ I; α/λI (c_0 + α^2/λ) I -κΔ ]^-1.
Note that only _3 results from the Riesz map corresponding to _ϵ with the complete norm as in (<ref>), while _1,_2 are
approximations of _3. In particular, _1 simply considers the parameter weighting suggested by the weak formulation (<ref>) combined with the Riesz map associated with the natural regularity of that formulation, and _2 includes also the sum of spaces leading to the pressure Laplacian forms which are key in achieving robustness for Darcy-type problems <cit.>. The full form _3 also includes the non-standard Brezzi–Braess type of block ℬ for total and fluid pressures, which is needed in perturbed saddle-point problems with penalty as proposed in <cit.> (see also <cit.>).
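To fix ideas on how such block-diagonal operators are applied in practice, the sketch below builds a toy symmetric indefinite system with the same five-field partitioning, wraps a block-diagonal preconditioner as a SciPy LinearOperator (each SPD block factorised once), and hands it to MINRES. This is only a structural illustration with random placeholder blocks; it is not the paper's Gridap.jl/LinearOperators.jl implementation and does not reproduce the exact parameter weightings of ℬ₁–ℬ₃.

import numpy as np
import scipy.sparse as sps
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)

def spd(n):
    # placeholder for a discretised SPD operator (a weighted stiffness or mass matrix)
    R = sps.random(n, n, density=0.15, random_state=rng)
    return (R @ R.T + n * sps.identity(n)).tocsc()

# toy block sizes standing in for the (u, v, omega, phi, p) partitioning
sizes = [30, 25, 25, 15, 15]
offsets = np.cumsum([0] + sizes)

# symmetric indefinite toy system: SPD blocks for u and v, negative blocks for omega, phi, p,
# plus a few symmetric off-diagonal couplings mimicking the saddle-point structure
blocks = [[None] * 5 for _ in range(5)]
for i, n in enumerate(sizes):
    blocks[i][i] = spd(n) if i < 2 else -spd(n)
for i, j in [(0, 3), (1, 4), (1, 2), (3, 4)]:      # u-phi, v-p, v-omega, phi-p couplings
    blocks[i][j] = sps.random(sizes[i], sizes[j], density=0.1, random_state=rng)
    blocks[j][i] = blocks[i][j].T
A = sps.bmat(blocks, format="csc")

# block-diagonal preconditioner: factorise each SPD diagonal block once and reuse it
prec_blocks = [spd(n) for n in sizes]              # stand-ins for the blocks of B_1, B_2 or B_3
lus = [spla.splu(P) for P in prec_blocks]

def apply_prec(r):
    z = np.zeros_like(r)
    for k, lu in enumerate(lus):
        z[offsets[k]:offsets[k + 1]] = lu.solve(r[offsets[k]:offsets[k + 1]])
    return z

M = spla.LinearOperator(A.shape, matvec=apply_prec)

b = rng.standard_normal(A.shape[0])
x, info = spla.minres(A, b, M=M)
print("MINRES exit flag:", info, " residual norm:", np.linalg.norm(b - A @ x))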
§ ANALYSIS OF A FINITE ELEMENT METHOD
Let _h denote a family of tetrahedral meshes (triangular in 2D) on Ω and denote by _h the set of all facets (edges in 2D) in the mesh.
By h_K we denote the diameter of the element K and by h_F we denote the length/area of the facet F. As usual, by h we denote the maximum of the diameters of elements in _h. For all meshes we assume that they are sufficiently regular (there exists a uniform positive constant η_1 such that each element K is star-shaped with respect to a ball of radius greater than η_1 h_K). It is also assumed that there exists η_2>0 such that for each element K and every facet F∈∂ K, we have that h_F≥η_2 h_K, see, e.g., <cit.>. By _k (Θ) we will denote the space of polynomials of total degree at most k defined locally on the domain Θ, and denote by _k(Θ) and ℙ_k(Θ) their vector- and tensor-valued counterparts, respectively.
By _h we will denote the set of all facets and will distinguish between facets lying on the interior and the two sub-boundaries _h = _h^int∪_h^Γ.
For smooth scalar fields w defined on _h, the symbol w^± denotes the traces of w on e that are the extensions from the interior of the two elements K^+ and K^- sharing the facet e. The symbols · and · denote, respectively, the average and jump operators defined as
w := 1/2 (w^-+w^+), w := (w^- - w^+). The element-wise action of a differential operator is denoted with a subindex h, for example, ∇_h will denote the broken gradient operator.
The discrete spaces that we consider herein correspond, for k≥ 0, to the generalised Taylor–Hood element pair (𝐏_k+2-P_k+1) for the displacement / total pressure approximation, the H(div)-conforming Raviart–Thomas elements of degree k (denoted 𝐑𝐓_k) for velocity approximation, the H(curl)-conforming Nédélec elements of the first kind and order k+1 (denoted 𝐍𝐃_k+1) for filtration vorticity (see, e.g., <cit.> for precise definitions of these families of spaces), and piecewise polynomials of degree k for the approximation of interstitial pressure
_h := {_h ∈^1_0(Ω): _h|_K ∈𝐏_k+2(K), ∀ K∈_h},
_h := {_h ∈_0(,Ω): _h|_K ∈𝐑𝐓_k(K), ∀ K∈_h},
_h := {_h∈_0(,Ω): _h|_K ∈𝐍𝐃_k+1(K), ∀ K∈_h},
^m_h := {_h ∈^2(Ω): _h|_K ∈𝐏_m(K), ∀ K∈_h},
_h := {u_h ∈^1_0(Ω): u_h|_K ∈P_k(K), ∀ K∈_h},
_h := {ψ_h ∈C^0(Ω):
ψ_h|_K ∈P_k+1(K), ∀ K∈_h},
_h := {q_h ∈^2(Ω): q_h|_K ∈P_k(K), ∀ K∈_h}.
Note that other combinations of finite element families are feasible as well (as long as appropriate discrete inf-sup conditions are satisfied). In 2D we will consider the same space _h for both total and fluid pressures and we will use the 𝐏_2-P_0 pair for displacement and total pressure approximation.
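For orientation, an equivalent mixed space can be declared in a few lines with Python finite element tools; the sketch below uses legacy FEniCS/UFL element names purely as an illustration (the paper's own implementation is in Gridap.jl). With the FEniCS degree conventions, "RT" of degree k+1 and "N1curl" of degree k+1 correspond to 𝐑𝐓_k and 𝐍𝐃_{k+1} above.

from dolfin import (UnitCubeMesh, FiniteElement, VectorElement, MixedElement,
                    FunctionSpace)

k = 0                                        # polynomial degree as in the text
mesh = UnitCubeMesh(4, 4, 4)
cell = mesh.ufl_cell()

Eu   = VectorElement("CG", cell, k + 2)      # displacement (generalised Taylor-Hood)
Ev   = FiniteElement("RT", cell, k + 1)      # filtration flux, H(div)-conforming RT_k
Ew   = FiniteElement("N1curl", cell, k + 1)  # vorticity, H(curl)-conforming ND_{k+1}
Ephi = FiniteElement("CG", cell, k + 1)      # total pressure (continuous P_{k+1})
Ep   = FiniteElement("DG", cell, k)          # fluid pressure (discontinuous P_k)

Wh = FunctionSpace(mesh, MixedElement([Eu, Ev, Ew, Ephi, Ep]))
print("total number of degrees of freedom:", Wh.dim())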
The (conforming) finite element scheme associated with (<ref>) reads: Find
(_h,_h,_h,φ_h,p_h)∈_h := _h×_h×_h ×_h ×_h,
such that
a_1(_h,_h) + b_1(_h,φ_h) = B(_h) ∀_h∈_h,
a_2(_h,_h) + b_2(_h,_h) + b̂_1(_h,p_h) = F(_h) ∀_h∈_h,
b_2(_h,_h) - a_3(_h,_h) = 0 ∀_h∈_h,
b_1(_h,ψ_h) - a_4(φ_h,ψ_h) + b_3(p_h,ψ_h) = 0 ∀ψ_h∈_h,
b̂_1(_h,q_h) + b_3(q_h,φ_h) - a_5(p_h,q_h) = G (q_h) ∀q_h∈_h.
Similarly as in the continuous case, we define
_ϵ,h := 2μ_h ×[ √(1/κ)_h^k+1∩√(νκ)_h]× [_h^k+1∩√(ν)_h]×[√(1λ)_h∩√(12μ)_h ]
×[ √(c_0)_h∩(√(κν)_h+ √(κ)_h ) ],
and the associated discrete norm is
_h_,h^2 := 2μ(_h)^2_0,Ω+ 1/κ_h^2_0,Ω + ν/κ_h^2_0,Ω +_h^2_0,Ω +ν_h^2_0,Ω
+ 1/2μφ_h_0,Ω^2 + inf_s_h ∈_h[(c_0+κ/ν)p_h - s_h_0,Ω^2 +c_0s_h^2+ √(κ)∇_hs_h^2_0,Ω]
+ 1/λφ_h + α p_h^2_0,Ω + c_0 p_h^2_0,Ω .
As in the continuous case, now problem (<ref>) is written as: Find _h∈_ϵ,h such that
⟨_ϵ(_h), _h⟩ = _h(_h) for all _h∈_ϵ,h,
or in operator form as follows
_ϵ(_h) = _h in '_ϵ,h.
There exists β_1>0 independent of the parameters in ϵ and h, such that
sup__h∈_h∖{0}-(q_h,_h)_0,Ω{1κ_h_0,Ω^2+νκ‖_h‖_0,Ω^2 }^1/2≥β_1 ‖ q_h‖_r for all q_h∈_h.
The proof requires assuming that the continuous inf-sup condition holds. Then, similarly to the proof of that result (Lemma <ref>), the first part (the steps up to (<ref>)) is a consequence of the fact that the spaces _h and _h satisfy the usual discrete inf-sup condition for the Stokes problem. Finally, it suffices to follow the scaling argument in (<ref>), which is also valid at the discrete level, to obtain the desired condition.
Analogously to the continuous case, we need the following conditions to be satisfied to guarantee existence and uniqueness to problem (<ref>):
⟨_ϵ(_h),_h ⟩ ≲_h_ϵ_h_ϵ,h ∀_h,_h ∈_ϵ,h,
sup__h ∈_ϵ,h∖{}⟨_ϵ(_h),_h ⟩/_h _ϵ,h ≳_h_ϵ,h ∀_h ∈_ϵ,h.
Let ‖·‖_ϵ,h be defined as in (<ref>).
Then the bilinear form induced by 𝒜_ϵ (cf. (<ref>)) is continuous and inf-sup stable under the norm
‖·‖_ϵ,h, i.e., the conditions (<ref>) and (<ref>) are satisfied.
We use again that the pairs (_h,_h) and (_h,_h) are inf-sup stable spaces for the usual bilinear form in Stokes problem, which ensures that we can find discrete versions _h,2, _h,3 of the tuples _2 and _3, respectively, constructed in Theorem <ref>. We also note that _h ⊂_h, and therefore we can prove the discrete version of Lemma <ref>. Then the desired result is a consequence of repeating the arguments used in the proof of Theorem <ref>.
For given b,f∈^2(Ω) and g∈ L^2(Ω), problem (<ref>) has a unique solution (_h,_h,_h,φ_h,p_h) ∈_ϵ,h.
In addition, the solution satisfies the following continuous dependence on data
‖ (_h,_h,_h,φ_h,p_h)‖_ϵ≲ (b_0,Ω+_0,Ω+g_0,Ω),
and the following Céa estimate
‖ (-_h,-_h,-_h,φ-φ_h,p-p_h)‖_ϵ
≤(1+α^-1‖𝒜_ϵ‖_ℒ(_ϵ,'_ϵ)) ‖ (-_h,-_h,-_h,φ-ψ_h,p-q_h)‖_ϵ,
for all (_h,_h,_h,ψ_h,q_h)∈_ϵ,h, where α is the positive constant associated with (<ref>).
The existence and uniqueness of the solution are obtained in a similar way to their counterparts at the continuous level.
For the corresponding Céa estimate, we proceed to denote as x⃗:=(,,,φ,p), x⃗_⃗h⃗:=(_h,_h,_h,φ_h,p_h) and y⃗_⃗h⃗:=(_h,_h,_h,ψ_h,q_h).
From (<ref>), we can infer that there exists a positive constant α independent of the parameters such that
sup_z⃗_h ∈_ϵ,h∖{0⃗}⟨𝒜_ϵ(x⃗_h),z⃗_h ⟩/z⃗_h _ϵ≥αx⃗_h_ϵ ∀x⃗_h ∈_ϵ,h.
Using the error equation, we readily obtain that ⟨𝒜_ϵ(x⃗),y⃗_h ⟩ =⟨𝒜_ϵ(x⃗_h),y⃗_h ⟩. Furthermore,
since y⃗_h-x⃗_h∈_ϵ,h we can deduce that
‖x⃗-x⃗_h ‖_ϵ ≤‖x⃗-y⃗_h ‖_ϵ+‖y⃗_h-x⃗_h ‖_ϵ
≤‖x⃗-y⃗_h ‖_ϵ+α^-1sup_z⃗_h ∈_ϵ,h∖{0⃗}⟨𝒜_ϵ(y⃗_h-x⃗_h),z⃗_h ⟩/z⃗_h _ϵ
≤‖x⃗-y⃗_h ‖_ϵ+α^-1sup_z⃗_h ∈_ϵ,h∖{0⃗}⟨𝒜_ϵ(y⃗_h-x⃗),z⃗_h ⟩/z⃗_h _ϵ
≤‖x⃗-y⃗_h ‖_ϵ+α^-1sup_z⃗_h ∈_ϵ,h∖{0⃗}‖𝒜_ϵ‖_ℒ(_ϵ,'_ϵ)‖y⃗_h-x⃗‖_ϵ‖z⃗_h ‖_ϵ/z⃗_h _ϵ
≤ (1+α^-1‖𝒜_ϵ‖_ℒ(_ϵ,'_ϵ)) ‖x⃗-y⃗_h ‖_ϵ,
which finishes the proof.
Let us recall, from, e.g., <cit.>, the following approximation properties of the
finite element subspaces (<ref>), which are obtained using the classical interpolation theory.
Assume that (,,,φ,p) ∈^1+s(Ω)×^s(,Ω)×^s(,Ω)×H^s(Ω) ×H^s(Ω), for some
s∈(1/2,k+1]. Then there exists C>0,
independent of h, such that
(-ℐ_h()) _0,Ω ≤ C h^s| |_1+s,Ω,
‖ p -Π_h (p) ‖_0,Ω ≤ C h^s|p |_s,Ω,
-ℐ_h^RT()_0,Ω ≤ C h^s||_s,Ω ,
-ℐ_h^N()_0,Ω ≤ C h^s||_s,Ω,
where Π_h:_0^2→_h⊂_h is the L^2-projection and ℐ_h:^1(Ω)→_h, ℐ_h^RT:_0(,Ω)→_h, ℐ_h^N:_0(,Ω)→_h are the Lagrange, Raviart–Thomas and Nédélec interpolators, respectively.
Let (,,,φ,p)∈_ϵ and (_h,_h,_h,φ_h,p_h)∈_ϵ,h be the unique solutions to the continuous and discrete problems (<ref>) and (<ref>), respectively.
Assume that (,,,φ,p) ∈^1+s(Ω)×^s(,Ω)×^s(,Ω)×H^s(Ω) ×H^s(Ω), for some s ∈ (1/2, 1]. Then, there
exists C > 0, independent of h and the parameters ϵ, such that
‖ (-_h,-_h,-_h,φ-φ_h,p-p_h)‖_ϵ≤ Ch^s‖ (,,,φ,p)‖_s,ϵ,
where
‖ (,,,φ,p)‖_s,ϵ^2 :=
2μ||_1+s,Ω^2+1/κ||_s,Ω^2+ν/κ||_s,Ω^2+||_s,Ω^2+ν||_s,Ω^2
+1/2μ|φ|_s,Ω^2+(c_0+κ/ν)|p|_s,Ω^2+1/λ|φ+α p|_s,Ω^2.
This result follows immediately after choosing the tuple
(_h,_h,_h,ψ_h,q_h):=(ℐ_h(),ℐ_h^RT(),ℐ_h^N(),Π_h(φ),Π_h(p)),
in Theorem <ref>, and then using the estimates (<ref>)-(<ref>) together with the following commuting properties of the Raviart–Thomas and Nédélec operators
ℐ_h^RT()=Π_h(), ℐ_h^N()=Π_h().
Finally, it suffices to invoke the fact that ‖ q ‖_r≤(κ/ν+c_0 )^{1/2}‖ q ‖_0,Ω, which follows from taking s=0 in the infimum defining ‖·‖_r.
§ NUMERICAL VERIFICATION
The aim of this section is to experimentally validate the theoretical results presented in Section <ref>.
We certify the proposed finite element method by error convergence verification, and then we apply the formulation to the simulation of a representative viscous flow in a poroelastic channel. We also evaluate the robustness of the preconditioners in (<ref>). The numerical implementation uses the open-source finite element framework <cit.>, and is available in the public domain <cit.>. For the solution of the linear systems in the accuracy verification tests we employ the sparse direct method MUMPS, while in the preconditioner tests we use the preconditioned MINRES iterative solver; see the corresponding section below for the full details underlying the setup of the preconditioner tests.
§.§ Accuracy tests
Let us consider the unit square domain Ω = (0,1)^2 together with the manufactured solutions
(x,y) = [ sin(π [x+y]); cos(π[x^2+y^2]) ], p(x,y) = sin(π x+y)sin(π y),
(x,y) = [ sin(π x)sin(π y); cos(π x)cos(2π y) ], ω(x,y) = ν/√(κ)curl , φ(x,y) = -λ + α p.
The model parameters assume the arbitrary values ν = 1, λ = 1, μ = 1, κ = 1, c_0=1, α = 1;
and the loading and source terms, together with the essential boundary conditions, are computed from the manufactured solutions above. The mean value of the fluid and total pressures is prescribed to coincide with the mean values of the manufactured pressures, which is implemented by means of a real Lagrange multiplier. Sequences of successively refined uniform meshes are used to compute approximate solutions and to generate the error history (error decay with the mesh size h and experimental convergence rates, using norms in non-weighted spaces for each individual unknown and at each refinement level). We display the error history in Table <ref>, where the method confirms asymptotically optimal convergence of O(h^k+1) for each variable and for both polynomial degrees. We remark that for this test we use continuous piecewise polynomials of degree k+2 for the displacement and discontinuous piecewise polynomials of degree k for the total pressure. Note that in 2D the filtration vorticity is a scalar field ω = ν/√(κ) curl v and the appropriate functional space is ^1_Γ(Ω). In the discrete setting we then select
W_h : = {ω_h ∈^1_Γ(Ω): ω_h|_K ∈P_1(K), ∀ K∈_h}.
Moreover, to strengthen the numerical evidence, in the last column of the table we report the (L^2-projection onto _h of the) loss of mass
loss_h : = -(c_0+α^2/λ)p_h+α/λφ_h-(_h)-g,
for each refinement level, confirming a satisfaction of the mass conservation equation on the order of machine accuracy. This test has been performed using pure Dirichlet boundary conditions.
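The experimental convergence rates in the error history are computed in the standard way from the errors on two consecutive refinement levels, r = log(e_i/e_{i+1})/log(h_i/h_{i+1}). A small helper of this kind is sketched below; the error values are made up solely to illustrate the computation.

import numpy as np

def eoc(hs, errors):
    """Experimental orders of convergence from mesh sizes and errors on successive levels."""
    hs, errors = np.asarray(hs, dtype=float), np.asarray(errors, dtype=float)
    return np.log(errors[:-1] / errors[1:]) / np.log(hs[:-1] / hs[1:])

# made-up data decaying like O(h^2), i.e. the rate expected for k = 1
hs = [1/4, 1/8, 1/16, 1/32]
errors = [2.1e-1, 5.4e-2, 1.35e-2, 3.4e-3]
print(eoc(hs, errors))   # values close to 2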
We conduct a second test of convergence, now in the unit cube domain Ω = (0,1)^3 and with mixed boundary conditions (displacement, normal velocity and tangential vorticity are prescribed on the sides x=0, y=0, z=0 and known normal stress, flux, and pressures are imposed on the remainder of the boundary) considering closed-form solutions to the vorticity-based Biot–Brinkman equations as follows
(x,y,z) =1/10[ sin(π [x+y+z]); cos(π[x^2+y^2+z^2]); sin(π [x+y+z])cos(π [x+y+z]) ], p(x,y,z) = sin(π x)cos(π y)sin(π z),
(x,y,z) = [ sin^2(π x)sin(π y)sin(2π z); sin(π x)sin^2(π y)sin(2π z); -[sin(2π x)sin(π y)+sin(π x)sin(2 π y)]sin^2(π z) ], (x,y,z) = ν/√(κ),
φ(x,y,z) = -λ + α p.
The 3D example uses generalised Taylor–Hood elements for the displacement/total pressure pair as specified in (<ref>). This time we consider the parameter values
ν = 0.1, λ = 100, μ = 10, κ = 10^-3, c_0=0.1, α = 0.1,
and in Table <ref> we only show the error decay measured in the total weighted norm, indicating an asymptotically optimal h^k+1 order of convergence anticipated by Theorem <ref>.
For sake of illustration, we depict in Figure <ref> the approximate solutions obtained with the lowest-order method, after 4 levels of uniform refinement.
§.§ Fluid injection in an elastically deformable porous channel
Next we investigate the flow patterns of infiltration of a poroelastic channel having an irregular array of eight circular cylinders. The problem setup mimics the behaviour of sponge-like materials or soils in the presence of macro-pores, for example <cit.>. The undeformed body occupies the rectangular domain Ω = (0,1.6) ×(0,1) (in m^2).
The boundary conditions are of mixed type and do not coincide exactly with those analysed in the manuscript. The left segment is considered an inflow boundary where we set zero displacements, a parabolic inflow profile for the filtration flux, and a vorticity compatible with the inflow filtration velocity:
= , · = v_0 y(1-y), ω = -v_0ν/√(κ)(1-2y), on Γ_in;
on the horizontal boundaries we set
() = , φ = 0,
· = 0, ω = 0, on Γ_wall;
on the holes we impose
= ,
· = 0, ω = 0, on Γ_cyl;
and the boundary conditions are completed by prescribing zero traction and a vanishing pressure on the outlet region
() = , φ = 0,
p = 0, on Γ_out.
We do not consider external volume forces nor fluid sources, therefore = =, g=0.
The filtration flux modulation is time-dependent v_0 = 5/2[t + sin^2(π t)] m/s, and the remaining physical parameters are all constant and assuming the values
μ = 210 Pa, λ = 1'800 Pa, ν = 1.1· 10^-3 Pa s, κ = 3.5·10^-6 m^2, α = 0.95, c_0 = 10^-2 Pa.
For the transient computation we incorporate the time derivatives of the fluid and total pressures in the mass conservation equation; these are approximated using the backward Euler method with a constant time step Δ t = 0.01 s, and the simulation is run until t = 4 s.
The numerical solutions obtained with a second-order scheme (setting k=1, for which the method consists of 1'631'439 DoFs) are portrayed in Figure <ref>, showing snapshots of the deformed poroelastic region, line integral convolutions of filtration flux, vorticity profile, and total pressure profile.
§.§ Robustness with respect to model parameters
Finally, we proceed to study the robustness of _j^-1 in (<ref>) with respect to varying model parameters and increasing mesh resolution. We thoroughly studied preconditioner robustness by considering several sample values of model parameters μ∈ [1,10^8], λ∈ [1,10^8], ν∈ [10^-6, 1], κ∈[10^-8, 1], α∈[0,1], and c_0∈ [10^-8,1].
These ranges are encountered in typical applications of poromechanics of subsurface flows and of linear Biot consolidation of soft tissues <cit.>.
For the sake of brevity, however, we only report a subset of representative results we obtained with μ=1, α=1, λ={1,10^8}, ν={10^-8, 1}, κ={10^-8, 1}, and c_0={10^-8,1}.
In any case, the software in <cit.> is written such that the reader may also run additional tests with other parameter values as required. We consider the unit cube domain discretised into uniform tetrahedral meshes. In particular, we consider four mesh resolutions, from one up to four levels of uniform refinement of the unit cube discretised with two tetrahedra. We use mixed boundary conditions, with Γ consisting of the faces x=0, y=0, z=0 and Σ being the remainder of the boundary.
The action of the different inverses arising in the diagonal blocks of _j^-1 in (<ref>)
is implemented with the UMFPACK direct solver. For each combination of parameter values and mesh resolution, we used these preconditioners to accelerate the convergence of the MINRES iterative solver. Convergence is declared whenever the Euclidean norm of the (unpreconditioned) residual of the whole system is reduced by a factor of 10^6; otherwise, the iteration is stopped when the number of iterations reaches an upper bound of 500.
The (discrete) Laplacian operator required for _2,_3 acts on discontinuous pore pressure approximations so we use (for a given piecewise-defined field η) the following form (see <cit.>)
(-ηΔ_h p_h,q_h)_0,Ω = ∑_K∈_h (η∇_h p_h,∇_hp_h)_0,K + ∑_e∈^int_h⟨η/h_ep_h,q_h⟩_e + ∑_e∈^Γ_h⟨η/h_ep_h,q_h⟩_e,
while for the lowest-order case we only keep the second and third terms on the right-hand side of (<ref>).
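For concreteness, a form of this kind can be written in a few lines of UFL; the sketch below is illustrative only (it is not the paper's Gridap.jl code), the facet length h_e is approximated by cell diameters, and η is a placeholder for the actual weighting coefficient.

from dolfin import (UnitCubeMesh, FunctionSpace, TrialFunction, TestFunction,
                    Constant, CellDiameter, grad, inner, jump, avg, dx, dS, ds,
                    assemble)

mesh = UnitCubeMesh(4, 4, 4)
Qh = FunctionSpace(mesh, "DG", 0)            # lowest-order discontinuous pressures
p, q = TrialFunction(Qh), TestFunction(Qh)

eta = Constant(1.0)                          # weighting coefficient (placeholder)
h = CellDiameter(mesh)                       # h_e approximated by (averages of) cell diameters

lap = ( eta * inner(grad(p), grad(q)) * dx           # vanishes identically for DG0
      + eta / avg(h) * inner(jump(p), jump(q)) * dS  # interior facet penalty
      + eta / h * p * q * ds )                       # boundary facet penalty
K = assemble(lap)
print(K.norm("frobenius"))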
In Figure <ref> we show the number of preconditioned MINRES iterations versus number of DoFs for the _1 and _3 preconditioners with the particular parameter value combinations mentioned above. Overall, the results for _2 were significantly worse than that for _1 and _3, and thus omitted for the sake of brevity.
Each of the eight plots in Figure <ref> contains four curves, each corresponding to one of the four combinations of κ={10^-8, 1} and c_0={10^-8,1}. To facilitate the comparison between _1 and _3, the eight plots in Figure <ref> are grouped into four groups of two horizontally adjacent plots each. Each group corresponds to a combination of λ={1,10^8} and ν={10^-8, 1}; the particular combination corresponding to a group is indicated in the title of the two plots in the group.
From Figure <ref>, we observe that the number of MINRES iterations with _3 reaches an asymptotically constant regime with increasing mesh resolution for all combinations of parameter values tested. This is also the case for _1 in the majority of cases, except for μ=1, λ=1, α=1, ν=10^-8, κ=10^-8, and c_0={10^-8,1}, where the preconditioner efficiency (number of MINRES iterations) significantly degrades (increases) with mesh resolution. For these combinations of parameters the coupling between the two pressures in the last diagonal block of _3 seems essential to retain mesh-independent convergence. This observation agrees with the experiments in <cit.> for four-field formulations of Biot poroelasticity and simpler problems (including Herrmann elasticity and the reaction-diffusion equation), where comparisons were performed against sub-optimal preconditioners.
On the other hand, we observe a relatively low sensitivity of the number of MINRES iterations (and thus robustness) with respect to the values of the model parameters for both _1 (leaving aside the aforementioned combination of parameter values) and _3. For example,
for μ=1, λ=10^8, α=1, ν=10^-8, _3, and the finest mesh resolution, the number of iterations varies between 40 and 50 despite the disparity of scales in the values of c_0 and κ. It is worth noting that
_1 leads to a similar or even lower number of iterations than _3 in most cases, despite being a computationally cheaper preconditioner. This is the case for μ=1, λ=1, α=1, ν=1, and all combinations of κ and c_0 tested, and for μ=1, λ=10^8, α=1, ν=1, κ=10^-8, and c_0=10^-8. The exception is the case μ=1, λ=10^8, α=1, ν=10^-8, κ=10^-8, and c_0=1, where _3 converges in fewer iterations than _1.
§ CONCLUDING REMARKS
We have presented a new formulation for the Biot–Brinkman problem using rescaled vorticity and total pressure, and have carried out the stability and solvability analysis of the continuous and discrete problems using parameter-weighted norms. We have derived theoretical error estimates and have subsequently confirmed them numerically; and we have constructed suitable preconditioners that achieve mesh-robust iteration counts when varying the elastic and porous media flow model parameters. In order to apply these algorithms to more realistic problems, one needs to efficiently exploit the vast amount of hardware parallelism available in high-end supercomputers. As future work, we plan to address the parallelisation of the proposed algorithms using the GridapDistributed.jl package <cit.>.
Parts of the theoretical framework advanced herein (in particular, the use of a vorticity-based formulation for the filtration equation) extend naturally to more complex setups, for example to the multiple network generalised Biot–Brinkman model from <cit.>. Further improvements to the model and to the theory include the treatment of different types of boundary conditions, and the interfacial coupling with free-flow or with elasticity.
00
amara07 M. Amara, D. Capatina-Papaghiuc, and
D. Trujillo, Stabilized finite element method for
Navier-Stokes equations with physical boundary conditions.
Math. Comp., 76(259) (2007) 1195–1217.
amara04 M. Amara, E. Chacón Vera, and D. Trujillo,
Vorticity–velocity–pressure formulation for Stokes
problem. Math. Comp., 73(248) (2004) 1673–1697.
ambar15 I. Ambartsumyan, E. Khattatov, I. Yotov, and P. Zunino, Simulation of flow in fractured poroelastic media: A comparison of different discretization approaches. In: I. Dimov, I. Faragó, and L. Vulkov (eds). Finite Difference Methods,Theory and Applications. FDM 2014. Lecture Notes in Computer Science, vol 9045. Springer, Cham (2015) 3–14.
anaya18
V. Anaya, M. Bendahmane, D. Mora, and R. Ruiz-Baier,
On a vorticity-based formulation for reaction-diffusion-Brinkman systems.
Netw. Heterog. Media, 13(1) (2018) 69–94.
anaya19
V. Anaya, A. Bouharguane, D. Mora, C. Reales,
R. Ruiz-Baier, N. Seloula and H. Torres,
Analysis and approximation of a
vorticity-velocity-pressure formulation for the Oseen equations.
J. Sci. Comput., 88(3) (2019) 1577–1606.
anaya21
V. Anaya, R. Caraballo, B. Gómez-Vargas, D. Mora, and R. Ruiz-Baier,
Velocity-vorticity-pressure formulation for the Oseen problem with
variable viscosity. Calcolo, 58(4) (2021) e44(1–25).
anaya23
V. Anaya, R. Caraballo, S. Caucao, L.F. Gatica, R. Ruiz-Baier, and I. Yotov, A vorticity-based mixed formulation for the unsteady Brinkman–Forchheimer equations.
Comput. Methods Appl. Mech. Engrg., 404 (2023) e115829(1–30).
anaya15
V. Anaya, G.N. Gatica, D. Mora, and R. Ruiz-Baier,
An augmented velocity-vorticity-pressure formulation for the Brinkman equations.
Int. J. Numer. Methods Fluids, 79(3) (2015) 109–137.
anaya16 V. Anaya, D. Mora, R. Oyarzúa, and
R. Ruiz-Baier, A priori and a posteriori error analysis of a mixed
scheme for the Brinkman problem. Numer. Math., 133(4) (2016)
781–817.
badia14
S. Badia, A. F. Martín, and R. Planas,
Block recursive LU preconditioners for the thermally coupled incompressible inductionless MHD problem.
J. Comput. Phys., 274 (2014), 562–591.
badia20
S. Badia and F. Verdugo, Gridap: An extensible finite element
toolbox in julia. J. Open Source Softw., 5 (2020), 2520.
badia22
S. Badia, A. F. Martín, and F. Verdugo,
GridapDistributed: a massively parallel finite element toolbox in Julia,
J. Open Source Softw., 7(74) (2022), 4157.
baerland20
T. Bærland, M. Kuchta, K.-A. Mardal, and T. Thompson, An
observation on the uniform preconditioners for the mixed Darcy problem,
Numer. Methods PDEs, 36(6) (2020), 1718–1734.
bernardi06 C. Bernardi and N. Chorfi, Spectral
discretization of the vorticity, velocity, and pressure formulation
of the Stokes problem. SIAM J. Numer. Anal., 44(2) (2007) 826–850.
bernardi18 C. Bernardi, S. Dib, V. Girault, F. Hecht, F. Murat, and T. Sayah, Finite element methods for Darcy's problem coupled with the heat equation. Numer. Math., 139 (2018) 315–348.
boon21
W. Boon, M. Kuchta, K.-A. Mardal, and R. Ruiz-Baier, Robust
preconditioners and stability analysis for perturbed saddle-point problems –
application to conservative discretizations of Biot's equations utilizing
total pressure, SIAM J. Sci. Comput., 43 (2021) B961–B983.
braess96 D. Braess, Stability of saddle point problems with penalty. RAIRO Modél. Math. Anal. Numér.,
30 (1996) 731–742.
brezzi91 F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods. Springer-Verlag, 1991.
brown12
J. Brown, M. G. Knepley, D. A. May, L. C. McInnes, and B. Smith,
Composable linear solvers for multiphysics,
In 2012 11th International Symposium on Parallel and Distributed Computing, (2012) 55–62
buerger21 R. Bürger, S. Kumar, D. Mora, R. Ruiz-Baier, and N. Verma, Virtual element methods for the three-field formulation of time-dependent linear poroelasticity. Adv. Comput. Math., 47 (2021) e2.
caraballo23
R. Caraballo, C. W. In, A. F. Martín, R. Ruiz-Baier,
Software used in “Robust finite element methods and solvers for the Biot-Brinkman equations in vorticity form”,
DOI: <https://doi.org/10.5281/zenodo.8085121> (2023)
carrillo19 F.J. Carrillo and I.C. Bourg, A Darcy–Brinkman–Biot approach to modeling the hydrology and mechanics of porous media containing macropores and deformable microporous regions. Water Res. Research, 55 (2019) 8096–8121.
carrillo21 F.J. Carrillo and I.C. Bourg, Capillary and viscous fracturing during drainage in porous media. Phys. Rev. E, 103 (2021) e063106.
chang90 C.L. Chang, and B.-N. Jiang, An error
analysis of least-squares finite element method of
velocity-pressure-vorticity formulation for the Stokes
problem. Comput. Methods Appl. Mech. Engrg., 84(3) (1990) 247–255.
chen20 S. Chen, Q. Hong, J. Xu, and K. Yang, Robust block preconditioners for poroelasticity. Comput. Methods Appl. Mech. Engrg., 369 (2020) e113229.
chow98 E. Chow, and M. A. Heroux, An object-oriented framework for block preconditioning. ACM Trans. Math. Softw., 24(2) (1998) 159–183.
cyr16 E.C Cyr, J. N. Shadid, and R. S. Tuminaro, Teko: A Block Preconditioning Capability with Concrete Example Applications in Navier–Stokes and MHD. SIAM J. Sci. Comput., 38(5) (2016) S307–S331.
duan03 H.-Y. Duan and G.-P. Liang, On the
velocity-pressure-vorticity least-squares mixed finite element
method for the 3D Stokes equations. SIAM J. Numer. Anal., 41(6)
(2003) 2114–2130.
dubois03 F. Dubois, M. Salaün, and S. Salmon, First vorticity-velocity-pressure numerical scheme for the Stokes
problem. Comput. Methods Appl. Mech. Engrg., 192(44–46) (2003)
4877–4907.
efendiev09 Y. Efendiev, J. Galvis, and Y. Vassilevski, Preconditioning of coupled systems and applications.
Numer. Linear Alg. Appl., 16(7–8) (2009) 899–924.
ern04 A. Ern and J.-L. Guermond,
Finite Elements I: Approximation and Interpolation.
Vol. 72 of Texts in Applied Mathematics. Springer, Cham (2021).
gatica14 G.N. Gatica, A Simple Introduction to the Mixed Finite Element Method. Theory and Applications. SpringerBriefs in Mathematics. Springer, Cham, 2014.
giraud11 L. Giraud, C. Geuzaine, and J. Dominguez, An efficient preconditioner for elasto-poroelasticity based on the pressure-correction method. Int. J. Numer. Methods Engrg., 89(10) (2011) 1139–1164.
girault V. Girault and P.A. Raviart,
Finite Element Methods for Navier–Stokes
Equations. Theory and Algorithms. Springer, Berlin (1986).
hong22 Q. Hong, J. Kraus, M. Kuchta, M. Lymbery, K.-A. Mardal and M. E. Rognes, Robust approximation of generalized Biot-Brinkman problems, J. Sci. Comput., 93 (2022) e77.
hong19
Q. Hong, J. Kraus, M. Lymbery, and F. Philo, Conservative
discretizations and parameter-robust preconditioners for Biot and
multiple-network flux-based poroelasticity models, Numer. Linear Algebra
Appl., 26 (2019) e2242.
jha15 S.K. Jha, Y Efendiev, J. Galvis, and Y. Vassilevski, Block-diagonal preconditioning for coupled systems in subsurface flow simulation. J. Comput. Phys., 299 (2015) 203–224.
ju20 G. Ju, M. Cai, J. Li, and J. Tian, Parameter-robust multiphysics algorithms for Biot model with application in brain edema simulation. Math. Comput. Simul., 177 (2020) 385–403.
kirby10 R.C. Kirby, From functional analysis to iterative methods. SIAM Rev., 52(2) (2010) 269–293.
kirby18 R.C. Kirby and L. Mitchell,
Solver Composition Across the PDE/Linear Algebra Barrier. SIAM J. Sci. Comput. 40(1) (2018) C76–C98.
kumar20
S. Kumar, R. Oyarzúa, R. Ruiz-Baier, and R. Sandilya, Conservative
discontinuous finite volume and mixed schemes for a new four-field
formulation in poroelasticity, ESAIM: Math. Model. Numer.
Anal., 54 (2020) 273–299.
lee17
J. Lee, K.-A. Mardal, and R. Winther, Parameter-robust
discretization and preconditioning of Biot's consolidation model. SIAM
J. Sci. Comput., 39 (2017) A1–A24.
lee19
J. J. Lee, E. Piersanti, K.-A. Mardal, and M. E. Rognes, A mixed
finite element method for nearly incompressible multiple-network
poroelasticity. SIAM J. Sci. Comput. 41 (2019) A722–A747.
lenarda17 P. Lenarda, M. Paggi, and R. Ruiz-Baier,
Partitioned coupling of advection-diffusion-reaction systems and Brinkman flows.
J. Comput. Phys., 344 (2017) 281–302.
liu07 H. Liu, P.R. Patil, and U. Narusawa, On Darcy-Brinkman equation: Viscous flow between two parallel plates packed with regular square arrays of cylinders. Entropy, 9 (2007) 118–131.
liu20 X. Liu, Y. Zhang, and Z. Chen, A block-diagonal preconditioner for the solution of coupled PDEs in poromechanics. Numer. Methods PDEs, 36(4) (2020) 1407–1425.
mardal11
K.-A. Mardal and R. Winther, Preconditioning discretizations of
systems of partial differential equations, Numer. Linear Alg. Appl., 18 (2011) 1–40.
mikelic15 A. Mikelic, M.F. Wheeler, and T. Wick, Phase-field modeling of a fluid-driven fracture in a
poroelastic medium. Comput. Geosci., 19(6) (2015) 1171–1195.
monk03 P. Monk, Finite Element Methods for Maxwell's Equations, Oxford University Press, New York, 2003.
orban20
D. Orban, and A. S. Siqueira,
LinearOperators.jl,
DOI: <https://doi.org/10.5281/zenodo.2559294> (2020)
oyarzua16
R. Oyarzúa and R. Ruiz-Baier, Locking-free finite element methods
for poroelasticity. SIAM J. Numer. Anal., 54 (2016) 2951–2973.
piersanti20
E. Piersanti, J.J. Lee, T. Thompson, K.-A. Mardal, and M.E. Rognes,
Parameter robust preconditioning by congruence for multiple-network
poroelasticity. SIAM J. S. Comput., 43 (2021)
B984–B1007.
qi20 W. Qi, P. Seshaiyer, and J. Wang, Finite element method with the total stress variable for Biot's consolidation model. Numer. Methods PDEs, 37(3) (2021) 2409–2428.
quarteroni A. Quarteroni, Numerical Models for Differential Problems. Springer-Verlag Milano (2009).
rajagopal07 K.R. Rajagopal, On a hierarchy of approximate models for flows of incompressible fluids through porous solids. Math. Model. Methods Appl. Sci., 17 (2007) 215–252.
rohan19 E. Rohan, J. Turjanicová, and V. Lukeš, The Biot–Darcy–Brinkman model of flow in deformable double porous media; homogenization and numerical modelling. Comput. Math. Appl., 78 (2019) 3044–3066.
ruiz22
R. Ruiz-Baier, M. Taffetani, H.D. Westermeyer, and I. Yotov, The
Biot–Stokes coupling using total pressure: formulation, analysis and
application to interfacial flow in the eye. Comput. Methods Appl.
Mech. Engrg., 389 (2022) e114384.
salaun15 M. Salaün, and S. Salmon, Low-order finite element method for the well-posed bidimensional Stokes
problem. IMA J. Numer. Anal., 35 (2015) 427–453.
verma22
N. Verma, B. Gómez-Vargas, L. M. De Oliveira Vilaca, S. Kumar, and
R. Ruiz-Baier, Well-posedness and discrete analysis for
advection-diffusion-reaction in poroelastic media, Appl. Anal., 101(14) (2022) 4914–4941.
zhang20 J. Zhang, C. Zhou, Y. Cao, and A.J. Meir, A locking free numerical approximation for quasilinear poroelasticity problems. Comput. Math. Appl., 80(16) (2020) 1538–1554.
|
http://arxiv.org/abs/2307.00965v1
|
20230703123503
|
OpenClinicalAI: An Open and Dynamic Model for Alzheimer's Disease Diagnosis
|
[
"Yunyou Huang",
"Xiaoshuang Liang",
"Xiangjiang Lu",
"Xiuxia Miao",
"Jiyue Xie",
"Wenjing Liu",
"Fan Zhang",
"Guoxin Kang",
"Li Ma",
"Suqin Tang",
"Zhifei Zhang",
"Jianfeng Zhan"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Yunyou Huang^{1,2}, Xiaoshuang Liang^{1,2}, Xiangjiang Lu^{1,2}, Xiuxia Miao^{1,2}, Jiyue Xie^{1,2}, Wenjing Liu^{1,2}, Fan Zhang^{4}, Guoxin Kang^{4}, Li Ma^{3}, Suqin Tang^{1,2}, Zhifei Zhang^{6,*} ([email protected]), Jianfeng Zhan^{4,5,*} ([email protected])
* Corresponding authors
^1 Key Lab of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University, Guilin, China
^2 Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, China
^3 Guilin Medical University, Guilin, China
^4 University of Chinese Academy of Sciences, China
^5 International Open Benchmark Council
^6 Department of Physiology and Pathophysiology, Capital Medical University, China
Although Alzheimer's disease (AD) cannot be reversed or cured, timely diagnosis can significantly reduce the burden of treatment and care. Current research on AD diagnosis models usually regards the diagnosis task as a typical classification task with two primary assumptions: 1) All target categories are known a priori; 2) The diagnostic strategy for each patient is consistent, that is, the number and type of model input data for each patient are the same. However, real-world clinical settings are open, with complexity and uncertainty in terms of both subjects and the resources of the medical institutions. This means that diagnostic models may encounter unseen disease categories and need to dynamically develop diagnostic strategies based on the subject's specific circumstances and available medical resources. Thus, the AD diagnosis task is tangled and coupled with the diagnosis strategy formulation. To promote the application of diagnostic systems in real-world clinical settings, we propose OpenClinicalAI for direct AD diagnosis in complex and uncertain clinical settings. This is the first powerful end-to-end model to dynamically formulate diagnostic strategies and provide diagnostic results based on the subject's conditions and available medical resources. OpenClinicalAI combines reciprocally coupled deep multiaction reinforcement learning (DMARL) for diagnostic strategy formulation and multicenter meta-learning (MCML) for open-set recognition. The experimental results show that OpenClinicalAI achieves better performance and fewer clinical examinations than the state-of-the-art model. Our method provides an opportunity to embed the AD diagnostic system into the current health care system to cooperate with clinicians to improve current health care.
Real-world clinical setting; Alzheimer's disease diagnosis; AI; deep learning
§ INTRODUCTION
Alzheimer's disease is an incurable disease that heavily burdens our society (the total cost of caring for individuals with AD or other dementias is estimated at $321 billion) <cit.>. Early and accurate diagnosis of AD is crucial for effectively managing the disease and has the potential to save nearly $7 trillion in medical and care costs. <cit.>. However, it is estimated that 28 million of the 36 million people with dementia worldwide have not received a timely and accurate diagnosis due to limited medical resources, availability of experts, etc. <cit.>. Artificial intelligence (AI), as one of the technologies with the most potential to improve medical services, is widely employed in AD diagnosis research <cit.>. Daniel et al. <cit.> utilized plasma metabolites as inputs for Extreme Gradient Boosting (XGBoost) and demonstrated their potential in diagnosing AD. Xin et al. <cit.> proposed an approximate rank pooling method to transform 3D nuclear magnetic resonance images (MRI) into 2D images, followed by the utilization of a 2D convolutional neural network (CNN) for AD classification. Qiu et al. <cit.> reported a multimodal diagnosis framework that used CNN to capture the critical features of MRI images and then used Categorical Boosting (CatBoost) to detect the presence of AD and cognitively normal (CN) from demographics, medical history, neuroimaging, neuropsychological testing, and functional assessments.
As Fig. <ref> (a) shows, the AD diagnosis models in previous works are designed for the closed clinical setting with the following primary assumptions: (1) all categories of subjects are known a priori (subject categories are aligned in training and test sets by inclusion and exclusion criteria); (2) the same diagnostic strategies are used to diagnose all subjects (subjects with missing data of a certain type are removed, or the missing data are imputed during preprocessing); and (3) all medical institutions are capable of executing the prescribed diagnostic strategies <cit.>. This makes the AD diagnosis task an independent, typical classification task. However, as Fig. <ref> (b) shows, the real-world clinical setting is open, with uncertainty and complexity: (1) The categories of subjects in real-world clinical settings are not all known in advance and may include unknown categories that do not appear during the development of AI models. This implies that the test set may contain subject categories not present in the training set, which transforms AD diagnosis into an open-set recognition problem rather than a conventional classification problem. (2) Each subject is unique, and there is no one-size-fits-all diagnosis strategy. This results in variations in the amount and types of data across different subjects. (3) The conditions of medical institutions vary and are not known in advance; e.g., positron emission tomography (PET) is available in some hospitals, but most hospitals in underdeveloped areas are not equipped with it. Thus, the AD diagnosis task is an open-set recognition task tangled and coupled with the formulation of diagnosis strategies.
Currently, to tackle the challenges in the real-world clinical setting, open-set recognition technology has emerged in various fields <cit.>. However, open-set recognition only considers unknown categories while overlooking the complexities of subjects and the constraints posed by medical resources. In the real-world clinical setting, this hinders the application of open-set recognition technology to AD diagnosis.
Therefore, the critical problem is how to simultaneously handle the uncertainty and complexity of subjects and medical resources in AD diagnosis within the real-world clinical setting.
In this paper, we explore the AD diagnosis task from a new perspective of both subjects and medical institutions and redefine AD diagnosis as an open, dynamic real-world clinical setting recognition problem. Specifically, clinicians first enquire about the subject's basic information and then formulate a preliminary diagnosis strategy based on the individual's condition and the available medical resources. Second, the subjects undergo examinations in accordance with the diagnostic strategy.
Third, the model merges all available information, categorizes subjects into prespecified or unknown categories, or adjusts the diagnostic strategy and returns to the second step. To
realize this approach,
as illustrated in Fig. <ref>, we propose OpenClinicalAI, an open and dynamic deep learning model to directly diagnose subjects in complex and uncertain real-world clinical settings. OpenClinicalAI comprises two tangled and coupled modules: deep multiaction reinforcement learning (DMARL) and multicenter meta-learning (MCML). MCML utilizes AutoEncoder and meta-learning techniques to diagnose subjects based on the subject's data obtained from the diagnostic strategy formulated by DMARL. It serves as an environmental simulator, providing feedback to DMARL.
DMARL is a multitask learning model that functions as an agent which dynamically adjusts the subject's diagnosis strategy and
acquires fresh
examination data based on rewards from MCML and the subject's current examination data.
To summarize, our contributions are as follows:
∙ OpenClinicalAI is the first end-to-end coupling model designed specifically for direct AD diagnosis in complex and uncertain real-world clinical settings. It dynamically develops 35 different diagnosis strategies according to different subject situations and 40 different examination abilities of medical institutions on the test set. The framework of OpenClinicalAI is domain-independent and can be extended to other diseases to promote the development of real-world clinical setting diagnosis.
∙ The newly designed DMARL enables OpenClinicalAI to dynamically adjust the diagnosis strategy according to the subject's current data, and MCML provides feedback on classification information and rewards. Its novelty lies in designing a multitask model for selecting the next clinical examinations for a subject and avoiding a step-by-step form of reinforcement learning.
∙ The novel MCML promotes open-set recognition of AD based on the diagnosis strategy from DMARL and provides disease classification information and rewards for DMARL. It uses AutoEncoders to retain more general features for open set recognition and uses clustering to divide subjects of the same category into multiple subcategories. This improves the accuracy of meta-learning algorithms in calculating subject similarity, thereby providing unknown subject recognition. In addition, it is also used as an environmental simulator to evaluate the rewards of the diagnostic strategies formulated by DMARL, and the disease classification information is used as the input of DMARL to help dynamically formulate diagnostic strategies.
∙ In the closed clinical setting, OpenClinicalAI shows a comparable performance to the current state-of-the-art model in terms of the AUC (area under the receiver operating characteristic curve) metric. At the same time, less than 10% of subjects are required to have an MRI image. In the real-world clinical setting, our model outperforms the current state-of-the-art model by: (1) an absolute increase of 11.02% and 11.48% (AD diagnosis and cognitively normal diagnosis) in AUC and 30.09% and 47.94% in sensitivity; and (2) an absolute reduction of 68.93% MRI for subjects. In addition, it has 93.96% in sensitivity for identifying unknown categories of subjects.
§ RELATED WORK
AD diagnosis model. With progress in machine learning and deep learning, AD diagnostic models have gained significant attention in the medical community. AD diagnosis studies can be divided into nonimage-based, image-based, and multimodal-based studies. Aljovic et al. <cit.> designed a linear feed-forward (FF) neural network that used biomarkers (albumin ratio, AP40, AP42, tau-total, and tau-phospho) as input to classify Alzheimer's disease. This work proved that it is possible to use biomarker data for AD discrimination. Lu et al. <cit.> trained a 3-dimensional sex classifier Inception-ResNet-V2 as a base model in transfer learning for AD diagnosis. They then made it suitable for 3-dimensional MRI inputs and achieved high accuracy in a large collection of brain MRI samples (85,721 scans from 50,876 participants). Qiu et al. <cit.> proposed an interpretable multimodal deep learning framework that used a fully convolutional network (FCN) to perform MRI image feature extraction and then used a traditional multilayer perceptron (MLP) with multimodal inputs (image and text features) to classify the disease and generate disease probability maps. Many studies have shown that the performance of multimodal models is better than that of single-modal models because different modal information can complement each other to help diagnose AD.
Open-set recognition.
Various open-set recognition (OSR) techniques have been proposed to solve the problem of unknown categories in the real world. They can be divided into discriminant and generative models <cit.>. Scheirer et al. <cit.> proposed a preliminary solution, a 1-vs.-Set machine, which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. Bendale et al. <cit.> introduced OpenMax, a new model layer that estimates the probability of an input being from an unknown class, to adapt deep networks for open-set recognition. Oza et al. <cit.> proposed a deep neural network-based model, C2AE, which uses class-conditioned autoencoders with novel training and testing methodologies. Yang et al. <cit.> proposed a model based on a generative adversarial network (GAN) to address open-set human activity recognition without manual intervention during the training process. Geng et al. <cit.> proposed a collective decision-based OSR framework (CD-OSR) by slightly modifying the hierarchical Dirichlet process (HDP). This method aimed to extend existing OSR for new class discovery while considering correlations among the testing instances. Perera et al. <cit.> utilized self-supervision and augmented the input image to learn richer features to improve separation between classes. These methods tend to use powerful generative models to mimic novel patterns.
Deep reinforcement learning.
Deep learning is already widely applied in the medical field for prognosis prediction and treatment recommendation. Saboo et al. <cit.> simulated the progression of Alzheimer's disease (AD) by integrating differential equations (DEs) and reinforcement learning (RL) with domain knowledge. DEs serve as an emulator, providing the relationships between some (but not all) factors associated with AD. The trained RL model acts as a proxy to extract the missing relationship by optimizing the objective function over time. The model uses baseline (0 year) characteristics from aggregated and real data to personalize a patient's 10-year AD progression. Quan et al. <cit.> proposed an interpretable deep reinforcement learning model to reconstruct compressed sensing MRI images. Komorowski et al. <cit.> developed an artificial intelligence (AI) clinician model using a reinforcement learning agent. This model extracts implicit knowledge from patient data based on the lifetime experience of human clinicians and analyzes numerous treatment decisions, including mostly suboptimal decisions, to learn optimal treatment strategies. Experiments have shown that, on average, the value of treatments selected by the AI clinician model is reliably higher than that of human clinicians.
Deep reinforcement learning (DRL) is a combination of deep learning and reinforcement learning algorithms that learn and optimize decision-making strategies through interactions with the environment. However, current research often treats disease diagnosis as a one-step task, where the model takes patient information as input and outputs a classification. This approach has led to a neglect of the potential application of DRL in diagnostic tasks.
§ PROBLEM FORMULATION
In this work, the diagnosis of AD involves dynamically developing diagnostic strategies based on the subject's situation and the medical institution, ultimately determining whether the subject belongs to the unknown category. To address the dynamic interaction process between subjects and medical institutions, we propose a reinforcement learning model, namely, OpenClinicalAI, to solve this problem. The detailed definition of the AD diagnosis problem is as follows:
Agent: The agent perceives the state of the external environment and the rewards, and uses this information to learn and make decisions. In this problem, we employ the DMARL model as the agent, which takes an observation of the environment as input and recommends the next clinical examinations based on the subject's conditions and the medical institution.
Environment: The environment refers to all objects external to the agent; its observation is influenced by the agent's actions, and it provides corresponding rewards to the agent. In our problem, the environment encompasses three components: the MCML model, the subjects, and the medical institutions. Medical institutions conduct clinical examinations on subjects according to the action suggested by the agent. The data generated by these clinical examinations serve as input for the MCML model, which generates intermediate diagnostic results. The environment provides feedback to the agent using the intermediate diagnostic results and the latest clinical data while simultaneously computing the corresponding reward.
Observation: The agent's observation of the environment includes the intermediate diagnostic results from MCML and the currently available clinical data of the subject.
Action: The agent interacts with the environment via actions, where the actions cover 12 types of clinical examinations. An action may consist of one or more clinical examinations.
Reward: The reward is the bonus that an agent receives once it takes an action in the environment at the given time step t. In this paper, the reward is measured as the degree of improvement in the probability of the intermediate diagnostic result obtained by the MCML model after taking an action.
Discount factor (γ): The discount factor measures the importance of future rewards to the agent in the current state. In this paper, γ represents the set of hyperparameters of the OpenClinicalAI model; during gradient descent, the model adjusts these hyperparameters to obtain the maximum expected reward.
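As a concrete illustration of how these components interact, the following Python sketch rolls out one diagnostic trajectory during training; the agent, mcml, subject, and institution objects and their methods are hypothetical placeholders rather than the actual OpenClinicalAI interfaces.

def run_episode(agent, mcml, subject, institution, max_steps=12):
    """Roll out one diagnostic trajectory tau for a single training subject."""
    data = {"e1": subject.baseline_data()}       # s_{t_0}: the basic information d_1
    trajectory = []
    for _ in range(max_steps):
        y_pred = mcml.predict(data)              # intermediate diagnosis, e.g. {"AD": p, "CN": p, "unknown": p}
        observation = (dict(data), y_pred)
        action = agent.act(observation, institution.available_exams())   # subset of e_2..e_13
        if not action:                           # the agent commits to the current diagnosis
            break
        data.update(institution.perform(subject, action))                # new examination data da_{t_l}
        y_new = mcml.predict(data)
        reward = y_new[subject.true_label] - y_pred[subject.true_label]  # improvement on the true category
        trajectory.append((observation, action, reward))
    return trajectory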
We use E = {e_1, e_2, ⋯, e_13} to represent the 13 clinical examinations performed for Alzheimer's disease, D = {d_1, d_2, ⋯, d_13} to represent the data obtained by a subject undergoing the corresponding clinical examination, and T = {t_0, t_1, ⋯, t_l} to represent the time-step series of the model, where l is determined by the model according to the conditions of the subject and the medical institution. S = {s_i}_i=1^n denotes the data of the n subjects contained in the dataset, where s_t_l^(i) = {d_k}^m represents the m clinical examination data that subject_i has accumulated up to time step t_l, with k ∈ [1, 13], m ∈ [1, 13], len({d_k}^m) = m, and {d_k}^m ⊆ D. In particular, s_t_0^(i) represents the original data of subject_i. a_t_l = {e_j}^u denotes the next clinical examinations recommended by the model (containing u examinations) at time step t_l, where j ∈ [2, 13], u ∈ [1, 12], len({e_j}^u) = u, and {e_j}^u ⊆ E. da_t_l = {d_j}^u denotes the clinical examination data obtained by the subject after executing a_t_l. The subject's data are then updated as s_t_l+1^(i) = s_t_l^(i) ∪ da_t_l^(i).
In particular, we call the set of actions selected by the model in time steps t_1 to t_l a diagnostic strategy ds = {a_t_1 ∪ a_t_2 ∪ ⋯ ∪ a_t_l}, and the set of diagnostic strategies is represented by DS = {ds_q | ds_q = {e_j}^h, h ≤ m}, |DS| = Q, where Q represents the number of diagnostic strategies generated from combinations of the m clinical examinations contained in the subject data, which is determined by the model based on the subject and the medical institution. We use s^(i)_ds_q to represent the data obtained by subject_i after executing the diagnostic strategy ds_q. When s^(i)_ds_q is given as input to the MCML model, it yields the corresponding diagnosis y^(i)_pred_ds_q, and when s_t_l^(i) is given as input to the MCML model, it yields the corresponding intermediate diagnosis y^(i)_pred_s_t_l. Thus, the observation at the lth time step is {s_t_l^(i), y^(i)_pred_s_t_l}, and the reward after executing action a^(i)_t_l is r_t_l^(i).
Based on the aforementioned definitions, a trajectory (τ) is the sequence of states, actions, and rewards generated by the model's interaction with subject_i over l time steps: τ = {((s_t_1^(i), y_pred_t_1^(i)), a_t_1^(i), r_t_1^(i)), ((s_t_2^(i), y_pred_t_2^(i)), a_t_2^(i), r_t_2^(i)), ⋯, ((s_t_l^(i), y_pred_t_l^(i)), a_t_l^(i), r_t_l^(i))}
§ METHODOLOGY
§.§ Data preparation
For each subject, if there is more than one visit, each visit is considered to be an independent sample. Multiple categories of data are generated at each visit based on the diagnostic strategy, so each sample usually contains multiple types of data. For medical images, we first convert the data from DICOM format to NIFTI format using the dcm2nii library. Then, we perform image registration using the ANTs library <cit.>. Next, we convert the 3D image into 2D slices and transform the image from grayscale to RGB. Finally, we utilize a pretrained model called DenseNet201 to extract features from the 2D slices <cit.>. For genetic data, we extract 70 single nucleotide polymorphisms (SNPs) that are highly related to AD and represent each SNP using a one-hot encoding method <cit.>. To accommodate the varying dimensions of each data category, the number of data categories included in each visit, and the number of past visits for each subject, we use a unified data representation framework, as illustrated in Supplementary Figure S1. In this framework, we use an array with a shape of 1×2090 to represent each examination category in the subject's visit. The shape of our data is n×2090, where n represents the number of data categories for the subject.
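A simplified sketch of the imaging and genetic feature extraction is given below. It assumes the generic ImageNet-pretrained DenseNet201 from Keras and genotypes coded 0/1/2, and it omits slice selection, ANTs registration, and the exact packing into the 1×2090 representation.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.densenet import DenseNet201, preprocess_input

# Generic ImageNet-pretrained backbone used as a fixed feature extractor.
cnn = DenseNet201(include_top=False, weights="imagenet", pooling="avg")

def slice_features(slice_2d):
    """Map one (H, W) grayscale MRI slice to a 1920-dim DenseNet201 feature vector."""
    rgb = np.repeat(slice_2d[..., None], 3, axis=-1)            # grayscale -> RGB
    rgb = tf.image.resize(rgb, (224, 224)).numpy()
    x = preprocess_input(rgb[None].astype("float32"))
    return cnn.predict(x, verbose=0)[0]

def snp_features(genotypes, n_snps=70, n_alleles=3):
    """One-hot encode the 70 AD-related SNPs (genotype coded as 0, 1, or 2)."""
    onehot = np.zeros((n_snps, n_alleles), dtype=np.float32)
    onehot[np.arange(n_snps), genotypes] = 1.0
    return onehot.reshape(-1)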
§.§ OpenClinicalAI
As shown in Fig. <ref>, OpenClinicalAI draws on the status of subjects and medical institutions to directly formulate diagnostic strategies and provide accurate diagnostic results. This approach enables the system to effectively handle the inherent uncertainty and complexity of real-world clinical settings. The model is composed of two coupled elements: (1) DMARL (refer to Section <ref>) combines an encoder, multitask learning, and deep reinforcement learning to dynamically adapt the subject's diagnostic strategy based on the specific circumstances of the hospital, the subject, and feedback from MCML. (2) MCML (refer to Section <ref>) takes the data generated by DMARL's predictions as input, employs an AutoEncoder to extract more general features, calculates more precise sample-to-class-center distances, and integrates meta-learning techniques to facilitate AD identification in open clinical environments. In summary, the objective of OpenClinicalAI can be formulated as follows:
L_r(W) = ∑_i=1^n [α l1(ŷ_i, y_i) + β l2(W) + λ l3(X_i, W) + μ l4(X_i, X̂_i)]
where ŷ_i and y_i indicate the predicted class probability and true label of the subject, respectively. Similarly, X_i and X̂_i denote the subject's original input features and reconstructed features, respectively. α, β, λ, and μ correspond to the hyperparameters of each loss, while W signifies the weight of the network. Specifically, l1 is the softmax cross-entropy function, l2 represents the L2 regularized loss function, l3 utilizes uncertainty to weigh losses in a multitask learning scenario, and l4 captures the reconstruction loss of X_i.
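The following TensorFlow sketch shows one way the combined objective could be assembled; the sub-losses follow the definitions given in the next two subsections (binary cross-entropy for l1, L2 regularization for l2, the uncertainty-weighted task loss for l3, and a squared reconstruction error for l4), and the default hyperparameter values are placeholders rather than the values used by the authors.

import tensorflow as tf

def total_loss(y_true, y_pred, x, x_recon, weights, l3_value,
               alpha=1.0, beta=1e-4, lam=1.0, mu=1.0):
    """Assemble L_r(W) for one batch from its four components."""
    l1 = tf.keras.losses.binary_crossentropy(y_true, y_pred)     # classification loss
    l2 = tf.add_n([tf.nn.l2_loss(w) for w in weights])           # L2 regularization of the network weights
    l4 = tf.reduce_sum(tf.square(x - x_recon), axis=-1)          # AutoEncoder reconstruction loss
    return alpha * l1 + beta * l2 + lam * l3_value + mu * l4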
§.§.§ MCML for AD recognition and environment simulation
The architecture of MCML is inspired by Generative Discriminative Feature Representations <cit.> and meta-learning-based OpenMax <cit.>. As depicted in Fig. <ref>, MCML combines the advantages of the previous two works. In addition, (1) MCML further improves the retention of generalized features by fostering information interaction between each layer of the decoder and classifier. This mitigates the risk of the classifier solely focusing on the most relevant features for classification. (2) MCML utilizes k-means clustering to partition subjects of the same category into multiple subtypes. It then calculates the distance between subjects and the center of the nearest subtype. This approach reduces the likelihood of atypical subjects being misclassified as unknown categories due to their substantial distance from the category center. Specifically, MCML comprises three components: Encoder_AD, Decoder, and Classifier. These components classify subjects into AD, CN, and unknown categories based on specific diagnostic strategies. The Decoder is structured as a mirror image of the Encoder_AD to enable the model to capture more subject-specific characteristics associated with AD and CN classification. The Classifier consists of three dense layers and a SoftMax/OpenMax layer which facilitates subject classification in the open clinical setting. In summary, the loss function for the MCML module is formulated as follows:
l1(ŷ_i, y_i) = -[y_i log(p_i) + (1 - y_i) log(1 - p_i)]
l4(X_i, X̂_i) = ||X_i - X̂_i||_2^2
ŷ_i = f(W(X_i))
ŷ_i,un = 1 - ∑_j=1^2 ŷ_i[j]
where p_i denotes the probability assigned to the subject's cognitive state prediction, and the cross entropy loss l1 is employed to quantify the disparity between the subject's current cognitive state, determined after an examination, and the cognitive state predicted by the MCML model. The score modifier f is based on EVT <cit.>, and ŷ_i,un represents the probability that the sample belongs to an unknown category.
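A minimal sketch of an OpenMax-style score modifier is shown below: known-class activations are attenuated by EVT-fitted Weibull weights, and the removed mass becomes the unknown-class score. The Weibull CDFs and class-center distances are assumed to be precomputed; this illustrates the mechanism rather than the authors' exact implementation.

import numpy as np

def openmax_probs(activations, dist_to_center, weibull_cdf):
    """activations: length-2 scores for (AD, CN); weibull_cdf: per-class callables fitted with EVT."""
    w = np.array([1.0 - weibull_cdf[c](dist_to_center[c]) for c in range(2)])
    revised = activations * w                    # attenuated known-class evidence
    unknown = np.sum(activations * (1.0 - w))    # evidence shifted to the unknown category
    scores = np.append(revised, unknown)
    e = np.exp(scores - scores.max())            # softmax over (AD, CN, unknown)
    p = e / e.sum()
    return {"AD": p[0], "CN": p[1], "unknown": p[2]}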
§.§.§ DMARL for diagnostic strategy development
As depicted in Fig. <ref>, the DMARL module consists of an encoder, Encoder_DS, and a predictor, Predictor. As an agent, DMARL takes the observation of the subject and the feedback of MCML as inputs. To effectively incorporate variable-length clinical data based on different diagnostic strategies, we employ a bidirectional long short-term memory (B-LSTM) network in the encoding pathway. Encoder_DS comprises three layers of B-LSTM units. The primary objective of DMARL is to utilize existing medical resources, subject observations, and the classification confidence from MCML to predict the subsequent clinical examinations for each subject and dynamically formulate an optimal diagnostic strategy. To integrate the output of Encoder_DS with the classification confidence from MCML, Predictor consists of 13 dense layers. Furthermore, to determine the appropriate clinical examination for each subject, the model incorporates 12 independent sigmoid classifiers. In addition, due to the lack of labels for the next clinical examination, DMARL will be trained according to the rewards of MCML. Thus, the loss function for the DMARL module is expressed as follows:
l3(X_i, W) = ∑_i=1^13 [ 1/(2δ_i^2) l1(W) + log δ_i ]
The sum of l3 losses is calculated by evaluating the 13 examination-recommendation subtasks (e), where δ_i is an observation noise scalar for the output of the ith examination <cit.>.
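A compact Keras sketch of this uncertainty weighting is given below; each subtask receives a learnable noise scale, and log(δ²) is learned instead of δ for numerical stability. Layer and variable names are illustrative.

import tensorflow as tf

class UncertaintyWeightedLoss(tf.keras.layers.Layer):
    """Combine per-task losses with learnable homoscedastic-uncertainty weights."""
    def __init__(self, n_tasks=13):
        super().__init__()
        self.log_var = self.add_weight(name="log_var", shape=(n_tasks,),
                                       initializer="zeros", trainable=True)

    def call(self, task_losses):                  # task_losses: tensor of shape (n_tasks,)
        precision = tf.exp(-self.log_var)         # 1 / delta_i^2
        return tf.reduce_sum(0.5 * precision * task_losses + 0.5 * self.log_var)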
§.§ Reward of the diagnosis strategy
Although there have been significant efforts to improve the interpretability and internal logic of deep learning, understanding the behavior of deep learning models remains challenging <cit.>. We do not know whether the diagnosis strategy of the AI model needs to be consistent with that of a human expert. Therefore, in this work, we train the DMARL module based on the reward generated by MCML. The ultimate goal of OpenClinicalAI is to accurately identify AD patients using MCML. The subsequent examination for each subject is determined by whether it leads to a higher predicted probability for the correct category and lower predicted probabilities for the other categories. The reward of MCML is calculated using Algorithm <ref>.
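The reward can be sketched as follows: an action is scored by how much it raises the MCML probability of the subject's true category and lowers the probabilities of the remaining categories. The function and the example values are illustrative and do not reproduce Algorithm <ref> exactly.

def reward(y_before, y_after, true_label):
    """Score one action by the change it induces in the MCML class probabilities."""
    gain_true = y_after[true_label] - y_before[true_label]
    drop_others = sum(y_before[c] - y_after[c] for c in y_after if c != true_label)
    return gain_true + drop_others

# Example: an examination that raises P(AD) from 0.55 to 0.80 receives a positive reward.
r = reward({"AD": 0.55, "CN": 0.30, "unknown": 0.15},
           {"AD": 0.80, "CN": 0.12, "unknown": 0.08},
           "AD")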
§.§ Model training
The training process is divided into two stages. In the first stage, the procedure is as follows: (1) For each subject_i, the diagnostic strategy set DS is generated according to the m examinations contained in the data s^(i) = {d_k}^m. It should be noted that the types and number of clinical examinations are not the same between subjects since the diagnosis strategy changes dynamically according to different subjects and available medical resources. (2) Every s^(i)_ds_q ⊆ S^(i) is considered to be an independent sample and forms the dataset D_diagnosis = {<s^(i)_ds_q, Y^(i)>}, where Y^(i) represents the true diagnostic category label of subject_i. (3) The MCML model is trained and tested on D_diagnosis, and the intermediate diagnosis result Pred = {y_pred_ds_q, ds_q ∈ DS} is obtained. In the second stage, the process is as follows: (1) For every (s^(i), y_pred^(i)), we obtain the reward r^(i) and a^(i) by Algorithm <ref> and form the dataset D_examination = {((s_t_1^(i), y_pred_t_1^(i)), a_t_1^(i), r_t_1^(i)), ((s_t_2^(i), y_pred_t_2^(i)), a_t_2^(i), r_t_2^(i)), ⋯, ((s_t_l^(i), y_pred_t_l^(i)), a_t_l^(i), r_t_l^(i))}. (2) We then train and test the DMARL on D_examination.
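Stage one can be illustrated with the following sketch, which enumerates every combination of the examinations available for one visit as a candidate diagnostic strategy ds_q; variable names are illustrative.

from itertools import combinations

def diagnostic_strategies(exam_data):
    """exam_data: dict {exam_id: data} for one visit -> list of data subsets (one per strategy)."""
    exams = sorted(exam_data)
    strategies = []
    for h in range(1, len(exams) + 1):
        for subset in combinations(exams, h):
            strategies.append({e: exam_data[e] for e in subset})
    return strategies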
§.§ Prediction
To deliver the final diagnosis result, it is imperative to dynamically adjust the diagnosis strategy. The prediction process is delineated in Algorithm <ref>.
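The prediction loop can be sketched as follows: the model keeps requesting the examinations recommended by DMARL, provided the institution can perform them, until MCML reaches the decision threshold or no further examination is possible, in which case the subject is reported as unknown. The 0.95 decision threshold matches the operating point reported in the results, while the object interfaces are illustrative.

def diagnose(agent, mcml, subject, institution, threshold=0.95):
    """Dynamically extend the diagnostic strategy until a confident decision is reached."""
    data = {"e1": subject.baseline_data()}                  # basic information is always available
    while True:
        probs = mcml.predict(data)                          # {"AD": p, "CN": p, "unknown": p}
        label, p = max(probs.items(), key=lambda kv: kv[1])
        if label != "unknown" and p >= threshold:
            return label                                    # confident AD or CN diagnosis
        action = agent.act((data, probs), institution.available_exams())
        feasible = [e for e in action if e not in data]
        if not feasible:
            return "unknown"                                # refer the subject to a senior clinician
        data.update(institution.perform(subject, feasible))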
§ RESULTS
§.§ Human subjects
Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (<http://adni.loni.usc.edu>). We included 2,127 subjects from ADNI 1, ADNI GO, ADNI 2, and ADNI 3 for 9,593 visits based on data availability. Detailed information can be found in Section S1. The characteristics of the subjects are shown in Table <ref>. All the models will be verified in both a closed clinical setting and a real-world clinical setting. The configurations of the closed clinical setting and real-world clinical setting are as follows:
§.§.§ Closed world setting
In the closed world setting, 85% of AD and CN subjects were used for the training set, 5% of AD and CN subjects were used for the validation set, and 20% of AD and CN subjects were used for the test set.
§.§.§ Real world setting
A total of 2,127 subjects with 9,593 visits were included in our work. A subject may require different categories of examination during a visit, and every combination of those examinations represents a diagnostic strategy. Thus, 443,795 strategies were generated from the subject data. These AD and CN subjects were randomly assigned to the training, validation, and test sets. The training set contained 1,025 subjects with 3,986 visits and generated 180,682 strategies. In the training set, 587 subjects with 1,781 visits were AD and generated 80,022 strategies, and 466 subjects with 2,205 visits were CN and generated 100,660 strategies. The validation set contained 73 subjects with 254 visits and generated 11,898 strategies. In the validation set, 44 subjects with 127 visits were AD and generated 6,008 strategies, and 31 subjects with 127 visits were CN and generated 5,890 strategies. The test set contained 1,460 subjects with 5,353 visits. In the test set, 109 subjects with 305 visits were AD, 92 subjects with 411 visits were CN, 1,082 subjects with 4,357 visits were MCI (mild cognitive impairment), and 280 subjects with 280 visits were SMC (significant memory concern). Notably, the dataset contains multiple visits for a subject's progression from CN to AD. We note that the absence of examination data can simulate a medical institution's inability to perform that examination; it is logically equivalent to missed examinations in medical institutions. All subjects with labels containing at least one of the above categories of information were considered in this study.
§.§ Comparison methods
We validated the effectiveness of OpenClinicalAI by comparing its performance with recent work: image-based models (DSA-3D-CNN (2016) <cit.>, VoxCNN-ResNet (2017) <cit.>, CNN-LRP (2019) <cit.>, Dynamic-image-VGG (2020) <cit.>, Ncommon-MRI (2022) <cit.>, DenseNet-XGBoost) and multimodal input models (FCN-MLP (2020) <cit.>). In addition, the transfer learning-based model DenseNet-XGBoost was the previous state-of-the-art baseline model, since among recent AI diagnosis studies, the transfer learning framework with a pretrained model achieved state-of-the-art performance in many diagnosis tasks based on medical images <cit.>.
§.§ Experimental setup
The model was optimized using mini-batch stochastic gradient descent with Adam and a base learning rate of 0.0005 <cit.>. All comparison models were constructed and trained according to the needs of AD diagnosis tasks and their official codes under the same settings. The experiments were conducted on a Linux server equipped with Tesla P40 and Tesla P100 GPUs.
§.§ The performance in the closed clinical setting
As shown in Table <ref> and Fig. <ref> (a), in general, the performance of multimodal models (e.g., FCN-MLP with an AUC score of 97.28% [95% CI 96.71%-97.81%] and an accuracy score of 90.72% [95% CI 89.56%-91.84%]) is better than that of single-modal models (e.g., CNN-LRP with an AUC score of 93.21% [95% CI 92.11%-94.16%] and an accuracy score of 82.36% [95% CI 80.72%-83.76%]). OpenClinicalAI achieves state-of-the-art performance with an AUC score of 99.50% [95% CI 99.11%-99.80%] and an accuracy score of 96.52% [95% CI 95.76%-97.20%] in the closed setting. Considering the confidence intervals, OpenClinicalAI does not show significant improvement compared with other multimodal-based models. This indicates that the task of diagnosing AD in a closed clinical setting is not very challenging, and there is not much room for improvement <cit.>.
The essential improvement from the state-of-the-art model to OpenClinicalAI is that the latter can dynamically develop personalized diagnosis strategies according to specific subjects and medical institutions. As shown in Table <ref> and Fig. <ref> (b), less than 10% of the subjects require a nuclear magnetic resonance scan, and most of the subjects only require less demanding examination, such as cognitive examination. We conclude that OpenClinicalAI can avoid unnecessary examinations for subjects and is suitable for medical institutions with varying examination facilities [Different hospitals have various clinical settings, such as community hospitals without nuclear magnetic resonance machines and large hospitals with multiple facilities]. Of the 716 samples in the test set, only 594 samples had MRI data, and the remaining 122 samples were simply discarded because they could not be used as inputs to the corresponding MRI model.
§.§ The performance of AD diagnosis in the real-world clinical setting
To apply the model to the real-world clinical setting, the model trained in a closed clinical setting usually needs to set a threshold or add OpenMax or use the generated pattern to identify samples of unknown categories <cit.>. As shown in Table <ref> and Fig. <ref>, in general, the performance of all models has declined significantly (AUC score decline for AD was in the range of [4.7%-14.6%], accompanied by a very low sensitivity to unknown), but the multimodal model is still better than the single-modal model. OpenClinicalAI achieves state-of-the-art performance (AUC score to AD reaching 95.02% [95% CI 93.04%-96.62%], sensitivity to unknown reaching 93.96% [95% CI 92.90%-94.92%]). In addition, except for our models, all models have high recognition accuracy for samples of known categories and significantly low recognition accuracy for samples of unknown categories (for example, the sensitivity of FCN-MLP-Thr to AD is 91.93% [95% CI 86.79%-96.15%], the sensitivity to CN is 90.00% [95% CI 85.11%-94.08%], and the sensitivity to unknown was 14.64% [95% CI 13.22%-16.09%]), or vice versa (the sensitivity of DenseNet-XGBoost-Thr to unknown is 88.88% [95% CI 87.53%-90.18%]).
Compared to the state-of-the-art model, OpenClinicalAI demonstrates a significant improvement in the AUC of identification of AD subjects (+2.47%) and the AUC of identification of CN subjects (+11.48%). It is worth noting that, as shown in Fig. <ref> (a), OpenClinicalAI has a substantial improvement in the sensitivity of AD, CN, and unknown operating points (the sensitivity of OpenClinicalAI to unknown is 93.96% [95% CI 92.90%-94.92%]).
For a state-of-the-art model, such as DenseNet-XGBoost, the sensitivity of known (AD and CN) subjects is low, the sensitivity of AD is just 54.83%, and the sensitivity of CN is just 33.33%. This indicates that most known subjects will be marked as unknown and sent to a clinician for diagnosis. Moreover, the sensitivity of unknown subjects is 88.88%, meaning 11.12% of unknown subjects will be misdiagnosed. In addition, the DenseNet-XGBoost model requires that every subject has a nuclear magnetic resonance scan, and hence every medical institution that deploys the baseline model must be equipped with a nuclear magnetic resonance apparatus.
For OpenClinicalAI-G, which does not have an OpenMax mechanism but instead uses a generative-discriminative mechanism, the sensitivity for known (AD and CN) subjects is as good as that of OpenClinicalAI with an OpenMax mechanism <cit.> (the sensitivity of OpenClinicalAI-G to AD is 85.54% [95% CI 83.30%-87.70%], and the sensitivity to CN is 82.65% [95% CI 80.05%-85.30%]). In contrast, the sensitivity of unknown subjects is much worse than that of OpenClinicalAI with an OpenMax mechanism (the sensitivity of OpenClinicalAI-G to unknown subjects is 34.33% [95% CI 32.21%-36.30%]). This means that most unknown subjects will be misdiagnosed, which is unacceptable in real-world settings. The generative model of open set recognition is not very helpful for AD diagnosis in a real-world clinical setting. In addition, as shown in Table <ref>, the combination of the OpenMax mechanism and traditional model cannot help AD diagnosis in a real-world clinical setting.
In contrast, OpenClinicalAI diagnoses most of the known (AD and CN) subjects correctly, marks most of the rest as unknown, and sends them to the clinician for further diagnosis. In addition, most unknown subjects are correctly identified, and the misdiagnosis of unknown subjects is only 6.04%. This means that OpenClinicalAI has significant potential application value for implementation in real-world settings. In addition, as shown in Fig. <ref> (i), similar to the behaviors of OpenClinicalAI in the closed setting, OpenClinicalAI can develop and adjust diagnosis strategies for every subject dynamically in the real-world setting. Only a small portion of subjects require a nuclear magnetic resonance scan and more costly (both in terms of economy and potential harm) examinations.
§.§ Development of diagnostic strategies in the real-world clinical setting
Unlike the current mainstream AD diagnostic models in which all subjects require a nuclear magnetic resonance scan, OpenClinicalAI develops personalized diagnostic strategies for each subject. For every subject, first, it will acquire the base information of the subject. Second, it will give a final diagnosis or receive other examination information according to the current data of the subject. Third, the previous step is repeated until the diagnosis is finalized or there is no further examination.
As shown in Fig. <ref> (a), the diagnosis strategies of subjects are not the same (as shown in Supplementary Table S2). Our model dynamically develops 35 diagnosis strategies according to different subject situations and all 40 examination abilities of medical institutions in the test set (as shown in Supplementary Table S3). For the known (AD and CN) subjects, as shown in Fig. <ref> (b) and (c), most of the subjects require low-cost examinations (such as cognition examination (CE)). A small portion of subjects require high-cost examinations (such as cerebral spinal fluid analysis (CSF)). For unknown subjects, as shown in Fig. <ref> (d), different from the diagnosis of known (AD and CN) subjects, identifying unknown subjects is more complex and more dependent on high-cost examinations. The reason is that according to the mechanism of OpenClinicalAI, it will do its best to distinguish whether the subject belongs to the known categories. When it fails, it will mark the subject as unknown. This means that the unknown subject will undergo more examinations. The details of the high-cost examination requirements are as follows:
(1) 33.94% of unknown subjects require a nuclear magnetic resonance scan (that of the known subject is 12.43%).
(2) 13.95% of unknown subjects require a positron emission computed tomography scan with 18-FDG (that of the known subject is 4.75%).
(3) 8.67% of unknown subjects require a positron emission computed tomography scan with AV45 (that of the known subject is 5.87%).
(4) 9.38% of unknown subjects require a gene analysis (that of the known subject is 1.96%).
(5) 5.13% of unknown subjects require a cerebral spinal fluid analysis (that of the known subject is 0.28%).
§.§ Potential clinical applications
OpenClinicalAI enables the AD diagnosis system to be implemented in uncertain and complex clinical settings thereby reducing the workload of AD diagnosis and minimizing the cost to subjects.
To identify the known (AD and CN) subjects with high confidence, the operating point of OpenClinicalAI runs with a high decision threshold (0.95). For the test set, OpenClinicalAI achieves an accuracy value of 92.47%, an AD sensitivity value of 84.92%, and a CN sensitivity value of 81.27% while retaining an unknown sensitivity value of 93.96%. In addition, it can cooperate with the senior clinician to identify the rest of the known subjects, which are not marked as any known kinds (AD or CN). In this work, 15.08% of AD subjects and 18.73% of CN subjects are marked as unknown and sent to senior clinicians for diagnosis. This is significant for undeveloped areas, since it is a promising way to connect developed and undeveloped areas to reduce workload, improve overall medical services, and promote medical equity.
To minimize the subject cost and maximize the subject benefit, our method dynamically develops personalized diagnosis strategies for the subject according to the subject's situation and existing medical conditions.
OpenClinicalAI judges whether it can finalize the subject's diagnosis according to the currently obtained information of subjects. If the current data are insufficient to establish a high confidence diagnosis, it will provide recommendations for the most appropriate next steps. This approach effectively tackles the issue of overtesting, resulting in reduced costs for subjects while maximizing the benefits they receive. For the test set, 35 different diagnosis strategies are applied to the subject by OpenClinicalAI (as shown in Table S2). The details of the high-cost examination are as follows:
(1) 31.07% of subjects require a nuclear magnetic resonance scan.
(2) 12.72% of subjects require a positron emission computed tomography scan with 18-FDG.
(3) 8.29% of subjects require a positron emission computed tomography scan with AV45.
(4) 8.39% of subjects require a gene analysis.
(5) 4.48% of subjects require a cerebral spinal fluid analysis.
For the medical institution, before the system recommends an examination for a subject, OpenClinicalAI will inquire whether the medical institution can execute the examination. Suppose the medical institution cannot perform the examination. In this case, OpenClinicalAI will recommend other examinations until the current information of the subject is enough to support it to make a diagnosis or until all common examinations have been suggested and the subject is marked as unknown. This enables OpenClinicalAI to be deployed in different medical institutions with varying examination capabilities. In this study, OpenClinicalAI conducted subject diagnoses for 40 examination conditions that could potentially be encountered in a health care facility (Table S3). Additionally, OpenClinicalAI made 14,654 adjustments to diagnostic strategies for subjects in the test set who lacked the necessary information to receive examination recommendations. This suggests that the medical facility may have been incapable of performing the recommended examinations.
§ CONCLUSION
After comparing the performance of state-of-the-art models for AD diagnosis in both closed clinical and real-world settings, we noticed that the models that performed exceptionally well in the closed clinical setting did not maintain the same level of effectiveness in the real-world setting. This suggests it is time to switch attention from algorithmic research in closed clinical settings to systematic study in real-world settings while focusing on the challenge of tackling the uncertainty and complexity of real-world settings. In this work, we have proposed a novel open, dynamic machine learning framework to allow the model to directly address uncertainty and complexity in the real-world setting. The resulting AD diagnostic system demonstrates great potential to be implemented in real-world settings with different medical environments to reduce the workload of AD diagnosis and minimize the cost to the subject.
Although many AI diagnostic systems have been proposed, how to embed these systems into the current health care system to improve medical services remains an open issue <cit.>.
OpenClinicalAI provides a reasonable way to embed the AI system into the current health care system. OpenClinicalAI can collaborate with clinicians to improve the quality of clinical service, especially in undeveloped areas. On the one hand, OpenClinicalAI can directly deal with the diagnosis task in an uncertain and complex real-world setting. On the other hand, OpenClinicalAI can diagnose typical patients of known categories while sending challenging or atypical patients of known categories to clinicians for diagnosis. Although AI technology is different from traditional statistics, the model of the AI system still learns patterns from training data. The model can easily learn patterns from typical patients, but it can be challenging to learn patterns for atypical patients. Thus, atypical and unknown patients especially need to be treated by clinicians. In this work, most of the known subjects are diagnosed by OpenClinicalAI, and the rest are marked as unknown and sent to the senior clinician.
Although OpenClinicalAI is promising for impacting future research on the diagnosis system, several limitations remain. First, prospective clinical studies of the diagnosis of Alzheimer's disease will be required to prove the effectiveness of our system. Second, the data collection and processing are required to follow the standards of the ADNI.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Yunyou Huang: Conceptualized, Methodology, Model design, Coding, Data curation, Writing the original draft. Xiaoshuang Liang: Writing review, Data curation. Suqin Tang: Writing review, Data curation. Li Ma: Writing review, Data curation. Fan Zhang: Data curation. Fan Zhang: Data curation. Xiuxia Miao: Data curation, Software. Xiangjiang Lu: Data curation, Software. Jiyue Xie: Data curation, Software. Zhifei Zhang and Jianfeng Zhan: Supervision, Conceptualization, Funding acquisition, Project administration, Writing review & editing.
§ DECLARATION OF COMPETING INTERESTS
The authors declare no competing financial interest.
§ DATA AVAILABILITY
The data from the Alzheimer's Disease Neuroimaging Initiative were used under license for the current study. Applications for access to the dataset can be made at <http://adni.loni.usc.edu/data-samples/access-data/>. All original code has been deposited at the website <https://www.benchcouncil.org/>BenchCouncil and is publicly available when this article is published.
§ ACKNOWLEDGMENT
We thank Weibo Pan and Fang Li for downloading the raw datasets from the Alzheimer's Disease Neuroimaging Initiative. This work is supported by the Standardization Research Project of Chinese Academy of Sciences (No. BZ201800001 to J.Z. ), the Project of Guangxi Science and Technology (No. GuiKeAD20297004 to Y.H. ), and the National Natural Science Foundation of China (No. 61967002 and No. U21A20474 to S.T. ).
Floating-base manipulation on zero-perturbation manifolds
Brian A. Bittner, Jason Reid, Kevin C. Wolfe
==========================================================
To achieve high-dexterity motion planning on floating-base systems, the base dynamics induced by arm motions must be treated carefully.
In general, it is a significant challenge to establish a fixed-base frame during tasking due to forces and torques on the base that arise directly from arm motions (e.g. arm drag in low Reynolds environments and arm momentum in high Reynolds environments).
While thrusters can in theory be used to regulate the vehicle pose, this is often insufficient to establish a stable pose for precise tasking, whether due to underactuation, modeling inaccuracy, suboptimal control parameters, or insufficient power.
We propose a solution that asks the thrusters to do less high bandwidth perturbation correction by planning arm motions that induce zero perturbation on the base.
We are able to cast our motion planner as a nonholonomic rapidly-exploring random tree (RRT) by representing the floating-base dynamics as pfaffian constraints on joint velocity. These constraints guide the manipulators to move on zero-perturbation manifolds (which inhabit a subspace of the tangent space of the internal configuration space).
To invoke this representation (termed a perturbation map) we assume the body velocity (perturbation) of the base to be a joint-defined linear mapping of joint velocity and describe situations where this assumption is realistic (including underwater, aerial, and orbital environments).
The core insight of this work is that when perturbation of the floating-base has affine structure with respect to joint velocity, it provides the system a class of kinematic reduction that permits the use of sample-based motion planners (specifically a nonholonomic RRT).
We show that this allows rapid, exploration-geared motion planning for high degree of freedom systems in obstacle rich environments, even on floating-base systems with nontrivial dynamics.
§ INTRODUCTION
Floating-base manipulation platforms offer the potential to perform critical tasking in underwater <cit.>, aerial <cit.>, and orbital <cit.> environments.
While manipulation has succeeded in high complexity environments <cit.>, floating-base systems have yet to achieve the same ability to precisely manipulate objects in similar environments.
The inability to physically anchor any link of the kinematic chain can induce complex dynamics on the ungrounded robot.
These dynamics present differential constraints that are often challenging for the motion planner to accommodate when pursuing precision tasking.
If the floating-base manipulator assumes a kinematic operating environment, perturbations on the base can cause imprecision in movement that is unpredictable and unsafe, especially in obstacle rich environments.
By considering these perturbation dynamics, one can pursue more precise positioning of the end effector(s).
Typically in floating-base manipulation, the dynamics are represented as a second-order mechanical system.
Such a representation stipulates that given a path in the joint configuration, the rate along that path has a variational effect on the evolution of the robot in its workspace.
These timing-based concerns induce substantial complexity on the motion planning problem; these planning problems are classified as kinodynamic.
Generally, these kinodynamic path planning problems are formulated as trajectory optimization problems.
These approaches typically approximate the Hamilton-Jacobi-Bellman equation, which recursively formulates the optimal control law for a physical system.
These exploitative methods excel at finding local refinements to seeded trajectories that leverage the agile capabilities of the system <cit.>, as long as the underlying model maintains high fidelity to the realized system and environment.
A separate trajectory optimization approach involves policy gradient methods <cit.>,
which are performant at rapidly finding collision free paths, but again are typically used to find local refinements to approaches (due to the exploitative nature of gradient descent).[While stochastic formulations are designed to specifically avoid local minima, we emphasize that the nature of a policy gradient search is far more localized than sampling-based path planning methods.]
Additionally, this class of solutions suffers from poor sampling scalability with the dimension of the configuration space, termed the curse of dimensionality.
This notion of scalability is encountered in many practical scenarios, such as adding joints to improve the reachable configurations of a robot in key task domains (e.g. pick and place in a cluttered environment).
Exploitative trajectory optimization methods thus tend to be insufficient for accomplishing many critical tasks due to their locality and sample inefficiency.
However, as is typically the case in mobility architectures, a higher level planner might successfully hand off a performant, coarse plan intended for final refinement by the trajectory optimizer.
The main contribution of this work is the justification, design, and implementation of a framework that decouples perturbations during floating-base manipulation within a sampling-based motion planner (specifically an RRT).
We show that a kinematic reduction of the floating-base dynamics can be applied to systems through mechanical design, environment selection, and thruster-aided pose control.
This reduces the motion planning problem from kinodynamic to kinematic, making sampling-based motion planning algorithms accessible.
Empirically, such methods extend to high dimensional problems with practical sample efficiency <cit.>.
Sample efficiency for large dimensions is critical, since as described earlier, higher numbers of joints will be required for complex tasking in cluttered environments.
Gaining this property (sample efficiency for platforms with significant internal complexity – required for many critical application domains) while respecting the dynamics is the core capability that we pursue through this work.
Using the method detailed in Section <ref>, we structurally expect (A) higher precision end effector tracking and placement than planners with no dynamics knowledge as well as (B) rapid computability of valid plans in many-joint, obstacle rich scenarios.
We show in obstacle rich scenarios that for emplacement and extraction tasks, the proposed method can shift induced perturbation to a trivial moment during the plan, allowing the ZPM to facilitate a high precision engagement with a target configuration.
The class of kinematic systems we assume stipulates that the body velocity is a joint-defined linear mapping of joint velocity.
These pfaffian constraints can be used at any configuration to identify a subspace of joint velocities where instantaneous joint motion yields zero perturbation (a sufficient condition being that the dimension of the internal configuration space is greater than or equal to the dimension of the position space).
These pfaffian constraints are termed a perturbation map, and this map provides a nonheuristic manifold that can be used to guide the system to rapidly find collision-free plans.
No objective function designing or tuning is intended to be a part of implementing the methods motivated in this work.
In Section <ref>, we provide background on kinematic representations of floating-base systems in application-oriented physical scenarios.
We then define and discuss key properties of the ZPM in Section <ref>, followed by an example of an end-effector tracking shapes without inducing motion on the base in Section <ref>.
In Section <ref> we cover the proposed method for kinematic planning on floating-base systems followed by Section <ref> where we demonstrate global path planning
using the proposed method.
Notably, the method outperforms (A) kinematic algorithms that do not leverage dynamical information and
(B) trajectory optimizations that find locally optimal, low performance solutions.
Finally, Sections <ref> and <ref> include discussion and conclusion of the results.
§ BACKGROUND: KINEMATIC REDUCTIONS OF LAGRANGIAN SYSTEMS
In this work, we seek to compute plans for floating-base systems when their motion model can be written as a first-order (kinematic) system expressed as pfaffian constraints connecting velocity on the shape space (joint space) B to velocity on the position space G.
These constraints are expressed as
ẋ_b = P(θ)θ̇
yielding a linear mapping, P (as a function of n joints θ∈ B) from joint velocity θ̇∈ TB to body velocity ẋ_b ∈ TG.
This structure will be shown to be an exact representation of the physics in conventional orbital environments and scenarios governed by Rayleigh dissipation.
We'll show that violations of this physical structure can be related to intuitive phenomena and suggest that in many cases these complexity inducing features can be designed out of the problem through mechanical design, environment selection, and control.
§.§ Obtaining Environmental Symmetry
Gravity directly violates environmental symmetry, inducing position and orientation dependent forces on the floating-base.
This will, for instance, cause the robot to move when the joints are still, clearly violating our assumed model at θ̇=0.
Gravity is however negligible for orbital robots as well as neutrally buoyant underwater systems[Some platforms, such as RE2's Sapien class system, are designed to be neutrally buoyant to avoid dynamical complexity.]. In less convenient scenarios, one might supply a model-based control law to compensate for gravity with thrusters, through which one could regain environmental symmetry of the floating-base under this thruster-governed control law.
When we can assert symmetry of the dynamics over G, Lagrangian systems admit a powerful reframing of the dynamics that provides three distinct equations written for the generalized momentum, body velocity, and joint acceleration of the system.
This result from the field of geometric mechanics is known as the reduced Lagrangian <cit.>.
§.§ Mitigating Interaction Between Inertia and Friction
When the generalized momentum starts at zero (as is the imagined case for many floating-base tasks), the dynamics of the body are governed exclusively by conservation of zero net angular and linear momentum.
The resulting dynamics take the form of (<ref>) and are referred to as the mechanical connection <cit.>.
A similar phenomena occurs when the motion model becomes dominated by Rayleigh dissipation.
Here, friction-governed forces applied along the body yield a viscous interaction with the fluid, again identical to the form in (<ref>) and termed the viscous connection <cit.>.
Coupling of forces driven by inertia and friction result in second-order behaviors.
Take the free-floating astronaut as a mechanical connection.
A brief friction-driven interaction of brushing a satellite imparts an uncancellable impulse on the astronaut.
Post interaction it has obtained a net momentum that it cannot cancel through internal motion, providing a drift that violates the first order structure of the mechanical connection.
Take a micro-swimmer governed by a viscous connection.
If we iteratively increase the inertia of the swimmer, it will eventually noticeably drift as a function of accumulated momentum while swimming.
This drift again violates the first order structure of the viscous connection.
Systems can be engineered to reduce interaction of these forces.
Underwater robots might move slowly to reduce excitation of momentum and leverage the structure of a drag-dominated environment (assuming gravity compensation).
Aerial robots might be designed to avoid arm-related drag to encourage isolation of momentum driven dynamics (assuming gravity compensation).
Orbital robots experience no environmental friction unless contacting objects in space.
§.§ Approximating Nonviscous Friction
In general, friction can certainly act in non-viscous ways, violating the structure of (<ref>).
One can think of the viability of the Rayleigh dissipation model as the viability of a first-order Taylor series approximation of the more general friction-governed motion model ẋ_b | θ = f(θ̇).
[In many cases this can be a complex, non-smooth function (e.g. in the case of coulomb friction) motivating the use of stochastic linearization to obtain useful model parameters. Isotropy of the friction can also have a significant impact on behavior <cit.>, further motivating the use of stochastic linearizations to build linearizations that support prediction over a desired range of velocities.]
However, it has been observed in practice that the assumption of a Rayleigh dissipation model provides a strong fit across a surprisingly broad class of robots <cit.>.
In this section we have detailed the motion model of interest and the relevant assumptions for dynamics and control to arrive at its structure.
We now outline a method which allows us to use sampling-based motion planning while respecting a broad class of nontrivial floating-base dynamics.
§ ZERO PERTURBATION MANIFOLDS
We now shift focus to explaining how the representations motivated in Section <ref> populate a manifold that can be exploited for motion planning that induces zero perturbation on the floating-base.
The linear mapping P(θ) has a null space computable at all shapes θ
null(P(θ)) ⊂ TB
which inhabits a subspace of the shape velocity space TB.
This null space provides a basis in the joint velocity space for exploration from a shape that will infinitesimally invoke zero perturbation on the base.
This null space changes as a function of θ, and so during a plan, the allowable motions available to the joints are subject to change.
When the null space changes continuously with respect to shape, the null space populates a connected submanifold of joint velocity space.
We know this since the notion of connectedness is preserved under continuous mappings.
However, the null space is provably not continuous in this sense at singular P where the rank of the nullspace increases over that of non-singular P.
Thus, the zero perturbation manifold (ZPM), null(P), has a basis of size dim(B)-dim(G) and continuously changes over its connected regions.
It is however a larger manifold that is not continuous, e.g. at locations of singular P.
Continuous or not, the null space indicates at all internal configurations instantaneous directions to move the joints that induce zero base perturbation.
Continuity of the null space is however important for generating C^1 smooth arm motions.
If a null space basis vector disappears while moving a joint in that direction, it will appear as a nonsmooth path in joint space (if persisting along the zero perturbation manifold).
This manifold is not unlike leveraging the null space of components of the manipulator Jacobian to stabilize elevation, roll and pitch when moving a weight along a table, as is done in work that in-part inspired this approach <cit.>.
In many classical manipulation scenarios, additional joints provide exploitable redundancy. Likewise, additional joints directly increase the dimension of the zero perturbation manifold. We make a final note about the challenge of connecting two specific points in the shape space.
The probability of the null space directly aligning with two randomly selected shapes is approximately zero:
∀δ∈ B: p(null(P(θ))^T null(P(θ+δ)) = 1) ≈ 0,
where p provides our single-case use of notation for probability.
While it is easy to grow a tree along the null space from a known joint location, it can be hard to grow the tree along the null space toward a known joint location.
One might think of an Ackermann steering vehicle (a system whose maneuvers are governed by pfaffian constraints) that is placed arbitrarily close to a final location, then asked to arrive at that location directly, without knowledge of Lie brackets or "parallel parking" maneuvers.
In many scenarios, there is not a direct route to that location.
In higher degree-of-freedom floating-base manipulation problems, it is almost never the case that such a direct route exists.
To our knowledge, it is an open problem whether structures exist to procedurally connect such proximal shape configurations along the zero perturbation manifold.
Lie brackets offer an interesting avenue for further inspection, but are not the subject of this work.
§ LOCAL END EFFECTOR TRACKING ALONG THE ZERO PERTURBATION MANIFOLD
Here we leverage the exploitable redundancy property discussed in Section <ref> to show that a many jointed, bimanual swimming robot can complete a welding task with a closed form solution.
This solution will implicitly leverage the structure of the zero perturbation manifold, which grows in dimension with each joint made available to the system.
We compare two platforms. A baseline platform has access to its kinematic chain, so induced disturbances on the body cannot be accounted for:
ẋ_ee = J(θ)θ̇.
It takes a pseudoinverse of the Jacobian to compute θ̇ given a desired velocity for the end effector ẋ_ee.
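As a minimal illustration (not the authors' implementation), the baseline resolved-rate step can be written as follows, where J is assumed to be the task-space Jacobian evaluated at the current shape.

import numpy as np

def baseline_joint_rates(J, x_ee_dot):
    # Minimum-norm joint rates tracking the desired end-effector velocity;
    # base drift is ignored because the base rows are not modeled here.
    return np.linalg.pinv(J) @ x_ee_dot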
The second platform has access to both its kinematic chain and the model of the viscous fluidic model that it inhabits:
[ ẋ_ee; ẋ_b ] = [ J(θ); P(θ) ]θ̇.
where ẋ_b can be set to zero.
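A corresponding sketch for the augmented system, again purely illustrative, stacks the Jacobian with P(θ), requests zero base velocity, and solves the combined system in a least-squares sense.

import numpy as np

def zero_perturbation_joint_rates(J, P, x_ee_dot):
    # J: (task_dim, n_joints) end-effector Jacobian at the current shape.
    # P: (base_dim, n_joints) map from joint rates to base velocity.
    A = np.vstack([J, P])
    b = np.concatenate([x_ee_dot, np.zeros(P.shape[0])])   # base velocity requested to be zero
    theta_dot, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta_dot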
This toy system is a simple adaptation of the Purcell swimming model <cit.>, where we magnify the ratio of across-to-along body forces by a factor of 5, mimicking a more fin-like relationship with the water to excite larger perturbations.
In the trial, ẋ_ee is set by the residual between the location of the hand (with respect to the base) and the desired hand location (with respect to the base).
The desired hand location is a function of time that traces each shape (from right to left).
The closest hand is driven to track the desired hand location, the other being free to use its neighboring links as a tail-like appendage if necessary to stabilize the base.
Fig. 1 shows that the thirteen-link swimmer was capable of exactly tracing the desired shapes with a closed form solution when leveraging the pfaffian form of the dynamics.
Meanwhile, the platform embedded with only kinematic-chain knowledge both drifted and was unable to draw shapes resembling those desired.
These results were not intended to capture phenomena such as collision-avoidance including self-collisions, but these topics are addressed in the next section.
§ RRT ON ZERO PERTURBATION MANIFOLDS
In Sections <ref> and <ref> we overviewed a key class of floating-based dynamics and its relevance for distinction of a zero-perturbation manifold that occupies TB.
Here, we will show how such a manifold can be used effectively within sampling-based motion planners for rapid exploration based planning on high degree of freedom floating-base robots in obstacle rich settings.
Equation (<ref>) shows us that connecting two points in the shape space can be challenging.
This suggests that objects like probabilistic roadmaps (PRMs) will be structurally challenging to generate, since the inter-connectedness of nodes requires direct confrontation of this issue.
Turning our attention to single-query planning approaches, RRT offers the ability to grow quickly from a known location.
As we have established, knowing the final location does not make it easy to find a plan that arrives exactly at the final location.
One might think of a car placed at a close distance but incorrect orientation to a desired pose.
While Lie brackets or primitive-based motion planning may offer a solution for the car, it is not obvious that such approaches are computationally tractable in the floating-base manipulation setting.
We observe three options, which include growing the RRT from an initial location to a desired location, growing a bi-RRT from both locations towards each other, and growing from the desired location toward the initial location.
We accept the third option as suboptimal but highly practical.
An RRT will be designed to rapidly grow from a location of interest toward the robot's initial configuration, all the while restricting itself to the zero perturbation manifold.
The optimal path selected from this tree is the core contribution of this work.
The robot will start by connecting itself from its initial configuration to the ZPM. Once it reaches the trajectory along the ZPM, we assert that it will stop while the thrusters mitigate any induced perturbation.
Now, after regulation, the floating-base system is afforded a zero perturbation path, through clutter, to the object of interest.
Here we will outline our approach for floating-base manipulation along zero perturbation manifolds.
Algorithm 1 is provided a start and goal internal configuration for the robot, (θ_s,θ_g).
As described above, we will start at the goal location, where there is a higher need for precision (such as in emplacement and extraction tasking).
We initialize a tree T with this node and query a sample function at each iteration. This will provide a random internal configuration or the start configuration (with probability 0.5 in this work).
We then attempt to grow the tree toward this sampled configuration, while adhering to the zero perturbation manifold.
The computation details of this process are available in Algorithm 2, which involves computation of the null space via singular value decomposition (SVD) and projection of the error between goal and current point along the computed subspace of the internal configuration that carries zero eigenvalues.
If we avoid singular configurations (as is assumed in this algorithm), then the null space is consistently of rank(null(P))=dim(B)-dim(G) and can be indexed directly as the corresponding bottom rows of V.
We can project the growth direction onto this manifold (see line 9 of Algorithm 2 where unit produces a unit vector), and stop extending in the event of a timeout, collision, or perpendicular orientation of the null space and remaining error.
During extension, we grow the tree by adding a node after such a length has been extended from a prior node in the tree.
Once the search has timed out or achieved sufficient proximity to the goal configuration, the algorithm returns a smoothed path.
In this work, smoothing involves attempting to find a shortcut between the start and end point of the path by direct projection onto the ZPM, then recursively repeating the procedure on two randomly split segments of the path (until those segments reduce to two points).
This smoothing process is repeated three times in this work.
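For concreteness, the core of the extension step can be sketched as below; the function P_of_theta, the quantity base_dim, and the fixed step size are illustrative placeholders rather than the exact quantities used in Algorithm 2.

import numpy as np

def zpm_step(theta, theta_target, P_of_theta, base_dim, step=0.02):
    # One extension step along the zero perturbation manifold.
    P = P_of_theta(theta)                    # (base_dim, n_joints)
    _, _, Vt = np.linalg.svd(P)
    null_basis = Vt[base_dim:]               # bottom rows of V^T span null(P), assuming non-singular P
    err = theta_target - theta
    direction = null_basis.T @ (null_basis @ err)   # project the remaining error onto null(P)
    norm = np.linalg.norm(direction)
    if norm < 1e-9:                          # null space perpendicular to the remaining error
        return None
    return theta + step * direction / norm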
§ RESULTS
Here we will ask the thirteen-link Purcell swimmer to plan in constrained environments while executing an emplacement task.
This task will require that both hands reach a particular location in the workspace.
Under the expectation that the base does not move, we stipulate a shape that will achieve the desired end effector locations and provide this desired shape as a goal to the planner.
As mentioned before, the planner will grow from the goal towards the starting location.
We identify that there are a variety of viable starting configurations from which to approach the emplacement task, and encode this within the planner.
When trying to connect to starting shapes θ_s, we will provide a randomly selected, generalized U-shape for the swimmer.
[A U includes a negative quarter rotation of a joint left of the base and a positive quarter rotation of a joint right of the base.]
We can provide this freedom of choice since we will assume that the system can start from a further displaced position, change configuration to any other U-shape, and then trivially re-engage at the position identified in the planner[Any such shape can translate in the coordinate perpendicular to the base link to exit or enter the starting configuration, making it a viable entry point to start the emplacement plan from.].
Thus, we claim to be solving a broader planning problem (starting from a disengaged, distal position with any U-shaped initial joint configuration) even though we only explicitly solve the non-trivial component of the planning problem (once engaged in an obstacle rich environment).
We start with one obstacle and provide the RRTZPM 30 separate trials to find connections to U-shapes along the zero perturbation manifolds.
We then ask the RRT to connect the same two configurations.
In each trial we allow the planner to replan 4 times after 10s intervals if a collision free path is not found.
Paths are termed collision free if they would not cause a collision with a static base.
In every trial both planners succeed during the first 10s interval.
Fig. 3 shows the displacement as a function of a fraction of path length, spanning the start and finish of the attempt.
It is clear that the proposed method accrues a brief initial displacement to access the zero perturbation manifold (distinguishing the effect of the properties following from (<ref>)).
However, once on the ZPM, the platform is capable of achieving the desired configuration with zero displacement.
Meanwhile, the classical RRT accrues error, critically during the final steps of engagement of the desired internal configuration. This classical approach persistently accrues error as it approaches the final location, exactly when the proposed method accrues zero error by structure.
We see this relationship persist as a second and third obstacle are added. Notably, during trials with 3 obstacles, the RRT unsuccessfully attempts to find a collision free path in 20 of 30 trials.
Across all numbers of obstacles, the proposed method (for each obstacle) experiences an initial perturbation of the base followed by an approximately zero perturbation approach to the final configuration.
We observe this difference in performance in Fig. 2, where we can see the drift over the duration of the trial for the classical RRT on the top row, as well as the lack of drift for the example on the bottom row.
In obstacle-free environments we compare this approach to an iterative time varying linear quadratic regulator (TVLQR).
This provides the opportunity to observe the exploration-driven solutions available via RRTZPM with respect to this exploitation-driven approximation of the Hamilton-Jacobi-Bellman equation.
The iterative TVLQR is seeded with a direct route from initial to final shape for the linear system:
[ ẋ_b; θ̇ ] = [ P(θ); I ]θ̇.
where we take the shape velocity θ̇ as the control input.
We discretize each trajectory at 50 samples. For n=dim(B) and m=dim(G) take
Q = [ I^m×m 0; 0 0 ], Q_f = [ I^m×m 0; 0 I^n×n ], R = I e^-2
where Q_f is the terminal cost weight applied at the final time step.
By iteratively supplying the output of a TVLQR solution to itself, we provide the opportunity to further refine the solution with the context of the dynamics centered at the most recent solution.
At the end of 20 iterations we have a trajectory that has been locally refined to find the desired joint location while avoiding perturbations.
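A schematic sketch of one backward pass of the time-varying LQR follows; the discretization and linearization choices (A_k = I, B_k = dt [P(θ_k); I]) are our assumptions for illustration and may differ from the exact implementation. In the iterative scheme, the closed-loop rollout obtained with these gains becomes the nominal trajectory for the next iteration.

import numpy as np

def tvlqr_gains(A_list, B_list, Q, R, Qf):
    # Backward Riccati recursion for x_{k+1} = A_k x_k + B_k u_k.
    # Returns feedback gains K_k for u_k = u_nom_k - K_k (x_k - x_nom_k).
    P = Qf
    gains = []
    for A, B in zip(reversed(A_list), reversed(B_list)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]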
We ask the swimmer to achieve the previously identified emplacement shape configuration from randomly selected U-shapes (this time the swimmer cannot select the U-shape).
Fig. 4 shows that over 30 trials the RRTZPM is capable of accessing the ZPM, although it is accessed later in the maneuver than in Fig. 3, where it had the choice of starting U-shape locations.
The iterative TVLQR critically accrues error as it arrives at the emplacement location, repeating the precision mitigating behavior seen in the RRT in Fig. 3.
We can see the impact of this error at the emplacement location in Fig. 5, where the error accrued through iterative TVLQR renders the platform unable to complete the emplacement task.
§ DISCUSSION
The active use of the other arm to decouple the generated drag force in Fig. 1 reminded us of the use of tails in many animals.
This may provide inspiration for design of systems in similar dynamic environments.
Overall, the end effector tracking results of Fig. 1 are not entirely surprising (prior work exists in null space projection for swimming robots <cit.>).
The example reinforces what is known in manipulation, that redundancy offers a greater variety of options to achieve the desired result.
Observing this property in the dynamic case while tracking shapes informs us that this property, since it exists for a local tracking problem, may be possible to embed in a more global planning problem with more constraints.
The emplacement performance of the RRTZPM method in Fig. 3 and Fig. 2 shows us that it was possible to embed this property in a high degree of freedom platform with three obstacles. It is clear from comparison to the RRT planner (that neglects dynamics) that the ZPM is providing significant value. The practical relevance of this result is made clear in Fig. 2, where the RRT platform can be seen colliding and drifting from the desired location. Such precision is unacceptable for even coarse emplacement tasking.
This relationship persists when we relate the motivated method to iterative TVLQR in an obstacle-free environment. The exploitation driven algorithm can locally refine the initial trajectory, but is unable to find a more global solution that permits acceptable precision for the emplacement task. This supports the perceived value of our method which can respect dynamics while finding inventive solutions rapidly.
The measured perturbation rejection on these simulated systems can be impacted largely by the time-step of the integration. In principle, simulating at a small enough time step would permit a numerically zero perturbation when integrating along the ZPM. For real systems, feasibility of precisely tracking shape evolution along the ZPM may significantly impact the quality of the overall plan.
§ CONCLUSION
The results suggest that the proposed methods provide a novel class of algorithms that can address the weaknesses of classical sampling-based motion planners and optimization-based approaches.
We showed that we were able to maintain the "sampling-scalability at high dimension" feature of RRTs while providing them the dynamical context to avoid precision-related failures.
We showed that the proposed method maintained the exploration-geared inventiveness of RRTs, permitting viable solutions where exploitation-driven optimization schemes will fail.
Our examples are restricted to a planar swimming simulated system. Further questions will remain pertinent in extensions of these results to other systems, simulated or real.
How viable are pfaffian constraint approximations for systems that do not exist exactly in high or low Reynolds domains?
How many degrees of freedom are typically needed to achieve similar results in three-dimensional scenarios?
How easy is it to execute trajectories along this manifold with conventional hardware?
In future work we plan to explore open questions directly on hardware in underwater and orbital domains.
|
http://arxiv.org/abs/2307.01313v1
|
20230703193644
|
Magnetic field fluctuations in the shocked umbral chromosphere
|
[
"T. Felipe",
"S. J. González Manrique",
"C. R. Sangeetha",
"A. Asensio Ramos"
] |
astro-ph.SR
|
[
"astro-ph.SR"
] |
Instituto de Astrofísica de Canarias
38205 C/ Vía Láctea, s/n, La Laguna, Tenerife, Spain
Departamento de Astrofísica, Universidad de La Laguna
38205, La Laguna, Tenerife, Spain
Leibniz-Institut für Sonnenphysik (KIS), Schöneckstr. 6, 79104 Freiburg, Germany
Astronomical Institute, Slovak Academy of Sciences, 05960 Tatranská Lomnica, Slovak Republic
Magnetic field Fluctuations in HeI 10830 Å
Felipe et al.
Umbral chromospheric observations show the presence of magnetoacoustic shocks. Several recent studies have reported magnetic field fluctuations associated with those shock waves. The mechanism behind these periodic magnetic field changes is still an unsolved question.
We aim to study the properties and origin of magnetic field fluctuations in the umbral chromosphere.
Temporal series of spectropolarimetric observations were acquired with the GREGOR telescope on 2017 June 18. The chromospheric and photospheric conditions, including the temporal evolution of the magnetic field, were derived from simultaneous inversions of the HeI 10830 Å triplet and the SiI 10827 Å line using HAZEL2 code. The oscillations are interpreted using wavelet analysis and context information from UV observations acquired with the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory (SDO/AIA) and the Interface Region Imaging Spectrograph (IRIS).
The chromospheric magnetic field shows strong fluctuations in the sunspot umbra, with peak field strengths up to 2900 G. This inferred field strength is comparable to the magnetic field strength in the upper photosphere. Magnetic field and velocity umbral oscillations exhibit a strong coherence, with the magnetic field lagging the shock fronts detected in the velocity fluctuations. This points to a common origin of the fluctuations in both parameters, whereas the analysis of the phase shift between photospheric and chromospheric velocity is consistent with upward wave propagation. These results suggest that the strong inferred magnetic field fluctuations are caused by changes in the response height of the HeI 10830 Å line to the magnetic field, which is sensitive to high photospheric layers after the shock fronts. The analysis of EUV data shows a weak brightening in a coronal loop rooted in the umbra around the time of the measured magnetic field fluctuations. This coronal activity could possibly have some impact on the inferred fluctuations, but it is not the main driver of the magnetic field oscillations since they are found before the EUV event takes place.
Chromospheric magnetic field fluctuations measured with the HeI 10830 Å triplet arise due to variations in the opacity of the line. After strong shocks produced by the propagation of slow magnetoacoustic waves, the response of the line to the magnetic field can be shifted down to the upper photosphere. This is seen as remarkably large fluctuations in the line of sight magnetic field strength.
Magnetic field fluctuations in the shocked umbral chromosphere
T. Felipe,
1,[email protected]
S. J. González Manrique,
1,2,3,4
C. R. Sangeetha,
1,2
A. Asensio Ramos1,2
Received ; accepted
==================================================================================================================================================================================
§ INTRODUCTION
Magnetohydrodynamic waves are a fundamental constituent of the solar magnetized atmosphere <cit.>. These waves arise when acoustic p-modes interact with the surface magnetic field <cit.> or can be generated by in-situ magnetoconvection taking place in active regions <cit.>. They can then propagate to the higher solar atmosphere through magnetic field elements such as sunspots, plage, or magnetic bright points <cit.>. The detection and study of the properties of these waves play an important role in understanding the dynamics of the solar atmosphere and their contribution to the heating of the outer layers. Most of the studies so far have focused on Doppler and/or intensity fluctuations since they are relatively easy to measure and many of the wave modes leave an imprint on these parameters.
Over the past years, observations have also revealed oscillations in the sunspot magnetic field. Photospheric analyses have reported magnetic fluctuations with various properties <cit.>, including a broad range in their amplitude (between a few Gauss and ∼300 G) and period (from a few minutes to several days). The origin of these oscillations is still not well understood due to the lack of consistency among different works. The situation is even more complex in the chromosphere, where the assumption of local thermodynamic equilibrium (LTE) is not valid, and magnetic field inferences rely on sophisticated and computationally expensive inversions under non-LTE <cit.>.
Despite these challenges, in recent years some works have addressed the study of chromospheric magnetic field variations associated with the shocks that arise due to the steepening of magnetoacoustic waves in sunspots. These shocks usually manifest as brightenings in the core of some chromospheric lines, known as umbral flashes <cit.>. Independent analyses of the same spectral line have revealed contradictory results. The first attempt to measure magnetic field fluctuations in the CaII 8542 Å line found no indications of oscillations in the sunspot umbra, but detected oscillations with an amplitude of ∼200 G in the penumbra <cit.>. <cit.> reported a weaker magnetic field during umbral flashes, whereas <cit.> found that the magnetic field can be up to ∼270 G stronger in umbral flashes. However, the analysis of synthetic CaII 8542 Å profiles, constructed by synthesizing this line in simulated atmospheres, revealed that the degeneracy of the inversion problem can lead to the inference of spurious magnetic field fluctuations that are not present in the actual atmospheric model <cit.>, suggesting caution with the interpretation of magnetic field fluctuations inferred from the CaII 8542 Å line.
In this study, we aim to investigate chromospheric magnetic field oscillations using the HeI 10830 Å line. This line has been previously employed to analyze magnetic field fluctuations by <cit.>, who reported changes in the transversal magnetic field with amplitude up to ∼200 G. The organization of the paper is as follows: in Sect. <ref> we describe the observations and the analysis methods, in Sect. <ref> we present the results, in Sect. <ref> we discuss our findings and, finally, results are summarised in Sect. <ref>.
§ OBSERVATIONS AND DATA ANALYSIS
§.§ GREGOR/GRIS observations
We observed the sunspot NOAA 12662 on 2017 June 18 (located near the disk center, μ=0.97, with μ defined as the cosine of the heliocentric angle) using the GREGOR Infrared Spectrograph <cit.> attached to the GREGOR telescope <cit.>. The spectral window covers a region of approximately 18 Å, including the HeI 10830 Å line triplet and the neighboring SiI 10827 Å line, among other spectral lines. The full Stokes spectra were acquired with an exposure time of 100 ms and ten accumulations. Standard polarimetric calibration <cit.> was applied to the observations, with the calibration data obtained from the GREGOR polarimetric calibration unit <cit.>. Standard dark and flat-field reductions were applied to the data. Data were acquired between 08:02 and 09:37 UT by placing the spectrograph slit at a fixed position crossing the middle of the sunspot. The analysis presented in this work is restricted to a temporal series of roughly 37 min (starting at 08:02 UT) with a temporal cadence of 5.6 s (a total of 393 Stokes spectra). These data have been previously analyzed in <cit.> and <cit.>.
Figure <ref> shows the continuum intensity taken from the space-borne telescope Helioseismic and Magnetic Imager <cit.> on-board Solar Dynamics Observatory <cit.>, including the approximate location of the spectrograph's slit. In this study, we will focus on the analysis of the sunspot umbra.
§.§ Spectropolarimetric inversions
We have used the HAnle and ZEeman Light v2.0 [The HAZEL2 code can be found in . The
user manual with detailed instructions for the usage of the code and precautions to be taken can be found in .] <cit.> code to invert the HeI triplet, along with the SiI and telluric lines. The simultaneous inversion of these two neighboring lines is carried out to account for their wings due to their very close proximity to the HeI triplet. HAZEL2 incorporates the effect of atomic level polarization and Paschen-Back, Hanle and Zeeman effects for the HeI triplet. The simultaneous inversion of the SiI line is computed with the Stokes Inversion based on Response functions <cit.> code, whereas the telluric line is fitted with a Voigt profile. Hence, the output from HAZEL2 inversions provides the atmospheric information from both the photosphere (derived from the SiI 10827 Å) and the chromosphere (derived from the HeI 10830 Å).
The inversion is performed through an iterative process where an initial guess atmosphere is perturbed until the output radiation from the model matches the observed Stokes profiles. In our SiI 10827 Å inversions, we have used the hot sunspot model from <cit.> as the initial guess atmosphere. The above-mentioned perturbations are applied at some selected optical depths, known as nodes. The process can be repeated several times (so-called cycles) with a different number of nodes each, where the initial guess atmosphere from a cycle is given by the output from the previous cycle. Table <ref> shows the inversion scheme employed in our analysis, that is, the number of nodes selected for each of the three cycles. In the case of the HeI triplet, HAZEL2 inversions employ a cloud model where the atmospheric parameters are constant in a slab above the solar surface. Hence, in Table <ref> a 0 indicates that the corresponding atmospheric parameter is not inverted, while 1 means it is inverted.
The Stokes profiles were fitted using a two-component model for each pixel. One of the components corresponds to the inferred atmosphere whereas the other accounts for the stray light. The stray light represents a spurious light coming from distant spatial locations that contaminates the signal measured at a specific spatial position. This is due to the varying seeing conditions, diffraction effects, and the optical properties of the telescope. We employed a constant stray-light profile for all locations and times in the temporal series. It was computed as a spatio-temporal average of a quiet Sun region around the observed sunspot. The inversions assume that the radiation observed from a resolution element is produced by the joint contribution of the solar atmosphere at that position and the spurious stray light. The amount of stray light is variable (depends on the location and time step) and is given by the filling factor, a free parameter that is also inverted. Figure <ref> shows a comparison between the observed and inverted profiles at a randomly chosen time and umbral location. The inversion provides a good fit of the observed Stokes I and V profiles for both spectral lines. Independent inversions were also carried out without accounting for the stray-light correction. They exhibit significant quantitative differences (for example, in the magnetic field strength), but the qualitative results are similar.
§.§ SDO/AIA observations
The interpretation of the results is supported by data from the Atmospheric Imaging Assembly <cit.> instrument onboard SDO. AIA acquires observations of the full solar disk in seven extreme ultraviolet (EUV) bands (94 Å, 131 Å, 171 Å, 193 Å, 211 Å, 304 Å, and 335 Å) with a temporal cadence of 12 s, in addition to two ultraviolet wavelengths (1600 and 1700 Å) with a cadence of 24 s. The EUV wavelengths effectively respond to temperatures from 10^5.5 to 10^7.5 K <cit.>, whereas the UV emission is dominated by contributions from the lower solar atmosphere. The data have a spatial scale of 0.6″ pixel^-1. We use level 1.0 data co-temporal to the GREGOR/GRIS observations.
§.§ IRIS observations
We have also examined observations from the Interface Region Imaging Spectrograph <cit.> as context images of the chromosphere and transition region. Between 07:55:56 and 08:14:02 UT (partially overlapping our GREGOR observations), IRIS was acquiring a raster map of the same active region. We are interested in the slit-jaw images. They were taken in filters in the near-ultraviolet (NUV; 2830 Å and 2796 Å) and far-ultraviolet (FUV; 1330 Å and 1400 Å) with a temporal cadence of 68 s, for a total of 16 images per filter. These slit-jaw filters mostly sample the upper photosphere (2830 Å), chromosphere (2796 Å), upper chromosphere (1330 Å), and low transition region (1400 Å).
§ RESULTS
§.§ Chromospheric velocity and magnetic field fluctuations
Figure <ref> illustrates the inverted maps of chromospheric LOS magnetic field and velocity derived from HAZEL2. As expected, stronger magnetic fields are found in the umbra, especially in the umbral region seen darker in continuum intensity (Fig. <ref>), where the LOS magnetic field periodically fluctuates in the range 1900-2900 G. Other umbral locations exhibit a weaker magnetic field, with a typical strength around 1600 G.
The velocity signal shows the well-known umbral chromospheric oscillations in the three-minute band. Clear indications of shocks are also found, with sudden changes from red (downflows) to blue (upflows). In the penumbral regions, the pattern of running penumbral waves is clearly seen as wavefronts that reach farther radial distances as time increases. Interestingly, the phase of the wavefronts exhibits coherence in the dark umbra (and towards the right-hand side penumbra), but a marked change in the phase is found at x∼ 29-30, the approximate location of the innermost umbral dot. This may indicate some differences in the excitation of the waves in both regions <cit.> or differences in the travel time of the waves along the magnetic field lines.
Figures <ref> and <ref> show the temporal evolution of the LOS magnetic field and velocity at two different locations of the sunspot umbra. The maximum amplitude of velocity oscillations is around 10 kms^-1. This high amplitude, comparable to the local sound speed, leads to the development of chromospheric shocks, which are seen as a progressive rise in the velocity evolution followed by a steeper fall after reaching the maximum amplitude. These sudden velocity changes, characteristic of the shock, are generally accompanied by peaks in the LOS magnetic field. In the umbra, the background magnetic field is around 2000 G. Magnetic field enhancements up to 900 G above that background field strength are found to lag the velocity signal. The stronger magnetic field excursions found in Fig. <ref> take place after the stronger velocity shocks (around 08:20 UT), but smaller peaks with more modest magnetic field strength increments of ∼300-500 G are measured during the whole temporal series. Strong magnetic field excursions are not always associated with the highest velocity amplitudes. For example, at around 08:06 UT in Fig. <ref> a magnetic field peak of 2700 G is found during a velocity shock with 5 kms^-1 amplitude.
§.§ HeI 10830 Å spectral profiles during magnetic field excursions
Figure <ref> illustrates the temporal evolution of HeI 10830 Å Stokes I and V profiles during the development of one of the largest magnetic field fluctuations in the sunspot umbra. The first illustrated time step is just before a shock. The line exhibits a large Doppler shift to the red (v_ LOS=8.39 km s^-1), but the magnetic field is still at the quiescent stage, with a strength around 1900 G. The second and third time steps (two middle rows) capture the development of the shock (see Fig. <ref>). Between these times the Doppler velocity changes from 7.82 km s^-1 to -7.67 km s^-1 over a temporal span of 22.4 s, while the LOS magnetic field strength shows a striking enhancement. During this shock, the absorption depth of the line is greatly reduced, but no emission is found in HeI 10830 Å intensity. At the last time step, the line is again in the quiescent state, where the probed magnetic field strength has returned to the background values in the range 1900-2000 G.
§.§ UV data analysis
The magnetic field strength measured during the peaks is much higher than that inferred with the HeI 10830 Å triplet in other sunspots <cit.>. In contrast, strong magnetic fields have been measured with the HeI 10830 Å <cit.> and the HeI D_ 3 <cit.> lines after energetic events taking place at higher coronal layers. To extend our study to the transition region and corona and support the interpretation of the results, we have employed EUV images acquired with SDO/AIA and NUV and FUV data from IRIS.
A visual inspection of the EUV data prior to and during the GREGOR temporal series employed in this study revealed a brightening in one of the coronal loops rooted to the sunspot umbra. Figure <ref> shows maps of SDO/AIA data at three selected filters and several times during this brightening. Coronal loops (especially visible in 171 Å) are only present on one side of the sunspot. The brightening, visible in the three filters but more clearly in 304 Å, develops in the loop marked by a blue-dotted line during those time steps. The temporal evolution of the EUV intensity along this loop is illustrated in Fig. <ref>. At around 08:10 UT, some brightness increase is detected at s=20, where s is the distance measured along the loop. The intensity peak is reached 4-5 min later. Throughout the event, the brightening is restricted to the same region around s=20. The temporal evolution of the average signal in the brightening region (white square in Fig. <ref>) is shown in Fig. <ref>. All the filters exhibit a progressive increase in intensity between 08:10 and 08:13 UT, followed by a sudden intensity enhancement. Later, the EUV emission in the brightening region returns to approximately the same values it had before the event. The beginning of this brightening is also visible in the IRIS slit-jaw images at 2796 Å, 1330 Å, and 1400 Å. However, we do not discuss it in depth since IRIS observations finished at 08:14 UT, before the peak of the event, which prevents a comprehensive study.
At the sunspot umbra, no significant changes in the EUV intensity are found. We also do not find enhanced emission in IRIS slit-jaws at 1330 Å and 1400 Å. Only the 2796 Å filter often exhibits clear emission from umbral flashes <cit.>. The examination of the umbra in NUV, FUV, and EUV shows no indications that the inferred magnetic field fluctuations are associated with energetic events taking place at the transition region or corona.
§.§ Wavelet analysis
To further explore the oscillatory nature of the velocity and magnetic field fluctuations, we have performed a wavelet analysis <cit.> of both variables. The wavelet transform (WT) decomposes a time series into time and frequency domains, allowing the determination of the dominant modes and their temporal evolution. Wavelet analysis can also be employed to compute the wavelet cross spectrum (WCS) between two time series. This quantity exhibits a large value when both signals have large power at similar frequencies and around the same time and, more interestingly, it can be used to derive the phase difference as a function of time and frequency. Wavelet analysis is a common approach for the study of oscillations in the solar atmosphere <cit.>.
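As an illustration of how such a cross spectrum can be obtained in practice, a minimal sketch using the PyWavelets package is given below; the choice of a complex Morlet mother wavelet and the absence of smoothing are simplifying assumptions, and the sign convention of the returned phase should be verified against a test signal before interpretation.

import numpy as np
import pywt

def wavelet_cross_phase(sig1, sig2, dt, scales):
    # Continuous wavelet transforms of two evenly sampled, cotemporal signals.
    w1, freqs = pywt.cwt(sig1, scales, "cmor1.5-1.0", sampling_period=dt)
    w2, _ = pywt.cwt(sig2, scales, "cmor1.5-1.0", sampling_period=dt)
    cross = w1 * np.conj(w2)        # wavelet cross spectrum
    return np.angle(cross), freqs   # phase difference (radians) per scale and time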
Figure <ref> shows the wavelet power of the photospheric and chromospheric velocities and the power of the WCS between both signals for one umbral location. The photospheric and chromospheric velocity power are consistent with the results from many previous studies of umbral oscillations. Photospheric velocity oscillations are dominated by fluctuations in the 5-minute band (3-4 mHz), whereas at the chromosphere the main power is shifted to the 3-minute band (with a peak at around 6-7 mHz in this dataset). The power also exhibits some variations during the temporal series. As expected, higher chromospheric power is found during the time steps where the amplitude of the chromospheric oscillations is higher. Interestingly, at those times (08:15-08:25 UT) there is also a photospheric power increase in the 3-minute band (around 6 mHz). This photospheric power is possibly the predecessor for the chromospheric counterpart <cit.>.
The phase angle of the WCS provides the phase difference between the two signals employed for its computation. This is equivalent to computing the phase of the Fourier cross spectra, but the use of wavelets allows the evaluation of the changes in the phase difference with time. In the following, only the information from spectral regions inside the cone of influence (discarding those parts where the edge effects are relevant) and with a confidence level in the WCS above 95% are considered for the interpretation of the results. We have computed the phase angle for all the locations belonging to the dark part of the umbra (20 spatial positions, excluding the umbral region where umbral dots are abundant). Figures <ref> and <ref> illustrate the measured phase difference and coherence (as a function of frequency) between the photospheric and chromospheric velocities for two selected times. Each circle in the top panels represents the phase difference at one umbral position at a given frequency. A positive phase shift indicates that the chromospheric velocity lags the photospheric velocity. The bottom panels show the coherence spectra. They indicate the statistical significance of the phase difference between two signals, according to the phases measured for a certain number of pairs of signals (in our case, 20 pairs). In addition to the confidence level of the WCS, we also employ coherence as an index to validate the phase results. We consider the measured phase differences to be relevant when the coherence is above 0.7 (horizontal black dashed lines in bottom panels from Figs. <ref> and <ref>).
Figure <ref> shows the phase difference spectra between photospheric and chromospheric velocities at 08:13 UT, that is, just before the sudden brightening in the EUV emission (Fig. <ref>) and before a strong vertical magnetic field is inferred from the HeI 10830 Å triplet (Fig. <ref>). The phase shift of frequencies below ∼3.5 mHz is not reliable since both the confidence and coherence are low. Above that frequency, the phase shift progressively increases, which is a clear indication of the presence of upward wave propagation between the two layers probed by the SiI 10827 Å and HeI 10830 Å lines.
After the EUV brightening (∼ 08:20 UT), the phase spectrum between photospheric and chromospheric velocities exhibits a similar behavior (Fig. <ref>), that is, the phase difference increases with the frequency. However, at this time step, the slope of the increment is lower. This indicates a lower phase shift between the velocity in both layers and points to some differences in the probed oscillations. To illustrate these differences, we have fitted the phase shift to a model of linear wave propagation in a gravitationally stratified isothermal atmosphere with radiative losses, following <cit.>. The model has three parameters: temperature (T_ 0), height difference between both signals (Δ z), and the radiative cooling time (τ_ R). For the temperature, we selected the average umbral temperature inferred from the inversion of the SiI 10827 Å line (T_ 0=4160 K). The other two parameters were obtained from a manual fit of the phase difference to the model. We found that the phase spectrum prior to the peak of the EUV brightening is better explained by a model with Δ z=900 km, in close agreement with <cit.> and <cit.>. In contrast, a few minutes after the EUV brightening, the phase spectrum is best fitted with a model with Δ z=450 km.
Figure <ref> illustrates the wavelet analysis of the WCS and the phase spectra between the LOS velocity and LOS magnetic field measured at the chromosphere from the inversion of the HeI 10830 Å line. A highly significant (above the 95% confidence level and total coherence) phase delay of 133^∘± 28^∘ is measured, with the magnetic field lagging the velocity. This is consistent with the velocity and magnetic field fluctuations illustrated in Figs. <ref> and <ref>.
§ DISCUSSION
§.§ Magnetic field fluctuations and opacity effect
The interpretation of the large LOS magnetic field fluctuations measured in our observations poses a compelling challenge. In the umbra, the LOS magnetic field increases from ∼2000 G to more than 2900 G (Fig. <ref>), possibly the strongest magnetic field strength ever measured with the HeI 10830 Å line. Other independent inversions of the HeI 10830 Å line in sunspots (analyzing raster maps instead of temporal series with short scanning cadence) have inferred magnetic field strengths around 1500 G <cit.>. This strong magnetic field is hard to understand as a manifestation of the actual chromospheric magnetic field. Instead, we hypothesize that these large fluctuations in the magnetic field are produced by remarkable changes in the response height of the HeI 10830 Å triplet to the magnetic field.
We have evaluated the vertical magnetic field gradient in our observations by comparing the photospheric and chromospheric magnetic fields. The photospheric magnetic field at logτ = -2 is inferred from the SiI 10827 Å line, whereas the chromospheric field is derived from the HeI 10830 Å triplet. Figure <ref> shows the variation of the magnetic field at both atmospheric layers as a function of the position along the slit. For each spatial position, the median of the complete temporal series has been computed. As expected, the magnetic field strength decreases with height. The umbral magnetic field strength in the photosphere is higher by a factor of 1.1-1.4 compared to the chromosphere. This factor is slightly lower than that measured by <cit.>, but consistent since our estimation of the photospheric magnetic field corresponds to a higher photospheric layer.
Figure <ref> illustrates the correlation between the amplitude of the HeI 10830 Å magnetic field fluctuations (δ B_ LOS[ch]) and the magnetic field gradient between the photosphere and chromosphere (Δ B_ LOS[ph-ch]). In this diagram, each red dot represents a location in the umbra. Δ B_ LOS[ph-ch] was computed as the difference between the two median magnetic fields shown in Fig. <ref>. The following procedure was carried out to determine δ B_ LOS[ch]: (i) the HeI 10830 Å magnetic field temporal series at the selected location was smoothed by averaging in 7 time-steps windows (∼39 seconds), (ii) the time steps of all the maximums (minimums) were estimated by selecting those steps whose field strength is higher (lower) than the two adjacent times, (iii) all the maximums (minimums) with a magnetic field strength above (below) the 90% (10%) percentile were averaged, and (iv) the difference between the averaged maximum and minimum was computed. Step (iii) was performed to discard local maximums (minimums) that are not representative of magnetic field peaks (valleys).
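For reproducibility, the four steps above can be condensed into the following sketch (window length and percentiles as quoted in the text; this is a simplified rendering of the procedure, not the exact analysis code).

import numpy as np

def fluctuation_amplitude(b_los, window=7, pct=90):
    # (i) smooth with a running mean over `window` time steps (about 39 s here)
    b = np.convolve(b_los, np.ones(window) / window, mode="same")
    # (ii) local maxima / minima: larger / smaller than both neighbours
    core = b[1:-1]
    maxima = core[(core > b[:-2]) & (core > b[2:])]
    minima = core[(core < b[:-2]) & (core < b[2:])]
    # (iii) keep peaks above the 90th and valleys below the 10th percentile
    peaks = maxima[maxima >= np.percentile(maxima, pct)]
    valleys = minima[minima <= np.percentile(minima, 100 - pct)]
    # (iv) amplitude = averaged maxima minus averaged minima
    return peaks.mean() - valleys.mean()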
A positive correlation between magnetic field fluctuations measured in HeI 10830 Å and the photosphere-chromosphere gradient is found at those places where Δ B_ LOS[ph-ch]≥ 500 G. The strongest magnetic field fluctuations measured in HeI 10830 Å are found at those locations where the photospheric magnetic field is much higher than the chromospheric field. This result strongly supports the interpretation of the HeI 10830 Å magnetic field oscillations as changes in the response height of the line, at least for the strongest magnetic field peaks. At umbral locations with lower Δ B_ LOS[ph-ch] no clear correlation is found in Fig. <ref>. The dispersion in the results is expected. First, the amplitude of the measured magnetic field fluctuations will depend on the magnitude of the excursions in the response of the line and, thus, Δ B_ LOS[ph-ch] does not fully characterize the amplitude of the oscillations. Second, we are evaluating the fluctuations in the magnetic field along the LOS. Variations due to changes in the orientation of the magnetic field are also expected to coexist with the opacity effect. All in all, we consider the relationship between δ B_ LOS[ch] and Δ B_ LOS[ph-ch] to be highly significant and a strong support of the opacity effect as the main effect behind the HeI 10830 Å magnetic field fluctuations.
§.§ Origin of the magnetic field fluctuations
The large value of the magnetic field fluctuations inferred with the HeI 10830 Å (e.g., Fig. <ref>) and their dependence on the underneath magnetic field (Fig. <ref>) suggest that the HeI triplet probes the magnetic field of the high photosphere during shocks. In this section, we discuss the origin of those shocks that can produce such a large displacement in the height response of the line.
Previous studies have reported strong magnetic fields inferred with neutral helium in active regions associated with energetic events. <cit.> reported
LOS magnetic fields up to 2400 G in HeI 10830 Å after a supersonic coronal downflow impacting the lower umbral atmosphere. Using observations of the HeI D_ 3, <cit.> inferred a 2500 G magnetic field in flare footpoints. <cit.> also observed a remarkable magnetic field increase in HeI 10830 Å during a flare. All these works suggested that neutral helium lines form at deeper layers during the analyzed events.
Here, our observations reveal similar enhancements in the magnetic field strength but also notable differences. The most relevant difference is that our data exhibit no trace of HeI 10830 Å in emission. None of the large magnetic field fluctuations we have reported are associated with emission profiles. Also, we do not find enhanced emission in any of the AIA channels at the umbral locations where strong magnetic fields take place, as opposed to <cit.> and <cit.>. Our examination of the AIA EUV channels revealed a brightening in one of the coronal loops that is anchored to the umbra. Although this event could potentially be related to some of the measured magnetic field peaks, we note that strong magnetic fields, up to 2800 G, are also inferred before that brightening takes place (compare Figs. <ref> and <ref>).
The chromospheric umbral velocity measured in our observations is undoubtedly the signature of slow magnetoacoustic waves, as previously reported by many authors <cit.> and proved by the inferred upward propagation of longitudinal waves (Figs. <ref> and <ref>). The detected magnetic field fluctuations exhibit a strong coherence with the velocity oscillations (Fig. <ref>), pointing to a common origin of both signals. Thus, we conclude that the changes in the formation of the HeI 10830 Å produced by slow magnetoacoustic wave shocks are the origin of the magnetic field enhancements.
The HeI 10830 Å opacity is determined by the photonionisation rate in the HeI ground state continuum and the chromospheric electron density <cit.>. The former is given by coronal and transition region irradiation, which we assume does not change significantly during the analyzed process since we do not observe intensity enhancements in AIA or IRIS observations. We speculate that fluctuations in the chromospheric electron density are the main contribution of sunspot oscillations to changes in the HeI 10830 Å opacity, which manifests as fluctuations in the inferred magnetic field. These changes in the opacity shift the response of the line even to high photospheric layers, where the population of triplet state helium is low <cit.>. The strong downflows from the shocks, combined with the longer relaxation timescales for recombination from HeII in a nonequilibrium state <cit.>, may populate the helium triplet state in those low atmospheric layers, as suggested by <cit.>.
The impact of the downflowing material coming from the brightening event identified in EUV data could potentially modify the measured oscillations. For example, we have detected some differences in the phase shift between the velocity oscillations inferred from the HeI 10830 Å and the SiI 10827 Å lines (compare Figs. <ref> and <ref>). After the brightening, the phase shift can be fitted with a theoretical model of wave propagation with a lower height difference between the atmospheric height of both signals. However, this fitting must be interpreted with care. The model assumes linear wave propagation, but recent findings indicate that umbral chromospheric oscillations are not propagating but are stationary instead <cit.>. The estimated height difference should be interpreted as the height difference between the formation height of the SiI 10827 Å line and the atmospheric layer where stationary umbral oscillations start, since above that layer the phase of the wave is constant. Also, the formation height of the lines fluctuates. These changes can be striking in the case of the HeI 10830 Å triplet, as proven by our examination of the magnetic field fluctuations. The atmospheric layers where the triplet is sensitive change at temporal scales shorter than most of the periods probed in our wavelet analysis.
§.§ Alternative scenarios of magnetic field fluctuations
In this section, we discuss several models for magnetic field oscillations that can be proposed as alternatives to the opacity effect. We argue that none of them can satisfactorily explain our measurements.
Magnetized atmospheres can support three different wave modes: fast and slow magnetoacoustic waves and the Alfvén mode. Strictly speaking, this separation only holds for homogeneous plasmas, where the three wave branches are effectively decoupled, but these terms are generally employed for inhomogeneous plasmas like those found in the Sun. In a medium where magnetic pressure is much higher than gas pressure, like the umbral chromosphere, slow magnetoacoustic waves behave like acoustic waves propagating mainly along the magnetic field lines. They barely produce changes in the magnetic field since their oscillations are longitudinal and, thus, their motions are mostly directed along the field lines. In contrast, fast and Alfvén waves can generate LOS magnetic field fluctuations at a given atmospheric height through two different processes <cit.>: compression, where horizontal motions compress and expand the field-line density, and bending, in which the horizontal motions cause changes in the orientation of the magnetic field lines with respect to the observer. The former corresponds to fast magnetoacoustic waves, whereas the latter is associated with Alfvén waves. The V-B phase expected for the compression mechanism is -90^∘ (magnetic field fluctuations leading velocity fluctuations), while in the case of the bending mechanism it should be 0 <cit.>. Our phase shift measurements cannot be explained by these processes (Fig. <ref>).
Under the bending mechanism scenario, we could assume that the strongest chromospheric LOS magnetic field (∼2900 G) corresponds to the times when the magnetic field vector is directed along the LOS. A background magnetic field (not associated with post-shock atmospheres) of ∼2000 G would require the field vector to be inclined ∼46^∘ from the LOS, implying a transversal field strength around 2100 G. Such a strong transversal magnetic field would leave an imprint in the linear polarization signal, which is not present in our observations (in the umbra, no Stokes Q and U signals are detected above the noise level). In addition, <cit.> found that shock waves produce small changes in the inclination of the magnetic field (below 8^∘). Our results cannot be interpreted as the manifestation of changes in the direction of the magnetic field vector.
As previously discussed, the chromospheric velocity oscillations measured in the umbra are produced by slow magnetoacoustic waves. The strong coherence of the HeI 10830 Å velocity oscillations with the detected magnetic field fluctuations indicates that both signals are caused by the same phenomenon. Thus, we can discard fast magnetoacoustic and Alfvén waves as the origin of these magnetic field oscillations.
The propagation of waves along flux tubes imposes certain boundary conditions that lead to the existence of many oscillatory eigenmodes, which can be discriminated according to their radial structure, the number of nodes in the azimuthal direction, and their wave speed <cit.>. In recent years, several works have claimed the detection of those resonant modes in sunspots <cit.>. Many detections in small magnetic flux tubes have also been reported <cit.>. Analytical works have determined the phase relations between the fluctuations produced by these wave modes in several observables <cit.>, including the V-B phase. Both studies predict a V-B phase with either 180^∘ or ± 90^∘ phase shifts, depending on the propagating/standing nature of the waves and the wave modes involved. None of these estimations can account for the 133^∘± 28^∘ V-B phase measured in our observations. In addition, the amplitude of the magnetic field fluctuations predicted for those eigenmodes under chromospheric conditions is significantly lower than the amplitude of the detected field fluctuations. These models were developed to support the interpretation of photospheric oscillations and do not account for some properties of the umbral chromosphere, such as the expansion of the magnetic field with height or the presence of non-linearities. Thus, the comparison with our data must be assessed with caution.
§ CONCLUSIONS
We have reported chromospheric magnetic field fluctuations in a sunspot umbra inferred from inversions of the HeI 10830 Å line. The large amplitude of these fluctuations, reaching LOS magnetic field strengths up to 2900 G, makes it challenging to associate them with chromospheric magnetism. Their magnetic field strength is comparable to that found in the underneath umbral photosphere. We interpret these fluctuations as the result of the opacity effect. Immediately after the shocks, the response of the HeI 10830 Å line to the magnetic field is shifted to lower atmospheric heights.
These magnetic field fluctuations show remarkable coherence and a well-defined phase shift with the velocity oscillations. This finding clearly indicates that they are driven by the slow magnetoacoustic waves that we detect in the velocity signal and discards the contribution from other wave modes (fast and Alfvén) as the origin of the magnetic field fluctuations. Also, from the examination of co-temporal UV data, we find no indications of coronal or transition region energetic events that could be impacting the lower atmosphere and driving the fluctuations.
Our observations open several questions regarding the formation of the HeI 10830 Å line. We find that, after the shocks produced by the upward propagation of magnetoacoustic waves, the response of the HeI 10830 Å triplet may come from atmospheric layers as low as the high photosphere. This interpretation is in agreement with the suggestion from several works that have analyzed the magnetic field inferred in neutral helium lines after flares or supersonic coronal downflows <cit.>. Here, we found that such energetic events are not required to produce striking fluctuations in the response height of the HeI 10830 Å triplet since they can also be produced by shocks associated with wave propagation. New observational analyses of temporal series are necessary to assess whether this is a common behavior or an uncommon occurrence due to some of the properties of the analyzed sunspot.
Financial support from grants PGC2018-097611-A-I00, PID2021-127487NB-I00, and PGC2018-102108-B-I00, funded by MCIN/AEI/ 10.13039/501100011033 and by “ERDF A way of making Europe” is gratefully acknowledged. TF acknowledges grant RYC2020-030307-I funded by MCIN/AEI/ 10.13039/501100011033 and by “ESF Investing in your future”. SJGM is grateful for the support of the European Research Council through the grant ERC-2017-CoG771310-PI2FA, the MCIN/AEI/ 10.13039/501100011033 and “ERDF A way of making Europe” through grant PGC2018-095832-B-I00, and the project VEGA 2/0048/20. The 1.5-meter GREGOR solar telescope was built by a German consortium under the leadership of the Leibniz-Institut für Sonnenphysik in Freiburg with the Leibniz-Institut für Astrophysik Potsdam, the Institut für Astrophysik Göttingen, and the Max-Planck-Institut für Sonnensystemforschung in Göttingen as partners, and with contributions by the Instituto de Astrofísica de Canarias and the Astronomical Institute of the Academy of Sciences of the Czech Republic. The redesign of the GREGOR AO and instrument distribution optics was carried out by KIS, whose technical staff is gratefully acknowledged.
|
http://arxiv.org/abs/2307.02918v1
|
20230706111303
|
Does personality affect the allocation of resources within households?
|
[
"Gastón P. Fernández"
] |
econ.GN
|
[
"econ.GN",
"q-fin.EC"
] |
Does personality affect the allocation of resources within households?
[Version: August 1, 2023]
Gastón P. Fernández[Ph.D. student at the University of Leuven (KU Leuven), Department of Economics, Naamsestraat 69, box 3565, 3000 Leuven (e-mail: [email protected]). I deeply appreciate the invaluable guidance of my advisors Laurens Cherchye and Frederic Vermeulen. I would also like to thank Wietse Leleu and all participants at the Conference of the European Society for Population Economics (ESPE) in Belgrade, the Trans-Atlantic Doctoral Conference (TADC) in London, and the Public-Labor-Health Seminar, the Household Economics Gathering, and the ECORES Summer School in Leuven for their helpful comments. All errors are on my own.]
University of Leuven (KU Leuven)
This paper examines whether personality influences the allocation of resources within households. To do so, I model households as couples who make Pareto-efficient allocations and divide resources according to a distribution function. Using a sample of Dutch couples from the LISS survey with detailed information on consumption, labor supply, and personality traits at the individual level, I find that personality affects intrahousehold allocations through two channels. Firstly, the level of these traits act as preference factors that shape individual tastes for consumed goods and leisure time. Secondly, by testing distribution factor proportionality and the exclusion restriction of a conditional demand system, I observe that differences in personality between spouses act as distribution factors. Specifically, these differences in personality impact the allocation of resources by affecting the bargaining process within households. For example, women who are relatively more conscientious, have higher self-esteem, and engage more cognitively than their male partners receive a larger share of intrafamily resources.
JEL Classification Numbers: D1, J12, J22, J24
Keywords:
Collective Household Model, Distribution Factors, Personality Traits.
§ INTRODUCTION
There is increasing evidence that personality traits matter for relevant life outcomes <cit.>. For instance, personality is associated with the formation of future cognitive skills <cit.> or with the educational and occupational choices over the life cycle <cit.>. Personality is also correlated with the probability of marriage and divorce <cit.> and is a relevant attribute on which individuals sort into the marriage market <cit.>. Nevertheless, much less is currently known about personality's impact on intrahousehold consumption patterns. For example, do personality traits affect the allocation of resources through their impact on individual preferences over goods? Or are there other mechanisms by which personality might shape the way couples decide over total resources? Is personality related to the distribution of power within households?
In this paper, I aim to empirically investigate the questions mentioned above by structurally testing the role of personality traits in resource allocation within households. I show that differences in personality traits between spouses play a significant role in shaping the distribution of resources within established households. Families are modeled as couples who make static decisions regarding private and public consumption and also allocate their time to the labor market. As a starting point, I assume that each adult household member has their own rational preferences. Additionally, I assume that couples make Pareto-efficient allocations and distribute resources among household members through an intrahousehold decision process <cit.>. By adopting this framework, I can test the concept of collective rationality, which refers to the collective model, using observed household allocations. This approach allows me to uncover relevant information underlying the consumption process. The main focus of this paper is to explore the hypothesis that personality traits may partially determine how couples divide resources. To investigate this, I test various theoretical restrictions of the collective model as formalized by bourguignon2009efficient. The collective framework not only enables the characterization of couples in terms of rational decisions but also allows for the integration of individual personality into a model of household consumption and labor supply.
This article contributes theory-based evidence about new channels that may explain consumption inequality within households. In the collective model, couples maximize a weighted sum of individual utilities, where the weights are referred to as Pareto weights. In the collective literature, when examining the impact of a specific variable on household behavior, a distinction is made between two channels: preference and distribution factors. Preference factors typically influence individual preferences for consumed commodities, while distribution factors specifically affect the decision-making process within the household through changes in the Pareto weights. In this sense, the level of a specific variable (e.g., years of schooling) is often considered as a preference factor and the relative amount of it (e.g., differences in education between partners) as a distribution factor. I leverage this notion and investigate both distribution factor proportionality and the exclusion restriction of a conditional demand system by utilizing differences in personality between spouses. I test the testable restrictions derived from collectively rational behavior and find no evidence to reject that differences in personality influence the bargaining process. Furthermore, I demonstrate that certain personality factors, such as differences in conscientiousness or self-esteem between spouses, are strongly associated with consumption inequality within the household. These findings provide valuable insights into the role of personality traits in shaping intrahousehold resource allocation dynamics.
Distribution factors, which influence household decisions without directly impacting preferences, have been extensively studied in the collective literature. These factors encompass a wide range of variables, including relative wages among spouses and the presence of divorce laws in relevant matching markets. For instance, browning1994income demonstrate that the intrahousehold allocation of resources is related to factors such as relative ages and relative incomes in consumption models. chiappori2002marriage extend earlier versions of the collective model and test their implications by introducing the local sex ratio and divorce laws as distribution factors in a labor supply model. In a nonparametric setting, cherchye2011revealed examine the relationship between the intrahousehold share of income and differences in age and educational level between spouses. Furthermore, exploiting exogenous variation from a randomized cash transfer program in Mexico, several studies have constructed distribution factors and tested the theoretical restrictions of the collective model (see bobonis2009allocation, attanasio2014efficient, de2022household).[See browning2014economics for a comprehensive review. ]
Building upon the collective framework and the existing applied research on the impact of personality, this paper contributes novel evidence suggesting that both intrahousehold rational behavior and consumption inequality are linked to the personality types of household members. While recent advancements in personality research have been extensively reviewed (see john2010handbook), the detailed examination of this particular issue is still relatively unexplored. In a related study, flinn2018personality develop a model of household behavior and apply it to Australian data to investigate how personality traits influence cooperative and non-cooperative interactions within households, as well as members' labor supply and wage rates. Their findings demonstrate that personality directly affects intrahousehold behavior and also indirectly impacts individual wages. The approach taken in the present paper differs from flinn2018personality. Instead of applying a behavioral model to the data, this study leverages a set of testable restrictions derived from bourguignon2009efficient, which serve as necessary and sufficient conditions for the collective model. By adopting this approach, the current study can structurally test the extent to which personality traits determine the allocation of resources between partners by influencing their respective bargaining positions within the household.
The rest of the paper unfolds as follows. Section 2 provides an introduction to the notation used and presents a collective model of household consumption and labor supply. This section also outlines the testable restrictions of the model based on observed household behavior, specifically focusing on distribution factor proportionality and the exclusion restriction of a conditional demand system. In Section 3, the sample used in the analysis is described, along with the available measures of personality traits. Section 4 outlines the empirical strategy employed in the study. It discusses how potential issues of multicollinearity in personality traits are addressed. Furthermore, it presents the functional form for the household demand functions and explains how tests of the collective model are derived from these functions. Section 5 presents the results obtained from testing the restrictions of the collective model. In Section 6, the relationship between intrahousehold consumption inequality and personality traits is discussed. Finally, Section 7 concludes the paper.
§ THEORY
The analysis considers households consisting of two adult members: the wife (f) and the husband (m). These individuals jointly make consumption decisions involving a Hicksian public good (C ∈ℝ_+), private Hicksian assignable goods for each member (c^i ∈ℝ_+), and individual leisure time (ℓ^i = T - L^i), where ℓ^i ∈ℝ_+ represents the amount of leisure time, T is the time endowment for each individual, and L is the time supplied to labor (i=m, f). It is assumed that children, if present, do not participate in the allocation of the household budget. The prices of all Hicksian goods are normalized to one and wages (w^i ∈ℝ_++) represent the prices of individual leisure.
The preferences of household members are captured by well-behaved utility functions. Each individual has an egoistic utility function denoted as u^i(c^i, ℓ^i, C; ξ). The utility function also depends on the vector ξ, which represents observed heterogeneity (i.e., taste shifters).
In the collective model of chiappori1988rational, chiappori1992collective, any Pareto-efficient intrahousehold allocation can be characterized as the solution of the following optimization program:
(P1)   max_{c^m, c^f, ℓ^m, ℓ^f, C}  [ u^m(c^m, ℓ^m, C; ξ) + μ(w^m, w^f, y, z) u^f(c^f, ℓ^f, C; ξ) ]
s.t.   c^m + c^f + C + w^m ℓ^m + w^f ℓ^f ≤ y,
       c^i ≥ 0,   C ≥ 0,   T ≥ ℓ^i ≥ 0,
where y is household full income defined by y = w^mT + w^fT + x with x ∈ℝ_+ the household nonlabor income, and μ∈ ]0, 1[ in the objective function is the Pareto weight that depends on (exogenous) wages, income, and distribution factors (z). A variation in elements of z could impact outside options of household members and thus their intrahousehold bargaining power (see vermeulen2002collective).[In axiomatic bargaining models, variables that are only applicable for threat points of the bargaining process can be potential distribution factors. See the discussion about extrahousehold environmental parameters in mcelroy1990empirical and about bargaining models in browning2014economics.] I take both household composition and intrafamily allocation of power as exogenously given. The solution to (P1) implies a set of differentiable household demand functions for goods and leisure that depend on prices, full income, observed heterogeneity, and the distribution function:
g=g[w^m, w^f, y,μ(w^m, w^f,y,z);ξ] ∀g∈{c, ℓ, C}.
Distribution factor proportionality. As explained by bourguignon2009efficient, in a setting with no price variation distribution factor proportionality is necessary and sufficient for the collective model.[The first notions of the proportionality condition with only private consumption are introduced in bourguignon1993intra and browning1994income. bourguignon2009efficient extend these results for public goods and externalities in consumption.] This entails testing a set of cross-equation restrictions based on the estimation of the household demand system (1):
(∂ c^m/∂ z_1)/(∂ c^m/∂ z_k) = (∂ c^f/∂ z_1)/(∂ c^f/∂ z_k) = (∂ℓ^m/∂ z_1)/(∂ℓ^m/∂ z_k) = (∂ℓ^f/∂ z_1)/(∂ℓ^f/∂ z_k) = (∂ C/∂ z_1)/(∂ C/∂ z_k)   ∀ k = 2, …, K.
The intuition of equation (2) is that distribution factors (z) only affect the intrahousehold allocation of consumption and leisure through their impact on the distribution function (μ). To see this, take the marginal change in distribution factor z_k on the household demand for commodity j:
∂ g_j/∂ z_k = ∂ g_j/∂μ∂μ/∂ z_k.
Comparing the effect of two distribution factors, z_k and z_l, we get:
(∂ g_j/∂ z_k)/(∂ g_j/∂ z_l) = (∂μ/∂ z_k)/(∂μ/∂ z_l),
where the right-hand-side term in equation (4) is independent of the demand for good j.
z-conditional demand system. An alternative demand system is the z-conditional system coined by bourguignon2009efficient. Under the assumption that distribution factor z_1, say, is strictly monotonic on commodity c^m, say, it is possible to invert the demand function for such good on this (continuous) factor:
z_1 = v(w^m, w^f,y, c^m,z_-1;ξ),
where z_-1 is equal to z but excluding the first element.[Appendix B provides evidence that supports monotonicity between male private consumption and one of the distribution factors presented in Section 4.] Substituting (5) into the demand for the remaining goods Φ(·), we get the z-conditional demand system for g̃ with g̃∈{ c^f, ℓ, C}:
g̃ = Φ(w^m, w^f,y,z;ξ),
= Φ[w^m, w^f,y,v(w^m, w^f,y, c^m,z_-1;ξ),z_-1;ξ],
= g̃(w^m, w^f,y,c^m,z_-1;ξ).
The restriction of the collective model based on the estimation of the (conditional) demand system in equation (6) states that subject to the conditioning good (c^m), the demand for the remaining goods should be independent of all other distribution factors. This translates into the following testable implication:
∂g̃(w^m, w^f,y,c^m,z_-1;ξ)/∂ z_k = 0 ∀ k = 2, …, K.
The restriction described in equation (7) implies that, conditional on the commodity used to invert z_1, additional distribution factors should not provide any meaningful additional information about the intrahousehold behavior. It is important to note that for this restriction to have empirical significance, it requires at least two distribution factors and at least two demand functions.
Although the testable implication in equation (7) is empirically more powerful than implication (2), which is used as a robustness check in the empirical application, both restrictions capture the same underlying mechanism.[See Proposition 2 in bourguignon2009efficient and the discussion thereof.] The intuition behind this is illustrated in Figure 1. Suppose we observe an optimal household demand function that is relatively more representative of m's preferences, such as g^0. Now, assume that we want to reallocate intrahousehold resources in a manner that is more favorable to the wife's (f) preferences, resulting in household decisions represented by g^1. The testable restrictions of the collective model inform us that variations in the distribution factors z would only impact such a reallocation of resources by shifting the individual bargaining weights (μ). In other words, distribution factors do not alter the Pareto frontier since they do not directly affect preferences or the budget constraint.
§ DATA
I use a sample of Dutch households obtained from the Dutch Longitudinal Internet Studies for the Social sciences (LISS) panel gathered by CentERdata. This dataset provides rich information on economic and sociodemographic variables. Crucially, it also collects detailed data on individual consumption and a set of member-specific personality scales.
The sample selection criteria for this study are as follows, similar to those used in other studies such as cherchye2017household and cherchye2012married. Couples included in the sample must have both adults between the ages of 25 and 65. Both adults in the couple must participate in the labor market for at least 10 hours per week, as wage information is required. Couples with at least one self-employed adult are excluded from the sample. This is because obtaining wage information for self-employed individuals is more complex compared to salaried workers. The sample includes only couples with no additional household members apart from children residing in the household. For example, couples living with friends or parents are excluded. Due to significant imbalance issues in the panel structure of the data, the study does not make use of the panel structure and treats the data as a pooled cross-section. Overall, the sample consists of 1130 couples pooled from five different years, ranging from 2009 to 2015.
Table 1 provides summary statistics for the main variables used in the analysis. All economic variables are in weekly real terms. Full income is defined as the sum of spouses' wages multiplied by the total time available (i.e., 112) plus any non-labor income of the household. Leisure for each partner is derived by subtracting the hours worked by each individual from the total available time. The dataset includes information on assignable consumption for each household member. This refers to individual expenditures on various goods such as food, tobacco, or clothing. In the empirical analysis, these individual expenditures are treated as a Hicksian aggregate commodity. Total household private consumption represents the sum of both spouses' total private consumption, including their individual assignable consumption. Household consumption is calculated as the sum of public consumption and assignable private consumption. Public expenses, such as mortgage payments, are considered as a Hicksian aggregate commodity. As shown in Table 1, females work fewer hours and have lower wages compared to males. In terms of assignable consumption, females spend slightly more per week than males. The majority of total household consumption comes from public expenses. Females allocate more time to leisure activities than males, although a detailed breakdown of non-labor time is not available.[Data about the individual time allocated to household chores is only available in three waves.] Demographically, males are slightly older and have a higher educational level compared to females.
The spouses' personality traits in this study are measured using three different sources. The first source is Rosenberg's Self-Esteem Scale <cit.>, which assesses individuals' perceptions of their self-worth. The second source is the Need For Cognition Scale <cit.>, which serves as a proxy for an individual's inclination to engage in intellectual activities. The third source is the Big Five Personality Traits questionnaire <cit.>, which captures personalities based on five overarching dimensions.[To construct each personality measure, I consider items with high loading values from exploratory factor analysis as in flinn2018personality and todd2020dynamic. These personality measures demonstrate high internal consistency, as indicated by Cronbach's alphas exceeding 0.7.] Out of the total 1130 couples in the sample, valid information on personality traits is available for 583 couples. For households with missing personality information, the values are imputed by averaging observed individual personality scores from other waves. This imputation approach takes into account the stability of personality traits over time, which has been suggested by previous studies.[See, e.g., cobb2012stability, todd2020dynamic or fitzenberger2022personality. See appendix A for the stability of personality traits in the current sample.] I test various imputation methods, such as using the median value, but the main results remain robust. Looking at the bottom of Table 1, on average, males tend to have higher values than females in measures of self-esteem, extraversion, and cognitive engagement. In contrast, females tend to score higher than males in conscientiousness, neuroticism, and agreeableness. Both males and females exhibit similar levels of openness. These gender differences in personality traits align with findings from previous studies conducted on Dutch samples (see, e.g., nyhus2005effects or dupuy2014personality). Importantly, the gender differences in personality traits observed in the sample remain virtually unchanged even after the imputation of missing personality traits.
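To make the imputation step concrete, the following minimal pandas sketch (with hypothetical column names and toy values, not the actual LISS variables) fills a missing trait score with the individual's own average across the waves in which the trait is observed, mirroring the stability argument above.

```python
import pandas as pd

# Hypothetical long-format panel: one row per individual and wave; the trait score
# may be missing in some waves.
panel = pd.DataFrame({
    "person_id":   [1, 1, 1, 2, 2, 3],
    "wave":        [2009, 2011, 2013, 2009, 2013, 2013],
    "self_esteem": [3.8, None, 4.0, 2.9, None, 3.3],
})

# Replace a missing score with the person's own average over the waves in which
# the trait is observed (personality is assumed to be stable over time).
panel["self_esteem_imputed"] = (
    panel.groupby("person_id")["self_esteem"]
         .transform(lambda s: s.fillna(s.mean()))
)
print(panel)
```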
§ EMPIRICAL STRATEGY
In this section, I discuss the measures of relative personality traits that are employed to examine the restrictions of the collective model outlined in Section 2. These relative personality traits capture the differences in personality between spouses within a household. The functional form for the household demand functions is also introduced. From these demand functions, several testable implications can be derived to assess the validity of the collective model.
Multicollinearity in personality traits. To address the issue of multicollinearity arising from the seven measures of personality traits, I employ principal component (PC) analysis. This analysis is applied to the entire sample, which includes both women and men. The goal is to identify the principal components that explain the majority of the variance in the observed personality measures. By extracting these principal components, which are linearly uncorrelated factors, the paper aims to reduce the dimensionality of the personality traits and mitigate the multicollinearity problem. This approach allows for a more precise estimation of the effects of personality traits on intrahousehold consumption behavior <cit.>.
Table 2 presents the correlations between the principal components (PCs) and the individual personality measures, as well as the eigenvalues and the share of observed variance explained by each PC. The results indicate that the two principal components capture distinct aspects of personality traits. PC1 is associated with traits such as introversion, lower self-esteem, and lower cognitive engagement. On the other hand, PC2 is characterized by higher levels of neuroticism and conscientiousness. The eigenvalues and the proportion of observed variance explained by each PC reflect their relative importance in explaining the variability in the original personality measures.
For each couple in the sample, the relative endowment of personality traits between partners is calculated by constructing the ratio of spouses' principal components (PCs). These ratios, representing the relative distribution of personality traits, are treated as continuous measures and tested as distribution factors in the collective consumption model presented in Section 2. To facilitate comparison and analysis, the PCs are scaled from 1 to 100, considering that they can take negative values. Figure 2 displays the distribution of these ratios. On average, women tend to have higher values for the common personality factor represented by PC1 compared to men. In contrast, men exhibit higher values for PC2 relative to women.
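A minimal sketch of this dimension-reduction step is given below. It uses simulated scores and hypothetical column names rather than the LISS data, extracts two principal components on the pooled sample, rescales them to [1, 100], and forms the within-couple ratios used as candidate distribution factors.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated individual-level scores on the seven personality measures.
traits = ["self_esteem", "need_cognition", "extraversion", "agreeableness",
          "conscientiousness", "neuroticism", "openness"]
people = pd.DataFrame(rng.normal(size=(200, len(traits))), columns=traits)
people["couple_id"] = np.repeat(np.arange(100), 2)
people["sex"] = np.tile(["f", "m"], 100)

# First two principal components on the pooled (women and men) sample.
standardized = (people[traits] - people[traits].mean()) / people[traits].std()
pcs = PCA(n_components=2).fit_transform(standardized)
people["PC1"], people["PC2"] = pcs[:, 0], pcs[:, 1]

# Rescale each component to [1, 100] so the spousal ratios are well defined.
for pc in ["PC1", "PC2"]:
    x = people[pc]
    people[pc] = 1 + 99 * (x - x.min()) / (x.max() - x.min())

# Candidate distribution factors: within-couple ratios of the wife's to the husband's PCs.
wide = people.pivot(index="couple_id", columns="sex", values=["PC1", "PC2"])
ratios = pd.DataFrame({"z1": wide[("PC1", "f")] / wide[("PC1", "m")],
                       "z2": wide[("PC2", "f")] / wide[("PC2", "m")]})
print(ratios.describe())
```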
Parametrization of unconditional demand functions. To test the restrictions of the collective model, a functional form for the household demand functions needs to be specified. I follow bobonis2009allocation and parametrize the unconditional demand functions g∈{c,ℓ,C} in budget share form as:
ω_j = α_j +ln(z')β + a_j(y) +b_j(y^2) + ln(w')λ + e'δ +m'ψ + τ_j + ε_j ,
where for each couple in the sample, ω is the budget share on good j, a and b are functions of full income and its square, w is a vector of partners' wages, τ are time dummies capturing heterogeneity over time, and ε is unobserved heterogeneity.[Potential sources of endogeneity for full income are measurement error in nonlabor income, taste shocks to total consumption that could be correlated to unobserved heterogeneity in the budget shares equations, or saving decisions that may be driving changes in nonlabor income.] Prices of composite goods, which are normalized to one, are assumed to enter through τ. The vector z includes the relative endowment of personality traits, i.e., ratios of PCs between partners of a household. The additional controls e and m are detailed below.[The assumption of a linear-log functional form allows for a straightforward interpretation of the coefficient estimates in the empirical model. Additionally, the empirical results remain consistent regardless of the specific functional form assumption chosen (results can be provided upon request).]
One potential source of endogeneity in equation (8) is the endogenous selection of couples in the marriage market, wherein individuals may form couples based on their respective personality traits. Despite the limitations of the current dataset, I address this potential issue in two ways.[Fully addressing selection in personality traits, such as through the estimation of a structural matching model, is beyond the scope of this paper.] First, the vector of taste shifters (e) includes, among other explanatory variables, the logarithm of the level of principal components (PCs) of each spouse and their squares. I include the squares of the PCs to accommodate for potential nonlinearity in the influence of personality on preferences over commodities, as suggested in the analysis of borghans2008economics. Second, in all specifications, I incorporate the vector m to account for marriage market conditions with respect to personality, as discussed in dupuy2014personality. This vector incorporates the weighted ratios of the number of husbands and wives who are of similar age and educational level and who have the same score in a given personality trait as the husband or wife of each household, divided by the corresponding number of husbands or wives. These ratios, referred to as personality ratios, are akin to the sex ratio concept in chiappori2002marriage and serve to control for the underlying structure of the marriage market in the sample with respect to personality traits.
The proportionality restriction imposed by collective rationality (as expressed in equation (2)) on the system of unconditional demand functions can be formulated as follows:
(∂ω_j/∂ln(z_1))/(∂ω_j/∂ln(z_2)) = (∂ω_s/∂ln(z_1))/(∂ω_s/∂ln(z_2)),
β_j1/β_j2 = β_s1/β_s2
for all goods j, s, with j ≠ s. If condition (9) is satisfied, it implies that there is no evidence to reject the hypothesis that the effects of differences in personality traits between partners on resource allocation occur solely through their influence on the household's distribution function.
To test the nonlinear cross-equation restrictions presented in equation (9), the model is estimated as a system, allowing for correlation between the error terms across the budget shares equations. The cross-equation hypotheses are then examined using Wald test formulations. It is important to note that these formulations may be subject to statistical issues. For instance, in OLS systems, Wald tests tend to over reject the null hypothesis, and they are not invariant to the definition of the null hypothesis (see greene2003econometric). To address these concerns, this study adopts a similar approach to that of bobonis2009allocation. Firstly, the Wald tests are conducted using the bootstrap distribution with 1000 replications. Secondly, as a robustness check of the main results, linear Wald tests are computed based on the estimation of the z-conditional demand system proposed by bourguignon2009efficient.
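The following simplified sketch illustrates the mechanics of the test on simulated data: budget shares are generated so that the distribution factors enter only through a common index μ, each share equation is estimated by OLS, and the cross-equation gaps between the β_1/β_2 ratios are bootstrapped by resampling couples. It deliberately omits wages, demographics, time dummies, and the formal Wald statistic of the actual specification, so variable names and the data-generating process are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_shares(df, shares, regressors):
    """OLS of each budget-share equation on the distribution factors and controls."""
    X = sm.add_constant(df[regressors])
    return {s: sm.OLS(df[s], X).fit() for s in shares}

def ratio_gaps(df, shares, regressors):
    """Gaps between the beta_1/beta_2 ratios across equations (zero under proportionality)."""
    fits = fit_shares(df, shares, regressors)
    ratios = np.array([fits[s].params["ln_z1"] / fits[s].params["ln_z2"] for s in shares])
    return ratios[1:] - ratios[0]

# Simulated couple-level data in which the factors enter only through a common index mu,
# so the proportionality restriction holds by construction.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({"ln_z1": rng.normal(size=n), "ln_z2": rng.normal(size=n),
                   "ln_y": rng.normal(size=n)})
mu = 0.4 * df.ln_z1 + 0.8 * df.ln_z2
for s, b in zip(["w_cm", "w_cf", "w_lm", "w_lf", "w_C"], [0.3, -0.2, 0.5, -0.4, 0.1]):
    df[s] = 0.2 + b * mu + 0.1 * df.ln_y + rng.normal(scale=0.05, size=n)

shares = ["w_cm", "w_cf", "w_lm", "w_lf", "w_C"]
regressors = ["ln_z1", "ln_z2", "ln_y"]

# Bootstrap the ratio gaps by resampling couples with replacement.
gaps = np.array([ratio_gaps(df.sample(n, replace=True).reset_index(drop=True),
                            shares, regressors) for _ in range(200)])
print("observed gaps:", ratio_gaps(df, shares, regressors).round(3))
print("bootstrap 95% intervals:\n", np.percentile(gaps, [2.5, 97.5], axis=0).round(3))
```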
Parametrization of the z-conditional demand system. Under the additional assumption that one distribution factor is strictly monotone in one good, we can derive the demand for that good as a function of the distribution factor. In my analysis, I find suggestive evidence indicating the presence of a monotonic correlation between factor z_2 = PC_2^f/PC_2^m and male private consumption (c^m).[Refer to appendix B for detailed evidence on the monotonicity assumption. It is important to note that for the collective test based on the conditional demand system presented in this section, z_2 needs to be both continuous and statistically significant. For further discussion on this topic, see de2022household.]
In budget share form, the demand for male private consumption (c^m) inverted on z_2 is given by:
ln(z_2) = 1/β_c^m2[ω_c^m - α_c^m - β_c^m 1ln(z_1)-a_c^m(y)-b_c^m(y^2)
- ln(w')λ_c^m - e'δ_c^m - m'ψ_c^m - τ_c^m - ε_c^m].
Substituting equation (10) in g̃(w^m, w^f,y,c^m,z_-2;ξ), the demand for the remaining goods, we obtain the z-conditional demand system:
ω_s = φ_s + θ_sln(z_1) + a_s(y)+b_s(y^2) + β_s2/β_c^m2ω_c^m
- β_s2/β_c^m2[a_c^m(y)+b_c^m(y^2) +ln(w')λ_c^m + e'δ_c^m + m'ψ_c^m + τ_c^m] + ζ_s,
where
φ_s = α_s - α_c^mβ_s2/β_c^m2,
θ_s = β_s1 - β_c^m1β_s2/β_c^m2,
ζ_s = β_s2/β_c^m2ε_c^m + ε_s
for all goods s ≠ c^m. One important source of endogeneity that arises from the estimation of (11), is the fact that the share of male private consumption is not independent of the new compound error term ζ_s. A natural instrument for men's consumption is z_2 which satisfies the standard requirements for being a relevant and valid instrumental variable. It is worth noting that equation (10) demonstrates the correlation between ω_c^m and z_2, while the latter is excluded from equation (11). To mitigate this endogeneity problem, I employ a control function approach by incorporating the residuals from the first stage of the conditioning good into the estimation of equation (11).[Control functions for testing collective rationality are also used by bobonis2009allocation, attanasio2014efficient, de2022household.]
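A stylized version of the two-step procedure is sketched below on simulated data: the first stage regresses the conditioning share on both distribution factors and controls, the residual is added as a control function, and the second stage checks whether ln z_1 retains explanatory power conditional on ω_c^m. Variable names and the data-generating process are illustrative and not the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated couple-level data (illustrative names; not the LISS variables).
rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({"ln_z1": rng.normal(size=n), "ln_z2": rng.normal(size=n),
                   "ln_y": rng.normal(size=n)})
mu = 0.4 * df.ln_z1 + 0.8 * df.ln_z2          # toy distribution function
for s, b in zip(["w_cm", "w_cf", "w_lf", "w_C"], [0.3, -0.2, -0.4, 0.1]):
    df[s] = 0.2 + b * mu + 0.1 * df.ln_y + rng.normal(scale=0.05, size=n)

# First stage: conditioning good on both distribution factors and controls;
# keep the residual as the control function.
X1 = sm.add_constant(df[["ln_z1", "ln_z2", "ln_y"]])
df["cf_resid"] = sm.OLS(df["w_cm"], X1).fit().resid

# Second stage: z-conditional demands. Conditional on w_cm (and the control
# function), ln_z1 should carry no additional explanatory power under the model.
X2 = sm.add_constant(df[["w_cm", "ln_z1", "ln_y", "cf_resid"]])
for share in ["w_cf", "w_lf", "w_C"]:
    fit = sm.OLS(df[share], X2).fit(cov_type="HC1")
    print(f"{share}: theta = {fit.params['ln_z1']:+.3f}, p-value = {fit.pvalues['ln_z1']:.2f}")
```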
The exclusion restriction imposed by the collective model, as inferred from the estimation of the z-conditional demand system in equation (11), can be stated as follows:
∂ω_s/∂ln (z_1) =θ_s = 0 ∀ s ≠c^m.
For each budget share equation in the system (11), a linear test is conducted to assess the significance of the parameter estimate of the relative personality factor. Restriction (12) indicates that once we condition the demand for the remaining goods on the demand for c^m, which is monotonically related to z_2, the additional variation provided by z_1 does not play a significant role in determining the household equilibrium. This condition is equivalent to the requirement of distribution factor proportionality, as discussed in bourguignon2009efficient. The exclusion restriction stated in equation (12) carries greater empirical power compared to the cross-equation restrictions presented in (9). This observation further strengthens the robustness of the estimation results obtained for the unconditional demand system.
§ EMPIRICAL RESULTS
I estimate the unconditional demand system in equation (8) using ordinary least squares (OLS), while the z-conditional demand system in equation (11) is estimated using a control function approach. In the control function approach, I incorporate the residuals obtained from a first-stage regression of male private consumption into the demand for the other commodities. To account for heteroskedasticity, I use robust standard errors, and I cluster the standard errors at the household level in all specifications.
Table 3 presents the estimates of the system of unconditional demand equations. The specifications include several control variables: a linear control function for household full income and its square, instrumented with household potential income; the logarithm of spouses' wages and the interaction between them; the square of the husband's wage; husband's age and its square; husband's educational level; the number of children in the couple; a dummy variable indicating whether the couple is married or cohabiting; and time dummies.[Wife's age and educational level are not included in the specifications due to multicollinearity issues, as there is a significant positive assortative mating in age and education in the sample. However, the results remain robust when using the wife's characteristics as controls instead.] Additionally, I include the logarithm of the principal components in levels and their squares as additional taste shifters in the specifications. The marriage market personality ratios are also included in the analysis.
Firstly, it is observed that the relative endowments of personality between spouses have a significant impact on male private consumption, leisure time, and public expenditures. Both personality factors positively affect private and public consumption, but negatively influence the allocation of leisure. Secondly, the second distribution factor, which is associated with conscientiousness and neuroticism, has a larger average effect compared to the first distribution factor. Thirdly, the ratios of the estimated coefficients of the distribution factors across commodities, as indicated in equation (9), are 0.37 for c^m, 0.17 for c^f, 0.16 for l^m, 0.37 for l^f, and 0.31 for C. These proportional average effects across commodities are supported by the results of the (bootstrapped) proportionality test presented at the bottom of Table 3. This evidence suggests that relative personality influences an individual's consumption within a partnership, but solely through its impact on the distribution of power within the household. Finally, personality also directly affects the allocation of resources through its impact on preferences, as evidenced by the significant effect of the principal components in levels on commodities. Due to multicollinearity, the estimated coefficients of the wife's principal components are not included in the analysis.
Table 4 presents the estimates of the z-conditional demand functions based on equation (11). The same control variables are used as in the unconditional demand equations. It should be noted that the conditioning good is c^m, and the relative level of PC2 is employed to invert the demand for this good. Importantly, both personality factors have a significant impact on the budget share equation of c^m. The most compelling evidence is obtained from estimations where the budget share equation is responsive to both factors <cit.>. Additionally, the relative level of PC2 is statistically significant in four out of five budget share equations. In the unconditional demand system (Table 3), the relative amount of PC1 is significant for two commodities (male private consumption and female leisure). However, in the z-conditional demand system (Table 4), it is not significant in any case. This evidence suggests that the impact of relative personality is indeed one-dimensional, meaning that relevant information regarding the intrahousehold allocation of resources is completely summarized by the share of male private consumption. Crucially, this finding is confirmed by the result of the collective test at the bottom of Table 4.
§ PERSONALITY AND INTRAHOUSEHOLD CONSUMPTION INEQUALITY
After providing theory-based evidence that (relative) personality affects the bargaining weights of household members, it is important to explore the relationship between personality and within-family inequality. Following the approach of cherchye2020marital, I analyze intrahousehold consumption inequality using the women and men relative individual cost of equivalent bundle (RICEB). For a given couple, these bundles are defined as follows:
RICEB^i = (c^i + w^iℓ^i + C)/y with i ∈{m,f}.
Member-specific RICEBs describe how household members allocate consumption relative to the household's full income, taking into account both scale economies and the intrahousehold division of resources, thus providing an assessment of individual welfare.[It is worth noting that while the concept of RICEBs is related to the sharing rule concept in the collective literature, the RICEBs evaluate public expenditures at market prices instead of Lindahl prices. bostyn2022time utilize RICEBs to analyze individual welfare in a collective model that incorporates marriage market restrictions.] In this study, intrahousehold consumption inequality is proxied by the difference between partners' RICEBs, specifically RICEB^f minus RICEB^m.
Next, I examine the distribution of intrahousehold consumption inequality for three categories of couples: (a) households where the female fraction of a specific personality trait is above the 80th percentile of the distribution of all female fractions; (b) households where the female fraction of a specific personality trait is between the 45th and 55th percentiles of the distribution of all female fractions; and (c) households where the female fraction of a specific personality trait is below the 20th percentile of the distribution of all female fractions. This categorization allows for a comparison between households where the within-household female personality fraction is either high, moderate, or relatively low. It is important to note that the female personality fraction (r_p) for personality trait p is constructed as r_p = p^f/(p^f+p^m).[Appendix C provides a detailed overview of the distribution of these female personality fractions as well as the RICEB measures.]
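The construction of the RICEBs, the inequality proxy, and the three percentile-based comparison groups can be summarized in a short script. The sketch below uses simulated couple-level data with hypothetical variable names and a single generic trait p; it is only meant to make the definitions operational.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1000
T = 112
df = pd.DataFrame({
    "c_f": rng.uniform(20, 120, n), "c_m": rng.uniform(20, 120, n),
    "l_f": rng.uniform(50, 100, n), "l_m": rng.uniform(40, 90, n),
    "w_f": rng.uniform(8, 25, n),   "w_m": rng.uniform(10, 30, n),
    "C":   rng.uniform(100, 400, n),
    "p_f": rng.uniform(1, 5, n),    "p_m": rng.uniform(1, 5, n),   # one generic trait
})
df["y"] = df.w_f * T + df.w_m * T + rng.uniform(0, 100, n)         # full income

# RICEBs and the intrahousehold inequality proxy.
df["riceb_f"] = (df.c_f + df.w_f * df.l_f + df.C) / df.y
df["riceb_m"] = (df.c_m + df.w_m * df.l_m + df.C) / df.y
df["inequality"] = df.riceb_f - df.riceb_m

# Female personality fraction and the three comparison groups.
df["r_p"] = df.p_f / (df.p_f + df.p_m)
hi, lo = df.r_p.quantile([0.8, 0.2])
mid_lo, mid_hi = df.r_p.quantile([0.45, 0.55])
groups = {
    "female fraction high (>= p80)":      df.r_p >= hi,
    "female fraction moderate (p45-p55)": df.r_p.between(mid_lo, mid_hi),
    "female fraction low (<= p20)":       df.r_p <= lo,
}
for name, mask in groups.items():
    print(f"{name}: mean RICEB^f - RICEB^m = {df.loc[mask, 'inequality'].mean():+.3f}")
```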
Figure 2 illustrates how intrahousehold consumption inequality varies with the relative amount of personality within couples for each personality measure, comparing the three types of households mentioned above. First, it can be observed that couples with a moderate within-family difference in personality tend to exhibit, on average, a smaller degree of intrahousehold consumption inequality (indicated by the red dashed lines, which are more concentrated around zero on the horizontal axis). Second, for almost all personality measures (with the exceptions of openness and neuroticism), the black solid line is consistently positioned to the right of the blue dash-dotted line. This implies that a larger fraction of a woman's personality relative to her partner is associated with a greater allocation of intrahousehold resources towards her. This pattern is particularly pronounced for conscientiousness, self-esteem, and cognitive engagement.[Note that self-esteem and conscientiousness exhibited the highest loadings among the personality traits in the principal components analysis (see Table 2).] Indeed, in those three cases, as demonstrated in Panel A of Table 5, I strongly reject the null hypothesis of equal means between couples with a large and small female personality fraction (referring to the black and blue distributions in Figure 2). In Panel B of Table 5, I present the difference in average intrahousehold consumption inequality between households with large and small personality fractions in the sample. For instance, in couples where women exhibit higher levels of cognitive engagement than their male partners, there is an average of 4.48% more intrahousehold resources allocated to them compared to couples where men are more engaged in intellectual activities.
§ CONCLUSION
This paper presents compelling evidence, based on theoretical foundations, regarding the role of personality in resource allocation within households when assuming Pareto-efficient decision-making. By examining variations in personality traits among Dutch couples, this study tests for distribution factor proportionality and the exclusion restriction utilizing a conditional demand system estimation. The findings do not allow for the rejection of the hypothesis that (relative) personality influences the bargaining process within households. Notably, women who exhibit higher levels of conscientiousness, self-esteem, and cognitive engagement relative to their male partners tend to receive a larger proportion of intrafamily resources. To address potential selection bias in personality, the budget share equations are conditioned on the level of personality and additional explanatory variables that capture the structure of the marriage market in relation to personality traits within the sample.
The findings presented in this paper provide strong support for conducting a more comprehensive and structural analysis to explore the significance of personality traits within the family context, as well as the underlying mechanisms through which these traits exert their influence. Firstly, employing a model with a more robust parametric structure for preferences and the sharing rule, similar to approaches utilized by browning2013estimating or cherchye2017household, would offer deeper insights into the welfare implications of personality traits. Such an approach could enhance our understanding of how these traits affect individual well-being. Secondly, it is worth noting that several studies have demonstrated the importance of personality traits in assortative mating within the marriage market (lundberg2012personality or dupuy2014personality). Therefore, it would be valuable to estimate a matching model and examine the complete structure of the marriage market as a potential driver of power dynamics. This would allow for a comprehensive assessment of how personality traits shape partner selection and subsequent resource allocation within households. Lastly, the current paper's framework overlooks intertemporal aspects that are relevant to household consumption, such as the influence of personality on occupational or educational choices (todd2020dynamic). Considering these factors in future research would enhance the richness and applicability of the analysis.
§ APPENDIX
§ STABILITY OF PERSONALITY TRAITS
This section illustrates the evolution of personality over time for women and men in our sample. Figures A1 and A2 show the average score by age for each personality measure. I consider all waves together.
§ MONOTONIC RELATIONSHIP BETWEEN z_2 AND MALE PRIVATE CONSUMPTION (ω_c^m)
Following attanasio2014efficient, I study the relationship between the second distribution factor (z_2 = ln(PC2^f/PC2^m)) and the share of male private consumption (ω_c^m) by looking at the point estimates of different polynomials. The direction of the point estimates implies an increasing relationship between the share of men's private expenditures and the second measure of relative personality within households. This information, together with the fact that both distribution factors significantly influence ω_c^m (see Table 3), supports the choice of men's private expenditures as the conditioning good.
§ DISTRIBUTION OF FEMALE PERSONALITY FRACTIONS AND RICEBS
|
http://arxiv.org/abs/2307.00816v1
|
20230703075645
|
Index of the Kontsevich-Zorich monodromy of origamis in $\mathcal{H}(2)$
|
[
"Pascal Kattler"
] |
math.GT
|
[
"math.GT",
"math.DS"
] |
Index of the Kontsevich-Zorich monodromy of origamis in ℋ(2)
Pascal Kattler
August 1, 2023
The Kontsevich-Zorich monodromy of an origami is the image of the action of the Veech group on the non-tautological part of the homology. In this paper we make progress towards showing that for origamis in the stratum ℋ(2) the index of the Kontsevich-Zorich monodromy in SL_2(ℤ) is either 1 or 3.
§ INTRODUCTION
In this article we show most parts of a conjecture from <cit.>, namely that the index of the Kontsevich-Zorich monodromy of primitive origamis of degree d in the stratum ℋ(2) is either 1 or 3 in SL_2(ℤ). Hubert and Lelièvre showed in <cit.> that there are two SL_2(ℤ)-orbits 𝒜_d and ℬ_d if the degree d is odd, and one SL_2(ℤ)-orbit if the degree is even, distinguished by their HLK-invariant. Furthermore, each orbit has an L-origami L(n,m) as representative. This is an L-shaped origami as in Figure ZylinderDecom2, where opposite edges are glued, n is the number of squares in the horizontal direction and m in the vertical direction. In the even case a representative is L(n, m), where n is even and m is odd (or reversed). In the odd case representatives are L(n,m) with m and n even for 𝒜_d, and L(n,m) with m and n odd for ℬ_d. So we will show the following theorem.
Let 𝒪 be an origami of degree d and genus 2 and Γ⊆ SL_2(ℤ) the Kontsevich-Zorich monodromy of 𝒪.
* The index of Γ in SL_2(ℤ) is at most 3, if d is even.
* The index of Γ in SL_2(ℤ) is 1, if d is odd and 𝒪 lies in 𝒜_d.
We proceed as follows. We choose for each degree and for each orbit an L-origami 𝒪 as representative. Then we take two directions and their corresponding Dehn multitwists, which are elements of the Veech group of 𝒪, as in <cit.> (Proposition 2.4). Finally, we compute the actions of these Dehn multitwists on the non-tautological part of the homology and show that the indices of the groups generated by them are 1 or 3.
The following statements are still missing to complete the proof of the entire conjecture: in the even case we only showed that the index is at most 3, and in the odd case with orbit ℬ_d the following conjecture remains open.
In the setting of <Ref>, the index of Γ in SL_2(ℤ) is 3, if d is odd and 𝒪 lies in ℬ_d.
§.§ Acknowledgments
I am grateful for the support provided by my supervisor Gabriela Weitze-Schmithüsen throughout working on this paper. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project-ID 286237555, TRR 195.
§ COMPUTATIONS OF THE INDICES
§.§ The odd case and the orbit 𝒜_d
We first show that the index of the Kontsevich-Zorich monodromy of the L-origami L(2, 2n) for n ∈ℕ (see picture below) is 1.
In fact we show that the Kontsevich-Zorich monodromy is generated by the action of the Dehn multitwists of the cylinders in directions (0,1) and (n, n+1) on the homology. We show that this action is given by the matrices [ 1 -1; 0 1 ] and [ 2 -1; 1 0 ] with respect to a given basis of the homology. These matrices generate SL_2(ℤ). The cylinder decomposition in the direction (n, n+1) is as shown in Figure ZylinderDecom2 and Figure ZylinderDecom3.
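As an elementary cross-check of the last claim, note that the second matrix equals T^2 S for the standard generators S and T of SL_2(ℤ), while the first matrix is T^{-1}. The short script below (a sketch, not part of the proof) verifies these identities numerically, so the two matrices indeed generate SL_2(ℤ).

```python
import numpy as np

A = np.array([[1, -1], [0, 1]])   # action of the vertical Dehn multitwist
B = np.array([[2, -1], [1, 0]])   # action of the Dehn multitwist in direction (n, n+1)

S = np.array([[0, -1], [1, 0]])   # standard generators of SL_2(Z)
T = np.array([[1,  1], [0, 1]])

# A is T^{-1} and A^2 B = S, so the group generated by A and B contains S and T
# and is therefore all of SL_2(Z).
assert np.array_equal(A @ T, np.eye(2, dtype=int))   # A = T^{-1}
assert np.array_equal(A @ A @ B, S)                  # S = A^2 B
print("<A, B> contains S and T, hence <A, B> = SL_2(Z)")
```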
We always get three saddle connections in the direction (n, n+1). The green saddle connection is the one starting in the bottom left corner of the rightmost square. The two red saddle connections r_1 and r_2 are the other ones. See <Ref> for the proofs of the statements presented in the following.
We call
(i) Θ_g the green cylinder. That is the cylinder with the green saddle connection as the upper boundary.
(ii) Θ_r the red cylinder. That is the cylinder with the red saddle connections as the upper boundary.
(iii) Y_1 is the small vertical cylinder.
(iv) Y_2 is the large vertical cylinder.
Note that we also use the names of the cylinders as names for their mid curves, which we consider as elements of the homology.
Now we remind of the definition of the combinatorial length (or width).
The combinatorial length of a cylinder of an origami π: 𝒪→ E with mid curve γ is the multiplicity of the curve π∘γ, i.e. #{t ∈ (0,1] : π(γ(t)) = π(γ(0))}.
Note that the definition of the combinatorial length is equivalent to the definition of the combinatorial width from <cit.>.
The former cylinders have the combinatorial lengths f_Θ_r = 2n-1, f_Θ_g = 2, f_Y_2 = 2n, f_Y_1 = 1, and the combinatorial heights (these are the c_i defined below) of these cylinders are 1, since the cylinders Θ_g and Θ_r have the same height, and the same holds for Y_1 and Y_2.
The cylinder decomposition in direction (n, n+1) defines (up to inverse) a unique affine map, which is a minimal Dehn multitwist along the core curves of these cylinders. Let D_Θ be the Dehn multitwist in direction (n, n+1) and analogously let D_Y be the Dehn multitwist in direction (0, 1).
We take as basis of the homology the horizontal and vertical mid curves X_1, X_2, Y_1, Y_2 as in Figure homology and as basis of the non-tautological part
X = X_2 - 2 X_1
Y = Y_2 - 2nY_1
Let D be the Dehn multitwist in a two-cylinder direction with mid curves γ_1 and γ_2.
We compute the action D_∗ of the Dehn multitwist D on the non-tautological part of the homology with the formula
D_∗ = id + c_1 f_2 Ω(·, γ_1 )γ_1 + c_2 f_1 Ω(·, γ_2 )γ_2
from <cit.> (Chapter 2.4). The f_i are the combinatorial lengths of the γ_i and the c_i are the smallest positive integers such that c_1 h(γ_2) = c_2 h(γ_1), where h(γ_i) is the height of γ_i.
In Figure intersectionNumberEven we summarize the needed intersection numbers. Note that the sign of the intersection number Ω(s,t) of curves s and t in directions v and w is the sign of the determinant of the matrix [ v w ].
We compute the not obvious intersection numbers in <Ref>.
Now we compute the mid curves Θ_r and Θ_g as linear combinations of our chosen basis (X_1, X_2, Y_1, Y_2) of the homology:
Θ_r = a X_1 + bX_2 + cY_1 + dY_2
Let
A =[ 0 0 0 1; 0 0 1 1; 0 -1 0 0; -1 -1 0 0; ]
be the fundamental matrix of the intersection form with respect to the basis of the homology. By bilinearity, x = (a,b,c,d) is the unique solution of the equation Ax = (Ω(Θ_r, X_1), Ω(Θ_r, X_2), Ω(Θ_r, Y_1), Ω(Θ_r, Y_2))^t. The same holds for Θ_g, hence we get
Θ_r =(n-1)X_2 +((2n-3)n +2)X_1 + nY_2 +(n-1)Y_1 and
Θ_g = X_2 + 2(n-1)X_1 +Y_2 +2Y_1.
It follows
D_Θ(X) = X + c_Θ_gf_Θ_rΩ(X, Θ_g)Θ_g
+ c_Θ_rf_Θ_gΩ(X, Θ_r)Θ_r
= X + (2n-1)Θ_g - 2Θ_r
= X + (2n-1)X_2 + 2(2n-1)(n-1)X_1 + (2n-1)Y_2 + (2n-1)2Y_1
-2(n-1)X_2 -2((2n-3)n +2)X_1 - 2nY_2 - 2(n-1)Y_1
= X + X_2 - 2X_1 + 2nY_1 -Y_2 = 2X -Y
D_Θ(Y) = Y + c_Θ_gf_Θ_rΩ(Y, Θ_g)Θ_g
+ c_Θ_rf_Θ_gΩ(Y, Θ_r)Θ_r
= Y + (2n-1)Θ_g - 2Θ_r
= Y + (2n-1)X_2 + 2(2n-1)(n-1)X_1 + (2n-1)Y_2 + (2n-1)2Y_1
-2(n-1)X_2 -2((2n-3)n +2)X_1 - 2nY_2 - 2(n-1)Y_1
= Y + X_2 - 2X_1 + 2nY_1 - Y_2 = X
and
D_Y( X ) = X + c_Y_2f_Y_1Ω(X, Y_2)Y_2
+ c_Y_1f_Y_2Ω(X, Y_1)Y_1
= X - Y_2 + 2nY_1 = X-Y
D_Y(Y) = Y
So we have
D_Θ = [ 2 1; -1 0 ] and D_Y = [ 1 0; -1 1 ].
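The bookkeeping above can be double-checked symbolically. The sketch below encodes the coordinates of Θ_r, Θ_g, X and Y with respect to (X_1, X_2, Y_1, Y_2) and verifies that X is mapped to 2X - Y and Y to X; the intersection numbers Ω(X, Θ_g) = Ω(Y, Θ_g) = 1 and Ω(X, Θ_r) = Ω(Y, Θ_r) = -1 are read off from the computation in the text, since the table itself is not reproduced here.

```python
import sympy as sp

n = sp.symbols('n', positive=True, integer=True)

# Coordinate vectors with respect to the homology basis (X_1, X_2, Y_1, Y_2).
X1, X2, Y1, Y2 = (sp.Matrix([1, 0, 0, 0]), sp.Matrix([0, 1, 0, 0]),
                  sp.Matrix([0, 0, 1, 0]), sp.Matrix([0, 0, 0, 1]))

X = X2 - 2*X1                         # basis of the non-tautological part
Y = Y2 - 2*n*Y1

Theta_r = ((2*n - 3)*n + 2)*X1 + (n - 1)*X2 + (n - 1)*Y1 + n*Y2
Theta_g = 2*(n - 1)*X1 + X2 + 2*Y1 + Y2

# D_Theta(v) = v + c_g f_r Omega(v, Theta_g) Theta_g + c_r f_g Omega(v, Theta_r) Theta_r,
# with c_g = c_r = 1, f_r = 2n-1, f_g = 2 and the intersection numbers stated above.
DX = X + (2*n - 1)*Theta_g - 2*Theta_r
DY = Y + (2*n - 1)*Theta_g - 2*Theta_r

assert all(sp.expand(e) == 0 for e in (DX - (2*X - Y)))
assert all(sp.expand(e) == 0 for e in (DY - X))
print("D_Theta acts as X -> 2X - Y, Y -> X, i.e. as the matrix [ 2 1; -1 0 ]")
```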
§.§ The even case
In this chapter, we treat the origamis in ℋ(2) of even degree. In this case we choose as representatives L(2, 2n+1), n ∈ℕ. We show that the index of the Kontsevich-Zorich monodromy in SL_2(ℤ) is at most three. In fact we show that the group generated by the action of the Dehn multitwists in directions (2n+2, 2n+1) and (2n+1, 2n+3) is the index 3 subgroup generated by the matrices [ 3 2; -2 -1 ] and [ 1 0; -1 1 ].
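A quick sanity check on the index claim: reducing the two matrices modulo 2 and generating the corresponding subgroup of SL_2(F_2) (which has order 6) shows that the image has order 2, so the group generated by the two matrices has index at least 3 in SL_2(ℤ). This gives only a lower bound and does not replace the argument for the upper bound; the brute-force sketch below is purely illustrative.

```python
import itertools
import numpy as np

def subgroup_generated(gens_mod2):
    """Closure under multiplication mod 2, starting from the identity (finite group)."""
    elems = {((1, 0), (0, 1))}
    frontier = list(elems)
    while frontier:
        new = []
        for e in frontier:
            for g in gens_mod2:
                p = tuple(map(tuple, (np.array(e) @ g) % 2))
                if p not in elems:
                    elems.add(p)
                    new.append(p)
        frontier = new
    return elems

# All 2x2 matrices over F_2 with determinant 1.
sl2_f2 = [m for m in itertools.product(range(2), repeat=4)
          if (m[0]*m[3] - m[1]*m[2]) % 2 == 1]

D_psi   = np.array([[3, 2], [-2, -1]]) % 2     # reduces to the identity
D_theta = np.array([[1, 0], [-1, 1]]) % 2      # reduces to [[1, 0], [1, 1]]

image = subgroup_generated([D_psi, D_theta])
print("|SL_2(F_2)| =", len(sl2_f2))                  # 6
print("|image of <D_psi, D_theta>| =", len(image))   # 2, so the index in SL_2(Z) is >= 3
```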
We get three saddle connections in direction (2n+1, 2n+3) (see Figure ZylinderdecomPsi1 and Figure ZylinderdecomPsi2). The green saddle connection g is the one starting in the bottom left corner of the rightmost square. The two red saddle connections r_1 and r_2 are the other ones.
And we get three saddle connections in direction (2n+2, 2n+1) (see Figure ZylinderdecomTheta1 and Figure ZylinderdecomTheta2). The blue saddle connection b is the one starting in the bottom left corner of the second square from the bottom. The two magenta saddle connections m_1 and m_2 are the other ones.
We call
(i) Ψ_g the green cylinder. That is the cylinder in direction (2n+1, 2n+3 ) with the green saddle connection as the upper boundary.
(ii) Ψ_r the red cylinder. That is the cylinder in direction (2n+1, 2n+3 ) with the red saddle connections as the upper boundary.
(iii) Θ_m the magenta cylinder. That is the cylinder in direction (2n+2, 2n+1) with the magenta saddle connections as the upper boundary.
(iv) Θ_b the blue cylinder. That is the cylinder in direction (2n+2, 2n+1) with the blue saddle connection as the upper boundary.
These cylinders have the combinatorial lengths f_Ψ_r = 2n, f_Ψ_g = 1, f_Θ_m = 2n+1, f_Θ_b = 1. In Table intersectionNumberOdd we collect all necessary intersection numbers. Note that c_Ψ_r = c_Θ_m = c_Θ_b = 1 and c_Ψ_g = 2.
It holds
Θ_m = (2n+1)X_2 + (2n+1)2nX_1 + 2nY_2 + (2n+1)Y_1
Θ_b = X_2 + 2nX_1 + Y_2
Ψ_r = (2n-1)X_2 + (2(n-1)(2n+1) + 4) X_1 + (2n+1)Y_2 + (2n-1)Y_1
Ψ_g = X_2 + (2n-1)X_1 + Y_2 + 2Y_1
and
D_Ψ( X ) = X + c_Ψ_rf_Ψ_gΩ(X, Ψ_r)Ψ_r
+ c_Ψ_gf_Ψ_rΩ(X, Ψ_g)Ψ_g
= X -2 Ψ_r + 2·2nΨ_g
= X + 4nX_2 + 4n(2n-1)X_1 + 4nY_2 + 8nY_1
-2(2n-1)X_2 -2(2(n-1)(2n+1)+4)X_1 - 2(2n+1)Y_2- 2(2n-1)Y_1
= X + 2X_2 -4X_1 -2Y_2 + (4n+2)Y_1 = 3X -2Y
D_Ψ( Y ) = Y + c_Ψ_rf_Ψ_gΩ(Y, Ψ_r)Ψ_r
+ c_Ψ_gf_Ψ_rΩ(Y, Ψ_g)Ψ_g
= Y - 2Ψ_r + 4nΨ_g
= 2X- Y
and
D_Θ(X) = X + c_Θ_bf_Θ_mΩ(X, Θ_b)Θ_b
+ c_Θ_mf_Θ_bΩ(X, Θ_m)Θ_m
= X - (2n+1)Θ_b + Θ_m
= X - (2n+1)X_2 - (2n+1)2nX_1 - (2n+1)Y_2
+ (2n+1)X_2 + (2n+1)2nX_1 + 2nY_2 + (2n+1)Y_1
= X - Y_2 + (2n+1)Y_1 = X-Y
D_Θ(Y) = Y + c_Θ_bf_Θ_mΩ(Y, Θ_b)Θ_b
+ c_Θ_mf_Θ_bΩ(Y, Θ_m)Θ_m = Y
So we have
D_Ψ = [ 3 2; -2 -1 ] and D_Θ = [ 1 0; -1 1 ].
§ INTERSECTION NUMBER AND COMBINATORIAL LENGTH
Let 𝒪 = L-Origami(2, 2n) = O((1… 2n), (1 (2n+1))) be an origami and π: 𝒪→ E the corresponding covering of the standard torus E = ℂ / ℤ^2 (see e.g. Figure ZylinderDecom2). Let σ be the cycle (2… 2n 1 (2n+1)). In order to compute the intersection numbers, we introduce some special lattice points. This was inspired by <cit.>.
* An (n, n+1)-lattice point is a point x ∈𝒪 such that π(x) = (a/(n+1), b/n) with a, b ∈ℤ.
* A horizontal (n, n+1)-lattice point is a point x ∈𝒪 such that π(x) = (a/(n+1), 0) with a ∈ℤ.
* A vertical (n, n+1)-lattice point is a point x ∈𝒪 such that π(x) = (0, a/n) with a ∈ℤ.
We notice that the geodesic line in direction (n, n+1) through an (n, n+1)-lattice point meets an (n, n+1)-lattice point again whenever it meets the horizontal edge of a square.
Let x be a horizontal (n, n+1)-lattice point at the lower edge of square i, which is not a singularity, and let γ be the geodesic line through x in direction (n, n+1).
(a) Let y be the point where γ meets the lower edge of a square the next time, and assume y is not a singularity. Then y lies at the lower edge of square σ(i).
(b) If π(x) = (a/(n+1), 0), then γ meets a point y with π(y) = (a/(n+1), 0), and y lies at the upper edge of square σ^(n+1)(i), if it meets no singularity before.
(a) Let 2≤ i ≤ 2n. Then γ leaves square i when it reaches the upper edge. This is the lower edge of square σ(i). If i=1, then the first coordinate of π(x) is > 1/(n+1), since x and y are no singularities. Then γ leaves square 1 over the right edge and reaches the lower edge of square 2n+1 = σ(1). The case i = 2n+1 is treated similarly.
(b) If π(x) = (a/(n+1), 0), then π(γ(t)) = ((a-1)/(n+1) mod 1, 0) when γ meets the edge of a square the next time. Then we can apply part (a) n+1 times.
Each geodesic line γ through an (n, n+1)-lattice point x in direction (n, n+1) defines a saddle connection.
Let x = γ(0) be no singularity. We can assume that x is a horizontal (n, n+1)-lattice point at the lower edge of some square i, because γ meets one the next time it crosses an edge of a square. If π(x) = (a/(n+1), 0), then π(γ(t)) = ((a-1)/(n+1) mod 1, 0) when γ meets the edge of a square the next time. So we can assume that x lies over (0,0).
The numbers n+1 and 2n+1 = n + (n+1) are coprime. (A common divisor of n+1 and 2n+1 would be a common divisor of n+1 and n = (2n+1) - (n+1).) So we get integers a, b ∈ℤ with a(n+1) + b(2n+1) = 1. So by <Ref> either γ meets the lower left point of square σ^{a(n+1)}(i) = σ(i) or it meets a singularity before. So γ meets a singularity at the latest when the lower left point of square σ^k(i) is a singularity.
Since 𝒪 has one singularity of degree 3, there are 3 saddle connections in direction (n, n+1).
* r_1 starts in the lower left vertex of square 1 and ends in the upper right vertex of 2n + 1.
* g starts in the lower left vertex of square 2n+1 and ends in the upper right vertex of 1.
* Hence the last saddle connection r_2 starts at the lower left vertex of square 2 and ends at the upper right vertex of square 2n.
We will now show that the red saddle connections r_1 and r_2 form the upper boundary of a cylinder, namely Θ_r, and the green saddle connection g is the upper boundary of a cylinder, namely Θ_g. For this we use separatrix diagrams. There is a nice introduction to separatrix diagrams in <cit.>. We will use just the ribbon graph structure of the separatrix diagram (without the pairings of the boundary components). The separatrix diagram of the origami L(2, 2n) with saddle connections in direction (n, n+1) is shown in Figure separatrix. The cyclic order of the vertex is as drawn.
We can find the cylinder with r_1 as a part of the upper boundary as follows: We follow the edge r_1 until we reach a vertex again. Then we follow the next edge in the reversed cyclic order of the vertex. This is the edge r_2. Then we follow the next edge in the reversed cyclic order, which is again r_1, with which we started. So we have found the upper boundary of a cylinder.
With the same procedure we see that g is the upper boundary of a cylinder.
We now explain why this procedure gives the boundaries of the cylinders. We follow a saddle connection until we reach a singularity (which corresponds to a vertex in the separatrix diagram). If we want to know the boundary of the cylinder adjacent to our starting saddle connection, we follow a small path from the saddle connection anticlockwise around the singularity until we reach a saddle connection again. In the separatrix diagram this is the next edge in the reversed cyclic order. We have found the entire upper boundary of the cylinder when we reach the starting saddle connection again.
Let us count the intersection numbers of the saddle connections with our chosen basis of the homology, as stated in Figure intersectionNumberEven. We can represent an element of our basis of the homology by a horizontal or vertical curve c through (n, n+1)-lattice points. Then the saddle connection meets c exactly at the (n, n+1)-lattice points.
We will first compute the intersection numbers with g and conclude the intersection numbers with Θ_r from this.
Let us compute the intersection number Ω(Y_2, Θ_g). We represent Y_2 by the left border of the origami. The saddle connection g runs through square 2n+1 to square 1 at the point that lies over (0, 1/n) on the right edge of square 1 and then up through the squares 2,…, n, while it meets the left border of each of these squares once. For square n this is the left upper vertex, which is equal to the right upper vertex. Next Θ_g runs through square n+1 without hitting its vertical edge. Finally Θ_g runs through squares n+2, …, 2n, hitting the left edge of each of them once, until it reaches square 1, where it runs into a singularity. During this Θ_g meets the left border n-1 times. In total we have Ω(Y_2, Θ_g) = -(n + (n-1)) = -(2n-1).
The other intersection numbers with Θ_g can be computed similarly.
By <Ref>, each (n, n+1)-lattice point lies on a saddle connection. So any lattice point, which does not lie on the green saddle connection, meets a red one. So we have
Ω(X_1, r) = (n + 1) - 1 = n,
because X_1 contains n+1 lattice points, of which 1 lies on the green saddle connection.
Analogously
Ω(X_2, r) = 2(n + 1) - 3 = 2n -1
Ω(Y_1, r) = n - 1
Ω(Y_2, r) = 2nn - (2n-1) = 2n(n-1)+1
Finally, we determine the combinatorial length of both cylinders.
(a) The combinatorial length of g is 2.
(b) The combinatorial length of r is 2n-1.
We note that the combinatorial length of a curve γ: [0,1]→ X of a covering π: X → Y is the multiplicity of the curve π(γ). Since the geodesic line is determined by the direction and a point of the geodesic line, the multiplicity of the curve γ is the number #{t ∈ (0,1] : π(γ(t)) = π(γ(0))}.
(a) The green saddle connection meets a point x with π(x) = (0,0) at the upper right vertex of square n and at the upper right vertex of the square 1. Hence the combinatorial length is 2.
(b) By Lemma latpoints the red saddle connection meets the upper right vertex of each square which does not meet the green saddle connection. Hence the combinatorial length is 2n + 1 - 2 = 2n - 1.
|
http://arxiv.org/abs/2307.03117v1
|
20230706164108
|
Patterning of nonlocal transport models in biology: the impact of spatial dimension
|
[
"Thomas Jun Jewell",
"Andrew L. Krause",
"Philip K. Maini",
"Eamonn A. Gaffney"
] |
nlin.PS
|
[
"nlin.PS",
"q-bio.CB",
"q-bio.TO"
] |
|
http://arxiv.org/abs/2307.02337v1
|
20230705144824
|
FAM: Relative Flatness Aware Minimization
|
[
"Linara Adilova",
"Amr Abourayya",
"Jianning Li",
"Amin Dada",
"Henning Petzka",
"Jan Egger",
"Jens Kleesiek",
"Michael Kamp"
] |
cs.LG
|
[
"cs.LG"
] |
Linara Adilova (Ruhr-University Bochum), Amr Abourayya (IKIM), Jianning Li (IKIM), Amin Dada (IKIM), Henning Petzka (Lund University), Jan Egger (IKIM, Graz University), Jens Kleesiek (IKIM), Michael Kamp (IKIM, Ruhr-University Bochum, Monash University)
Affiliations: Institute for AI in medicine (IKIM) at University Hospital Essen, Essen, Germany; Ruhr-University Bochum, Bochum, Germany; Monash University, Melbourne, Australia; Graz University, Graz, Austria; Lund University, Lund, Sweden
Correspondence: Linara Adilova <[email protected]>
Keywords: flatness, relative flatness, regularization, optimization, generalization, deep learning
Flatness of the loss curve around a model at hand has been shown to empirically correlate
with its generalization ability.
Optimizing for flatness has been proposed as early as 1994 by Hochreiter and Schmidhuber, and was followed by more recent successful sharpness-aware optimization techniques.
Their widespread adoption in practice, though, is hampered by
the lack of a theoretically grounded connection between flatness and generalization, in particular in light of the reparameterization curse: certain reparameterizations of a neural network change most flatness measures but do not change generalization.
Recent theoretical work suggests that a particular relative flatness measure can be connected to generalization and solves the reparameterization curse.
In this paper, we derive a regularizer based on this relative flatness that is easy to compute, fast, efficient, and works with arbitrary loss functions.
It requires computing the Hessian only of a single layer of the network, which makes it applicable to large neural networks, and with it avoids an expensive mapping of the loss surface in the vicinity of the model.
In an extensive empirical evaluation we show that this relative flatness aware minimization (FAM) improves generalization in a multitude of applications and models, both in finetuning and standard training.
We make the code available at https://github.com/kampmichael/RelativeFlatnessAndGeneralization/tree/main/RelativeFlatnessRegularizer(FAM)github.
§ INTRODUCTION
It has been repeatedly observed that the generalization performance of a model at hand correlates with flatness of the loss curve, i.e., how much the loss changes under perturbations of the model parameters <cit.>. The large-scale study by <cit.> finds that such flatness-based measures have a higher correlation with generalization than alternatives like weight norms, margin-, and optimization-based measures. The general conclusion is that flatness-based measures show the most consistent correlation with generalization.
Naturally, optimizing for flatness promises to obtain better generalizing models. <cit.> proposed a theoretically solid approach to search for large flat regions by maximizing a box around the model in which the loss is low. More recently, it was shown that optimizing a flatness-based objective together with an L2-regularization performs remarkably well in practice on a variety of datasets and models <cit.>.
The theoretical connection to generalization has been questionable, though, in particular in light of negative results on reparametrizations of ReLU neural networks <cit.>: these reparameterizations change traditional measures of flatness, yet leave the model function and its generalization unchanged, making these measures unreliable.
Recent work <cit.> has shown that generalization can be rigorously connected to flatness of the loss curve, resulting in a relative flatness measure that solves the reparameterization issue. That is, the generalization gap of a model f: 𝒳→𝒴 depends on properties of the training set and a measure
κ(w^l) := ∑_s,s'=1^d ⟨ w^l_s, w^l_s'⟩· Tr(H_s,s'(w^l)) ,
where w^l∈ℝ^d× m are the weights between a selected layer l with m neurons and layer l+1 with d neurons. Further, ⟨ w^l_s, w^l_s'⟩= w^l_s(w_s'^l)^T is the scalar product of two row vectors (composed of the weights into neurons with index s and s' in layer l+1), Tr denotes the trace, and H_s,s' is the Hessian matrix containing all partial second derivatives with respect to weights in rows w^l_s and w^l_s':
H_s,s'(w,f(S))= [∂^2 ℰ_emp(f,S)/∂ w_s,t∂ w_s',t' ]_1≤ t,t'≤ m .
Here, ℰ_emp is the empirical risk
ℰ_emp(f,S)=1/n∑_i=1^n ℓ(f(x_i),y_i)
on a dataset
S={(x_1,y_1),…,(x_n,y_n)}⊂𝒳×𝒴 .
It is demonstrated that, measured on the penultimate layer, this measure highly correlates with generalization.
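To make the measure concrete, the following self-contained sketch computes κ for the last layer of a toy network with pytorch; the architecture, data, and sizes are our own illustrative assumptions and are not taken from <cit.>.

# Illustrative computation of the relative flatness kappa(w^l) for a toy last layer.
import torch

torch.manual_seed(0)
n, in_dim, m, d = 64, 10, 16, 3                             # samples, input dim, m, d
X, y = torch.randn(n, in_dim), torch.randint(0, d, (n,))
hidden = torch.nn.Sequential(torch.nn.Linear(in_dim, m), torch.nn.ReLU())
W = torch.randn(d, m)                                       # weights w^l between layers l and l+1

def empirical_risk(weights):
    return torch.nn.functional.cross_entropy(hidden(X) @ weights.t(), y)

H = torch.autograd.functional.hessian(empirical_risk, W)    # full Hessian, shape (d, m, d, m)

kappa = 0.0
for s in range(d):
    for sp in range(d):
        trace_block = H[s, :, sp, :].diagonal().sum()        # Tr(H_{s,s'})
        kappa = kappa + torch.dot(W[s], W[sp]) * trace_block # <w^l_s, w^l_s'> * Tr(H_{s,s'})
print(float(kappa))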
Sharpness-aware minimization (SAM) <cit.> also optimizes for a measure of flatness, but is not reparameterization invariant—even under L2-regularization its invariance is unclear, in particular wrt. neuronwise reparameterizations.
The reparameterization-invariant extension of SAM, ASAM <cit.>, is not theoretically connected to generalization.
In this paper, we implement the relative flatness measure of <cit.> as a regularizer for arbitrary loss functions and derive its gradient for optimization. A remarkable feature of the relative flatness measure is that it is only applied to a single layer of a neural network, in comparison to classical flatness (and sharpness) which takes into account the entire network. <cit.> have shown that relative flatness in this layer corresponds to robustness to noise on the representation produced by this layer. Therefore, FAM nudges the entire network to produce a robust representation in the chosen layer.
At the same time, it does not require flatness wrt. the other weights, opening up the design space for good minima. Since it suffices to compute relative flatness wrt. a single layer, this regularizer and its gradient can be computed much more efficiently than any full-Hessian based flatness measure. Moreover, since the gradient can be computed directly, no double backpropagation is required.
In an extensive empirical evaluation we show that the resulting relative flatness aware minimization (FAM) improves the generalization performance of neural networks in a wide range of applications and network architectures: We improve test accuracy on image classification tasks (CIFAR10, CIFAR100, SVHN, and FashionMNIST) on ResNet18 (outperforming reported best results for this architecture), WideResNet28-10, and EffNet-B7 and compare it to the SAM regularizer. In a second group of experiments we reduce the DICE loss substantially on a medical shape reconstruction task using autoencoders and stabilize language model finetuning.
Our contributions are
(i) a novel regularizer (FAM) based on relative flatness that is easy to implement, flexible, and compatible with any thrice-differentiable loss function, and
(ii) an extensive empirical evaluation where we show that FAM regularization improves the generalization performance of a wide range of neural networks in several applications.
§ RELATED WORK
§.§.§ Flatness as generalization measure
Flatness of the loss surface around the weight parameters is intimately connected to the amount of information needed to describe the model with these parameters, i.e., if the region is flat enough that the loss does not change, the parameters can be described with less precision while still yielding a well-performing model.
Correspondingly, the models in the flat region generalize better: <cit.> investigated a regularization that leads to a flatter region in the aforementioned sense.
Their results have shown that indeed such optimization leads to better performing models.
Following up, flatness of a minimizer was used to explain generalization abilities of differently trained neural networks <cit.>, where it was specifically emphasized that calculation of a Hessian for modern models is prohibitively costly.
Originating from the minimum description length criteria for finding better generalizing learning models, flatness became a pronounced concept in the search for generalization criteria of large neural networks.
The PAC-Bayes generalization bound rediscovers the connection between the Hessian as a flatness characteristic and the generalization gap, and the large-scale empirical evaluation of <cit.> shows that all the generalization measures based on flatness (in some definition) highly correlate with the actual performance of models.
<cit.> considered an analytical approach to connect flatness with generalization, resulting in the measure of relative flatness that is used for regularization in this work.
Their approach splits the generalization gap into two parts termed feature robustness and representativeness.
While representativeness measures how representative the training data is for its underlying distribution, feature robustness measures the loss at small perturbations in feature layers.
Under the assumption of representative data, so that the training data can indeed be used to learn something about the true underlying distribution, feature robustness governs the generalization gap.
Further, at a local minimum, feature robustness is exactly described by relative flatness if target labels are locally constant (i.e., labels do not change under small perturbations of features).
This condition of locally constant labels is identified as a necessary condition for connecting flatness to generalization.
Finally, the paper demonstrates how the measure of relative flatness solves the reparameterization-curse discussed in <cit.>, rendering itself as a good candidate for an impactful regularizer.
§.§.§ Regularizing optimization
Regularization (implicit or explicit) is de facto considered to be an answer to the good generalization abilities of an overparametrized model.
New elaborate regularization techniques make it possible to beat state-of-the-art results in various areas.
Obviously, flatness can be considered as a good candidate for a structural regularization, but since the size of the modern models grew significantly after the work of <cit.>, straightforward usage of the initial flatness measures is not feasible in the optimization.
Analogously, approaches to flatness stimulation from averaging over solutions <cit.> cannot be backpropagated and directly used in the optimization process.
The closest research to the flatness optimization is related to adversarial robustness—adversarial training aims at keeping the loss of a model on a constant (low) level in the surrounding of the training samples, which can be also done in the feature space <cit.>.
Several recent works proposed an optimizer for neural networks that is approximating the minimax problem of minimizing loss in the direction of the largest loss in the surrounding of the model.
One of them, sharpness aware minimization (SAM) <cit.> achieves state-of-the-art results in multiple tasks, e.g., SVHN, and allows for simple backpropagation through the proposed loss.
However, the exact proposed m-sharpness does not entirely correspond to the theoretical motivation proposed by <cit.> based on PAC-Bayes generalization bound, which might mean that the empirical success of SAM and its variants <cit.> cannot be explained by theoretical PAC-Bayes flatness of the solution <cit.>.
Thus, introducing a theoretically grounded flatness regularizer can be of interest for the community.
§ FLATNESS AWARE MINIMIZATION
In the following we give a detailed description of the proposed regularization.
For a differentiable loss function ℰ(S, w) and a training set S, the regularized objective is
ℰ(S, w) + λκ(w^l) ,
where λ is the regularization coefficient and w^l∈ℝ^d× m denote the weights from layer l to l+1. To optimize this objective, we compute its gradient (and omit the training set S in the notation for compactness):
∇_w(ℰ(w) + λκ(w^l)) = ∇_w ℰ(w) + λ∇_w κ(w^l)
Here, ∇_w ℰ(w) is the standard gradient of the loss function. It remains to determine ∇_w κ(w^l).
For a neural network with L layers and weights w=(w^1,…,w^L) with w^k∈ℝ^O^k× P^k and a specific layer l∈ [L] with weights w^l∈ℝ^d× m it holds that
∇_w κ(w^l) = e^l [2∑_s=1^d w^l_s Tr(H_s,i)]_i∈ [d]
+ ([∑_s,s'=1^d ⟨ w^l_s, w^l_s'⟩∑_t=1^m ∂^3 ℰ(w)/∂ w^k_o,p∂ w^l_s,t∂ w^l_s',t]_p∈ [P^k], o ∈ [O^k])_k∈ [L]
where e^l denotes the l-th standard unit vector in ℝ^L.
For this proof, we simply apply product rule on the gradient of the regularizer, yielding two parts that we separately simplify.
∇_w κ(w^l) = ∇_w ∑_s,s'=1^d ⟨ w^l_s, w^l_s'⟩ Tr(H_s,s')
= ∑_s,s'=1^d (∇_w ⟨ w^l_s, w^l_s'⟩) Tr(H_s,s')
+ ∑_s,s'=1^d ⟨ w^l_s, w^l_s'⟩ ∇_w Tr(H_s,s')
= [∑_s,s'=1^d (∂/∂ w^k ⟨ w^l_s, w^l_s'⟩) Tr(H_s,s')]_1≤ k ≤ L (I)
+ [∑_s,s'=1^d ⟨ w^l_s, w^l_s'⟩ ∂/∂ w^k Tr(H_s,s')]_1≤ k ≤ L (II)
Let us simplify both parts, starting with (I), which is =0 for all k≠ l. For k=l it simplifies to
∑_s,s'=1^d (∂/∂ w^l ⟨ w^l_s, w^l_s'⟩) Tr(H_s,s')
= [∑_s,s'=1^d (∂/∂ w^l_i ⟨ w^l_s, w^l_s'⟩) Tr(H_s,s')]_1≤ i≤ d
Now for each i∈[d] we have that
∑_s,s'=1^d (∂/∂ w^l_i ⟨ w^l_s, w^l_s'⟩) Tr(H_s,s')
= 2∑_s=1^d w^l_s Tr(H_s,i) ,
where we have used the symmetry of H_s,s' and the commutativity of the inner product in the last step.
Therefore, it holds that
∑_s,s'=1^d (∂/∂ w^l ⟨ w^l_s, w^l_s'⟩) Tr(H_s,s')
= [2∑_s=1^d w^l_s Tr(H_s,i)]_1≤ i≤ d .
For the second part (II), let w^k∈ℝ^O× P. Then, ∂/∂ w^k Tr(H_s,s') can be expressed as
∂/∂ w^k Tr(H_s,s') = ∂/∂ w^k Tr[∂^2 ℰ(w)/∂ w^l_s,t∂ w^l_s',t']_1≤ t,t'≤ m
= ∂/∂ w^k ∑_t=1^m ∂^2 ℰ(w)/∂ w^l_s,t∂ w^l_s',t
= [∑_t=1^m ∂^3 ℰ(w)/∂ w^k_o,p∂ w^l_s,t∂ w^l_s',t]_1≤ p≤ P, 1≤ o ≤ O
Putting (I) and (II) together finally yields
∇_w κ(w^l) = e^l [2∑_s=1^d w^l_s Tr(H_s,i)]_1≤ i≤ d
+ [∑_s,s'=1^d ⟨ w^l_s, w^l_s'⟩ ∑_t=1^m ∂^3 ℰ(w)/∂ w^k_o,p∂ w^l_s,t∂ w^l_s',t]_1≤ k ≤ L, 1≤ p≤ P^k, 1≤ o ≤ O^k
where e^l denotes the l-th standard unit vector in ℝ^L.
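For prototyping, one can also differentiate through κ directly with autograd, at the cost of double backpropagation, which the closed-form gradient above avoids. A minimal sketch in which the toy architecture, λ=0.1, and the optimizer settings are our own illustrative assumptions:

# Naive FAM training step that backpropagates through kappa; for illustration only.
import torch

torch.manual_seed(0)
X, y = torch.randn(128, 10), torch.randint(0, 3, (128,))
hidden = torch.nn.Sequential(torch.nn.Linear(10, 8), torch.nn.ReLU())
W = torch.nn.Parameter(0.1 * torch.randn(3, 8))              # feature-layer weights w^l
opt = torch.optim.SGD(list(hidden.parameters()) + [W], lr=0.05)
lam = 0.1

def kappa(loss, W):
    g = torch.autograd.grad(loss, W, create_graph=True)[0]    # dE/dW, shape (d, m)
    k = 0.0
    for s in range(W.shape[0]):
        for sp in range(W.shape[0]):
            # Tr(H_{s,s'}) = sum_t d^2 E / (dW[s,t] dW[s',t])
            tr = sum(torch.autograd.grad(g[s, t], W, create_graph=True)[0][sp, t]
                     for t in range(W.shape[1]))
            k = k + torch.dot(W[s], W[sp]) * tr
    return k

for step in range(5):
    loss = torch.nn.functional.cross_entropy(hidden(X) @ W.t(), y)
    objective = loss + lam * kappa(loss, W)
    opt.zero_grad()
    objective.backward()
    opt.step()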
§.§ Computational Complexity
Computing the FAM regularizer requires computing the Hessian wrt. the weights w^l∈ℝ^d× m of the feature layer, which has computational complexity in d^2m^2. From this, the individual H_s,s' can be selected. The inner product computation has complexity dm, so that the overall complexity of computing the regularizer is in d^2m^2.
In order to train with the FAM regularizer, we have to compute the gradient of the regularized loss wrt. the weights of the network. Computing the gradient of the loss function in equation <ref> has complexity |w|, where |w| denotes the number of parameters in w. The computation of ∇_w κ(w^l) is decomposed into the sum of two parts in Lemma <ref>. The first part has complexity d^2m^2 for computing the Hessian and the inner product, as before. All parts in the sum, however, have already been computed when computing κ(w^l). The second part requires computing the derivative of the Hessians H_s,s' wrt. each parameter in w. Since we only need to compute the derivative wrt. the trace, i.e., the sum of diagonal elements, the complexity is in |w|. Therefore, the overall complexity of computing the FAM regularizer is in
|w| + d^2m^2 + |w| = |w| + d^2m^2 .
That is, the additional computational cost for using the FAM regularizer is in d^2m^2 per iteration, i.e., in the squared number of weights of the selected feature layer.
In practice, the computational time currently can exceed the vanilla and SAM training time by 20%-40%.
We also observe that GPU utilization for our implementation is still not optimal, which possibly leads to delays in computation.
§.§ A Simplified Relative Flatness Measure
A more computationally efficient approximation to relative flatness, proposed by <cit.>, does not iterate over individual neurons, but computes the weight norm of layer l and the trace of the Hessian wrt. layer l:
κ(w^l)=‖ w^l‖^2_2 Tr(H) .
Computing this measure not only avoids the loop over all pairs of neurons s,s'∈ [d], but also allows us to approximate the trace of the Hessian, e.g., with Hutchinson's method <cit.>.
On top of the computational efficiency, the trace approximation reduces the memory footprint, enabling us to employ FAM regularization to even larger layers—including large convolutional layers.
We provide details on the implementation of Hessian computation and Hessian trace approximation in Appendix <ref>.
§ EXPERIMENTS
In the following section we describe the empirical evaluation of the proposed flatness regularization.
We compare the performance of FAM to the baseline without flatness related optimization and to SAM.
We use the SAM implementation for pytorch [<https://github.com/davda54/sam>] with the parameters of the base optimizer recommended by the authors.
It should be mentioned here that despite its popularity there is no official pytorch implementation of the SAM optimizer, which results in a multitude of different implementations for each paper using the approach.
Moreover, there are multiple tricks that should be considered when using SAM, e.g., one should take care of normalization layers and check on which of the two optimization steps they are active or non-active.
We run SAM for the same number of epochs as FAM and standard optimization, even though in the original work the authors doubled the number of epochs for non-SAM approaches due to SAM's doubled run time, thus giving SAM an advantage in our experiments.
Reported result for one of the pytorch implementations of SAM on CIFAR10 with ResNet20 is 93.5% test accuracy [<https://github.com/moskomule/sam.pytorch>].
This is the closest reported result to our setup, and it should be expected that ResNet18 shows a worse result than ResNet20.
Unfortunately, the results for CIFAR100, SVHN, and FashionMNIST are not reported in the implementations of SAM for pytorch.
We use the FAM regularizer computed on the penultimate layer (or bottleneck layer), since it was demonstrated to be predictive of generalization in <cit.>. Investigating the impact of the regularizer on other layers is left for future work.
Note on other flat-minima optimizers:
There are several extensions of SAM <cit.> and other flat-minima optimizers, e.g., <cit.>.
We follow <cit.> and do not consider them in this work due to their computational cost and/or lack of performance gains compared to original SAM.
§.§ Image Classification
Standard datasets for image classification are the baseline experiments that confirm the effectiveness of the proposed regularization.
In particular, we worked with CIFAR10 and CIFAR100 <cit.>, SVHN <cit.>, and FashionMNIST <cit.>.
We compare our flatness regularized training to the state-of-the-art flatness regularizer SAM.
For this group of experiments we used the setups from the original SAM paper in order to compare to its performance.
Nevertheless, due to the different implementation, the exact numbers reported seem to be unachievable—while we still see the improvement from using SAM optimizer, both no regularization baseline and SAM baseline are lower than in the original paper.
For all experiments in this group we use the original neuronwise flatness measure for regularization without approximations introduced in Sec. <ref>.
§.§.§ CIFAR10
We have chosen ResNet18 as an architecture to solve CIFAR10.
While ResNet18 is not the state of the art for this problem, it allows us to confirm the hypothesis about the performance of our method.
The reported accuracy of this architecture on CIFAR10 is 95.55%.
In our experiments we compare this baseline, that is not using flatness-related optimizations to SAM approach and our proposed regularization.
A standard augmentation strategy is applied, including randomized cropping, horizontal flipping, and normalization of the images.
For baseline training we use the following parameters of optimization: SGD with batch size 64, weight decay of 5e-4, momentum 0.9, and cosine annealing learning rate starting at 0.03 during 250 epochs.
For FAM the optimizer parameters are kept same and λ selected to be 0.1.
Finally, SAM was run with SGD with a scheduled learning rate of 0.01 and momentum 0.9.
We report the results achieved in Table <ref> in the row corresponding to CIFAR10.
§.§.§ CIFAR100
For solving this dataset we follow the approach taken by <cit.>.
We use an EfficientNet <cit.> (EffNet-B7) that is pretrained on ImageNet and then finetune it for CIFAR100.
In order to obtain a good finetuning result the inputs should be at least in the ImageNet format (224 × 224) which significantly slows down the training.
For standard training and FAM regularized training, the Adam optimizer had consistently the highest performance (compared to SGD and rmsprop) with a batch size of B=32. The architecture achieves a baseline accuracy of 84.6 without regularization, and SAM achieves an accuracy of 85.8. The FAM regularizer improves the accuracy to 87.15.
Note that, different from other setups, where both SAM and FAM were used with the same hyperparameters as the vanilla training, here we optimized the parameters separately to avoid overfitting.
We report the results achieved in Table <ref> in the row corresponding to CIFAR100.
§.§.§ SVHN and FashionMNIST
Both SVHN and FashionMNIST problems are reported to reach state-of-the-art performance with SAM optimization using WideResNet28-10 architecture <cit.>.
It should be noted that SAM achieves the reported state-of-the-art result on these datasets when combined together with shake-shake regularization technique <cit.>, which we omitted.
The results reported by <cit.> for SVHN are obtained using the training dataset that includes extra data (overall ∼ 600000 images).
Due to the time constraints we report results of training using only main training dataset (∼ 70000 images).
We apply the AutoAugment SVHN policy <cit.>, random cropping and horizontal flip, and cutout <cit.> with 1 hole of length 16.
Our training parameters are 100 epochs, learning rate of 0.1 with a multistep decay by 0.2 after 0.3, 0.6 and 0.8 of the training epochs, batch size of 128, optimizer is Nesterov SGD with momentum of 0.9 and weight decay of 5e-4.
For FAM we use λ=0.1.
We modify FashionMNIST to have three channels, resize to 32 × 32, apply cutout with 1 hole of length 16, and normalize by 0.5.
The training of FashionMNIST is very unstable and has oscillating learning curves with and without regularization.
The used batch size is 64, learning rate is 0.01 with the same learning rate scheduler as for SVHN, the training is done for 200 epochs. Weight decay and momentum are set as in SVHN training.
Finally, in order to apply more computationally expensive neuronwise flatness regularization, we add one more penultimate fully-connected layer in the architecture of WideResNet with 64 neurons.
Our experiments reveal that this additional layer does not change the outcome of the training in case of non-flatness regularized run.
With the described setup we did not achieve the error rates reported in the original paper, which are 0.99 ± 0.01 for SAM on SVHN with auto-augmentation and 1.14 ± 0.04 for baseline training on SVHN with auto-augmentation; 3.61 ± 0.06 for SAM on FashionMNIST with cutout and 3.86 ± 0.14 for baseline training on FashionMNIST with cutout. We report the results achieved in Table <ref> in the rows corresponding to SVHN and FashionMNIST.
§.§ Medical Shape Reconstruction
3D shape reconstruction has important applications in both computer vision <cit.> and medical imaging <cit.>.
Machine learning methods for shape reconstruction have become increasingly popular in recent years; however, they often suffer from bad generalization, i.e., a neural network cannot generalize properly to shape variations that are not seen during training.
In this experiment, we demonstrate that FAM regularizer can effectively mitigate the generalization problem in a skull shape reconstruction task, where a neural network learns to reconstruct anatomically plausible skulls from defective ones <cit.>.
Here, due to the large size of the layers, we used the approximated layerwise flatness measure for FAM optimization.
We do not include a SAM baseline here, since the general question is whether flatness can aid generalization for the current task.
§.§.§ Dataset
The skull dataset used in this experiment contains 100 binary skull images for training and another 100 for evaluation.
The surface of a skull shape is constituted by the non-zero voxels (i.e., the `1's), and we create defective skulls by removing a portion of such voxels from each image.
For the evaluation set, two defects are created for each image - one is similar to the defects in the training set while the other is significantly different in terms of its shape and size, as well as its position on the skull surface.
The dimension of the skull images is 64^3.
§.§.§ Network Architecture and Experimental Setup
The neural network (∼ 1M trainable parameters) follows a standard auto-encoder architecture, in which five two-strided convolutional and deconvolutional layers are used for downsampling and upsampling respectively.
The output of the last convolutional layer is flattened and linearly mapped to an eight-dimensional latent code, which is then decoded by another linear layer before being passed on to the first deconvolution.
The network takes as input a defective skull and learns to reconstruct its defectless counterpart.
As a baseline we train the network using a Dice loss <cit.>, and a Dice loss combined with the FAM regularizer
, which is applied to the second linear layer (of size 64× 8) of the network.
We experimented with different coefficients λ that weigh the regularizer against the Dice loss. All experiments use the Adam optimizer with a constant learning rate of 10^-4.
The trained models are evaluated on the two aforementioned evaluation sets, using Dice similarity coefficient (DSC), Hausdorff distance (HD), and 95 percentile Hausdorff distance (HD95). DSC is the main metric in practice for skull shape reconstruction <cit.>, measuring how well two shapes overlap (the higher the better[The Dice loss (Figure <ref>), on the contrary, is usually implemented as 1 - DSC, which we minimize during training.]), while the distance measures i.e., HD and HD95 are supplementary.
§.§.§ Results and Discussion
Figure <ref> shows the Dice loss curves under different weighting coefficients λ.
Table <ref> shows the quantitative results on the two evaluation sets, and Figure <ref> shows the distribution of the evaluation results for λ=0.02, 0.002, 0.0006 and the baseline.
The DSC (100), HD (100) and HD95 (100) columns in Table <ref> show the evaluation results at an intermediate training checkpoint (epoch 100).
These results reveal several interesting findings: (i) At both the intermediate (epoch=100) and end checkpoint (epoch=200), the training loss of the baseline network is clearly lower than that of the regularized networks (Figure <ref>), whereas its test accuracy is obviously worse than its regularized counterparts in terms of all metrics (Table <ref>); (ii) The baseline network achieves higher test accuracy (DSC) at the intermediate checkpoint than at the end checkpoint, which is a clear indicator of overfitting, while the test accuracy of a properly regularized network (e.g., λ=0.02, 0.002) on either evaluation set 1 or evaluation set 2 keeps improving as training progresses; (iii) Even a very loose regularization (e.g., λ=0.0006) can prevent the Dice loss from decreasing until overfitting, as opposed to the baseline network (Figure <ref>); (iv) It is also worth mentioning that the scores on both evaluation sets stay essentially unchanged for the FAM-regularized network (e.g., λ=0.02), indicating that moderately altering the defects (e.g., defect shape, size, position) does not affect the network's performance, while in contrast, the baseline network performs worse on evaluation set 2 than on evaluation set 1 in terms of all metrics.
Choosing a proper λ is important for a desired reconstructive performance. A large λ enforces a flat(ter) curve of the loss with respect to the weights of the second linear layer, which is responsible for decoding the latent codes. However, over-regularization (in our case λ=0.1, 0.7) can lead to unvaried shape reconstructions by the decoder, since, in order for the loss to remain unchanged, the second linear layer has to give the same decoding for different latent codes [Different skull shapes are expected to be encoded differently through the downsampling path of the auto-encoder.]. Therefore, the quantitative results for λ=0.1, 0.7 in Table <ref> should be interpreted with care, i.e., the over-regularized networks `find' a universal reconstruction that somehow matches well with different evaluation cases (hence achieving relatively high DSC), which nevertheless defies the rule of case-specific reconstruction.
§.§ Transformers
Since the introduction of transformers <cit.>, large language models have revolutionized natural language processing by consistently pushing the state-of-the-art in various benchmark tasks <cit.>. However, a recurring challenge in the fine-tuning process of these models is the occurrence of instabilities <cit.>. These instabilities can negatively impact the performance and reliability of the fine-tuned models. In the following section we demonstrate how the application of FAM can improve the downstream performance of transformers. We do not compare with SAM, both for the reasons stated in the previous set of experiments and because it is not possible to integrate the additional gradient iteration into the BERT implementation we used.
We fine-tune BERT_BASE (110 million parameters) <cit.> to the Recognizing Textual Entailment (RTE) dataset <cit.> from the General Language Understanding Evaluation benchmark <cit.>. The dataset consists of sentence pairs with binary labels that indicate whether the meaning of one sentence is entailed from its counterpart. In the past, this dataset was found to be particularly prone to instabilities <cit.>.
In stark contrast to other experiments, we chose a much larger weighting coefficient λ=3e6, as lower values had no influence on the training. Our training setup involved a learning rate of 2e-5, a batch size of 32, and a maximum sequence length of 128 for 20 epochs. We report the average development set accuracy across five runs with different random seeds. Table <ref> presents the results of this experiment.
We observed a progressive increase in validation loss throughout the training when the regularizer was not employed, indicating severe overfitting. While this phenomenon persisted with FAM, its effect was less pronounced, as depicted in Figure <ref>.
§ DISCUSSION AND CONCLUSION
We have shown that regularization based on the theoretically sound relative flatness measure improves generalization in a wide range of applications and model architectures, outperforming standard training and sometimes SAM.
In our experiments (except for the skull reconstruction experiments, due to the specific architecture of the network), we have chosen the penultimate layer to compute relative flatness, as suggested by <cit.>. Their theory ensures that achieving flatness in any one layer suffices to reach good generalization.
We leave a comprehensive empirical study of the impact of the choice of layer (or even using multiple layers) on model quality for future work.
It can be also investigated whether flatness regularized on one of the layers also changes the flatness of other layers or not.
Relative flatness is connected to generalization under the assumption of locally constant labels in the representation.
This assumption holds already for the input space in many applications (e.g., image classification, and NLP)—the definition of adversarial examples hinges on this assumption.
It implies, however, that flatness is not connected to generalization for tasks where the assumption is violated.
The recent study by <cit.> supports this empirically by showing that regularizing wrt. flatness is not always beneficial.
For future work it would be interesting to verify this study with FAM, testing the assumption of locally constant labels, and expanding it to further tasks.
While the current implementation of the FAM regularizer achieves better performance, its memory consumption and computational time can still be improved.
This currently also limits the applicability to convolutional layers, since treating them like a standard layer would increase the number of parameters greatly.
This can be overcome by determining the correct structure of the FAM regularizer for convolutional layers and is an interesting direction for future work.
icml2023
§ HESSIAN COMPUTATION AND APPROXIMATION
In practice, the training time for FAM regularization depends on the method used for calculating the Hessian, respectively approximating its trace in case of the simplified relative flatness measure. In the following, we discuss several practical approaches in pytorch <cit.>.
§.§ Computation of the Full Hessian
Computing the Hessian, i.e., the second derivatives wrt. a neural network's weights, can straight-forwardly be done in pytorch using its autograd library. This method, however, is not optimized for runtime. The torch.autograd library also provides an experimental vectorized version of the Hessian computation. It uses a vectorization map as the backend to vectorize calls to autograd.grad, which means that it only invokes it once instead of once per row, making it more computationally efficient. We compare the non-vectorized to the vectorized variant of torch.autograd. Recently, the pytorch library functorch (in beta) provided a fast Hessian computation method build on top of the autograd library and also using a vectorization map. Additionally, it uses XLA, an optimized compiler for machine learning that accelerates linear algebra computations. This further accelerates Hessian computation, but does not yet work with all neural networks—in particular, the functorch Hessian computation requires batch normalization layers to not track the running statistics of training data. In Figure <ref> we show that using the vectorized approach substantially reduces computation time by up to three orders of magnitude. For larger Hessians, the functorch library further improves runtime over the vectorized autograd method by an order of magnitude.
All experiments are performed on an NVIDIA RTX A6000 GPU.
§.§ Computation of the Trace of the Hessian
When the layers are high-dimensional, forming the full Hessian can be memory and computationally expensive. Since FAM requires the calculation of the trace of a Hessian, we apply the trick of using the Hutchinson's method <cit.> to approximate the trace of the Hessian. The version of Hutchinson's trick we use is described as follows:
Let A ∈ℝ^D × D and let v ∈ℝ^D be a random vector such that 𝔼[v v^T]=I. Then,
Tr(A)=𝔼[v^T A v]≈1/V∑_i=1^V v_i^T A v_i ,
where v is generated using Rademacher distribution and V is the number of Monte Carlo samples. The intuition behind this method is that by averaging over many random vectors, we can obtain an estimate of the trace of the matrix. It has been proved that the trace estimator converges with the smallest variance to the trace if we use Rademacher random numbers <cit.>. This method is in general very useful when we need to compute the trace of a function of a matrix.
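A minimal numeric sketch of this estimator follows; the symmetric test matrix and the number of probes V are chosen by us purely for illustration.

# Hutchinson's trace estimator with Rademacher probes versus the exact trace.
import torch

torch.manual_seed(0)
D, V = 50, 1000
A = torch.randn(D, D)
A = A + A.t()                                                    # symmetric test matrix

estimate = 0.0
for _ in range(V):
    v = torch.randint(0, 2, (D,), dtype=torch.float32) * 2 - 1   # Rademacher entries +-1
    estimate = estimate + (v @ (A @ v)) / V                      # running average of v^T A v

print(float(estimate), float(torch.trace(A)))                    # the two values should be close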
Computational time for the direct functorch computation of the Hessian trace and for the Hutchinson's trick is shown in Figure <ref>.
|
http://arxiv.org/abs/2307.03754v1
|
20230704120626
|
COME: Cylindrical oriented muon emission in GEANT4 simulations
|
[
"Ahmet Ilker Topuz"
] |
physics.ins-det
|
[
"physics.ins-det"
] |
COME: Cylindrical oriented muon emission in GEANT4 simulations
A. Ilker Topuz^1
^1Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411, Tartu, Estonia
=====================================================================================
In this study, a source scheme based on source biasing as well as a discrete energy spectrum in the cylindrical geometry is presented for the simulations of muon tomography in the GEANT4 toolkit. First, a lateral cylindrical surface as well as a top circular disc act as a generation surface that surrounds the tomographic setup, and the generated muons are directed towards the origin where the target volume is situated. Secondly, the kinetic energy of the entering muons is assigned by using an 80-bin discrete energy spectrum between 0 and 8 GeV that is extracted from the CRY muon generator. Thus, the present recipe is called cylindrical oriented muon emission (COME). This source scheme may especially find its applications in the cases where the lateral muon detectors are utilized in order to profit from the horizontal or horizontal-like muons.
Keywords: Muon tomography; GEANT4; Monte Carlo simulations; Cylindrical source; Source biasing; Discrete energy spectra
§ INTRODUCTION
Muon tomography <cit.> is a promising technique that is practiced in order to discriminate the target volumes such as nuclear materials. Regarding the GEANT4 simulations <cit.> performed in the area of muon tomography, a number of muon generators like EcoMug <cit.> have already been proposed, providing a couple of options in terms of the source geometry under certain conditions such as the continuous energy spectrum, the continuous angular spectrum, and the coupling between these two spectra. By recalling the existence of the tomographic configurations where the lateral detector layers are also utilized to benefit from the horizontal muons <cit.> as shown in Fig <ref>, the cylindrical sources might be at disposal to cover all the possible angles.
In the present study, a cylindrical source scheme hinged on the source biasing and the discrete energy spectrum is exhibited by using G4ParticleGun in the GEANT4 toolkit (see Appendix A). Both the lateral and the top surfaces are employed in order to include all the possible directions, and a fraction parameter governs the corresponding percentage of the generated muons on either surface. The generated muons on these surfaces are oriented towards the target volume via the vector construction between the generation point and the origin, and the kinetic energies are assigned through the discretization of the CRY energy spectrum between 0 and 8 GeV as described in the other studies <cit.>. This study is organized as follows. The methodology is explained in section <ref> where section <ref> shows the muon emission from the lateral cylindrical surface, while section <ref> refers to the emission from the top disc. Finally, the conclusions are drawn in section <ref>.
§ METHODOLOGY
§.§ Emission from lateral cylindrical surface
The lateral surface generation is initiated by randomizing the azimuthal angle φ
φ=2×π× G4UniformRand()
The coordinate transformation yields the generated points on the lateral cylindrical surface for a cylinder of radius R and height h in the Cartesian coordinates as described in
x_i=R×cosφ
and
y_i=h× G4UniformRand()
and
z_i=R×sinφ
Then, the generated particles on the cylindrical surface are directed to the origin in order to minimize the particle loss
x_f=0,
y_f=0,
z_f=0
Next, by constructing a vector from the lateral cylindrical surface to the origin, one obtains
px=x_f-x_i,
py=y_f-y_i,
pz=z_f-z_i
Thus, the selective momentum direction denoted by P⃗=(P_x, P_y, P_z) is
P_x=px/√(px^2+py^2+pz^2),
P_y=py/√(px^2+py^2+pz^2),
P_z=pz/√(px^2+py^2+pz^2)
§.§ Emission from circular top surface
The generation of the circular top surface is also initiated by randomizing the azimuthal angle φ as shown in <cit.>
φ=2×π× G4UniformRand()
The generation points on the top disk in the Cartesian coordinates are given by
x_i=R×√( G4UniformRand())×cosφ
and
y_i=h
and
z_i=R×√( G4UniformRand())×sinφ
Then, the generated particles on the circular surface are again directed to the origin
x_f=0,
y_f=0,
z_f=0
Next, by constructing a vector from the disc surface to the origin, one obtains
px=x_f-x_i,
py=y_f-y_i,
pz=z_f-z_i
Thus, the selective momentum direction denoted by P⃗=(P_x, P_y, P_z) is
P_x=px/√(px^2+py^2+pz^2),
P_y=py/√(px^2+py^2+pz^2),
P_z=pz/√(px^2+py^2+pz^2)
Finally, the simulation preview through the present scheme is displayed in Fig <ref>.
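A compact standalone sketch of this sampling (outside GEANT4) is given below; the radius, the height, and the 50/50 split between the two surfaces mirror the appendix code, while everything else is illustrative.

# Standalone sketch of the COME sampling: pick a point on the lateral surface or the
# top disc, then aim the muon at the origin; values mirror the appendix code.
import math, random

def sample_muon(radius=100.0, height=150.0):
    phi = 2.0 * math.pi * random.random()              # azimuthal angle
    if random.random() <= 0.5:                         # lateral cylindrical surface
        x0, y0, z0 = radius * math.cos(phi), height * random.random(), radius * math.sin(phi)
    else:                                              # top circular disc
        r = radius * math.sqrt(random.random())        # sqrt keeps the area density uniform
        x0, y0, z0 = r * math.cos(phi), height, r * math.sin(phi)
    px, py, pz = -x0, -y0, -z0                         # direct towards the origin
    norm = math.sqrt(px * px + py * py + pz * pz)
    return (x0, y0, z0), (px / norm, py / norm, pz / norm)

position, direction = sample_muon()
print(position, direction)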
§ CONCLUSION
To conclude, the algorithmic recipe of a biased cylindrical source scheme with a discrete energy spectrum is presented for the applications of muon tomography in the GEANT4 simulations. This scheme might be introduced into the muon generators such as CRY in order to prefer the cylindrical surfaces rather than a planar generation.
§ APPENDIX A - BIASED DISCRETE CYLINDRICAL MUON SOURCE
#include "B1PrimaryGeneratorAction.hh"
#include "G4LogicalVolumeStore.hh"
#include "G4LogicalVolume.hh"
#include "G4Box.hh"
#include "G4RunManager.hh"
#include "G4ParticleGun.hh"
#include "G4ParticleTable.hh"
#include "G4ParticleDefinition.hh"
#include "G4SystemOfUnits.hh"
#include "Randomize.hh"
#include <iostream>
using namespace std;
//....oooOO0OOooo........oooOO0OOooo........oooOO0OOooo........oooOO0OOooo......
B1PrimaryGeneratorAction::B1PrimaryGeneratorAction()
: G4VUserPrimaryGeneratorAction(),
fParticleGun(0),
fEnvelopeBox(0)
{
G4int n_particle = 1;
fParticleGun = new G4ParticleGun(n_particle);
// default particle kinematic
G4ParticleTable* particleTable = G4ParticleTable::GetParticleTable();
G4String particleName;
G4ParticleDefinition* particle
= particleTable->FindParticle(particleName="mu-");
fParticleGun->SetParticleDefinition(particle);
}
//....oooOO0OOooo........oooOO0OOooo........oooOO0OOooo........oooOO0OOooo......
B1PrimaryGeneratorAction::~B1PrimaryGeneratorAction()
{
delete fParticleGun;
}
//....oooOO0OOooo........oooOO0OOooo........oooOO0OOooo........oooOO0OOooo......
void B1PrimaryGeneratorAction::GeneratePrimaries(G4Event* anEvent)
{
//Discrete probabilities
double A[]= {0.0, 0.01253639, 0.02574546, 0.02802035, 0.02706636, 0.03528534, 0.02826496,
0.03157946, 0.03078447, 0.02777574, 0.02546415, 0.03150608, 0.02815489,
0.02580661, 0.02364179, 0.02170935, 0.02152589, 0.02348279, 0.02134243,
0.0196913, 0.02036398, 0.01841931, 0.01718402, 0.01700056, 0.01624226,
0.01539835, 0.01536166, 0.01471344, 0.01422421, 0.01412637, 0.01284215,
0.01260977, 0.01213278, 0.0129033, 0.01248746, 0.01196155, 0.01064064,
0.01057949, 0.0096255, 0.0103838, 0.00928304, 0.00879382, 0.00884274,
0.00793767, 0.00786429, 0.00769306, 0.00709376, 0.00736283, 0.0071916,
0.00721607, 0.00692253, 0.00643331, 0.00678799, 0.00673907, 0.00618869,
0.00634769, 0.00665346, 0.00650669, 0.00561385, 0.00589516, 0.00589516,
0.00578508, 0.00557716, 0.00550378, 0.00434187, 0.0043541, 0.00408503,
0.00364472, 0.00399941, 0.00388934, 0.00396272, 0.00431741, 0.00368142,
0.00363249, 0.00362026, 0.00410949, 0.00336342, 0.00358357, 0.00362026,
0.00348573, 0.0035958};
//Discrete energies
double B[]= {0.0, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000,
1100, 1200, 1300, 1400, 1500, 1600, 1700, 1800, 1900, 2000,
2100, 2200, 2300, 2400, 2500, 2600, 2700, 2800, 2900, 3000,
3100, 3200, 3300, 3400, 3500, 3600, 3700, 3800, 3900, 4000,
4100, 4200, 4300, 4400, 4500, 4600, 4700, 4800, 4900, 5000,
5100, 5200, 5300, 5400, 5500, 5600, 5700, 5800, 5900, 6000,
6100, 6200, 6300, 6400, 6500, 6600, 6700, 6800, 6900, 7000,
7100, 7200, 7300, 7400, 7500, 7600, 7700, 7800, 7900, 8000};
G4int SizeEnergy=sizeof(B)/sizeof(B[0]);
G4int SizeProbability=sizeof(A)/sizeof(A[0]);
G4double Grid[sizeof(B)/sizeof(B[0])];
double sum=0;
for(int x=0; x < 81; x++)
{
sum=sum+A[x];
Grid[x]=sum;
}
G4double radius=100*cm; //Radius of cylinder
G4double height=150*cm; //Height of cylinder
//Centerally focused cylindrical source via coordinate transformation - by AIT
G4double theta=2*3.14159265359*G4UniformRand();
G4double fraction=G4UniformRand();
G4double x0;
G4double y0;
G4double z0;
if(fraction<=0.5)
{
//Coordinates on lateral cylindrical surface
x0=radius*cos(theta);
y0=height*G4UniformRand();
z0=radius*sin(theta);
}
if(fraction>0.5)
{
//Coordinates on circular top surface
x0=radius*sqrt(G4UniformRand())*cos(theta);
y0=height;
z0=radius*sqrt(G4UniformRand())*sin(theta);
}
fParticleGun->SetParticlePosition(G4ThreeVector(x0,y0,z0));
//Aimed at origin
G4double x1=0;
G4double y1=0;
G4double z1=0;
G4double mx = x1-x0;
G4double my = y1-y0;
G4double mz = z1-z0;
G4double mn = sqrt(pow(mx,2)+pow(my,2)+pow(mz,2));
mx = mx/mn;
my = my/mn;
mz = mz/mn;
fParticleGun->SetParticleMomentumDirection(G4ThreeVector(mx,my,mz));
G4double Energy=0; //Just for initialization
G4double pseudo=G4UniformRand();
for (int i=0; i < 80; i++)
{
if(pseudo > Grid[i] && pseudo <= Grid[i+1])
{
Energy=B[i+1];
}
}
std::ofstream EnergyFile;
EnergyFile.open("Energy.txt", std::ios::app);
EnergyFile << Energy << G4endl;
EnergyFile.close();
fParticleGun->SetParticleEnergy(Energy);
fParticleGun->GeneratePrimaryVertex(anEvent);
}
//....oooOO0OOooo........oooOO0OOooo........oooOO0OOooo........oooOO0OOooo......
ieeetr
|
http://arxiv.org/abs/2307.00238v1
|
20230701055945
|
Unified Transfer Learning Models for High-Dimensional Linear Regression
|
[
"Shuo Shuo Liu"
] |
stat.ML
|
[
"stat.ML",
"cs.LG"
] |
Unified Transfer Learning Models for High-Dimensional Linear Regression
Shuo Shuo Liu [email protected]
Columbia University
New York, NY 10032,
United States
August 1, 2023
============================================================================================================
Transfer learning plays a key role in modern data analysis when: (1) the target data are scarce but the source data are sufficient; (2) the distributions of the source and target data are heterogeneous.
This paper develops an interpretable unified transfer learning model, termed as UTrans, which can detect both transferable variables and source data.
More specifically, we establish the estimation error bounds and prove that our bounds are lower than those with target data only.
Besides, we propose a source detection algorithm based on hypothesis testing to exclude the nontransferable data.
We evaluate and compare UTrans to the existing algorithms in multiple experiments.
It is shown that UTrans attains much lower estimation and prediction errors than the existing methods, while preserving interpretability.
We finally apply it to the US intergenerational mobility data and compare our proposed algorithms to the classical machine learning algorithms.
High-dimensional inference; High-dimensional linear regression; Multi-task learning; Penalized regression; Transfer learning
§ INTRODUCTION
Predictive models, which employ the training data to make predictions, have been effectively used to guide decision making in various applications (see some methods and applications in <cit.>).
Modern data extraction techniques further improve model predictions and statistical inference by utilizing a collection of massive and diverse data.
With data from multiple sources, the superior predictive ability of these models relies on the hypothesis that these multi-source data share a homogeneous or similar distribution.
When such hypothesis fails, most predictive models using the training data lose the prediction power and require reconstruction by gathering new data from the same distribution.
However, the cost of collecting new data or the privacy limit of integrating multiple data may hinder the reconstruction.
To improve the predictive performance, one of the possible solutions is to transfer and integrate the useful source data.
In this scenario, transferring data knowledge from one source (namely, source data) to another (namely, target data) would be required, of which the learning process is called transfer learning in the literature <cit.>.
The three main themes for researchers in transfer learning are: what to transfer, when to transfer, and how to transfer?
Transfer learning has drawn extensive attention for decades and been applied in many fields including Web-document classification, Wifi data calibration, medical diagnosis, and so on.
See more examples in the recent survey papers <cit.>.
Besides these applications of transfer learning in the machine learning community, some methodological and theoretical works are also developed.
<cit.> establishes the transferability measure on the discrete and continuous data by the χ^2-distance between the source and target data.
<cit.> studies a two-stage empirical risk minimization procedure to transfer learning and provides generalization bounds with general losses, tasks, and features.
However, little attention has been paid to transfer learning in the statistical framework, which can generate interpretable results and study the theoretical guarantees.
In this paper, we aim to fill this gap, develop new statistical transfer learning models in the context of high-dimensional linear regression, and improve the predictive performances of the existing transfer learning models.
Next we review some recent transfer learning models for high-dimensional regression in the literature.
§.§ Statistical transfer learning models
High-dimensional linear models based on one source data with suitable regularizations have been developed extensively over the past decade <cit.> due to the high-dimensional nature of real-world data.
For example, in gene expression data, it is common to encounter a few observations but hundreds of thousands of genes.
In financial data, it is widely seen that the number of features is much larger than the number of individual stocks.
The high-dimensional linear regression model, with single-source data, takes the form
y_1= X_1β_1+ϵ_1,
where y_1∈ℝ^n_1, X_1∈ℝ^n_1× p, β_1∈ℝ^p, and ϵ_1∈ℝ^n_1.
With the high-dimensional data, we allow the dimension p≫ n_1 for the unknown coefficient vector β_1.
Transfer learning has been studied recently in statistical models <cit.>.
For example, in the high-dimensional linear regression model <cit.>, the target model is
y_0i= x_0i^⊤β_0+ϵ_0i, i=1,⋯, n_0
and the source model from the k-th source data, k=1,⋯,K', is
y_ki= x_ki^⊤β_k+ϵ_ki, i=1,⋯, n_k,
where x_ki∈ℝ^p and β_k∈ℝ^p, k=0,1,⋯,K'.
Useful source data are transferred to the target data only if the transferring set 𝒜_h satisfies
𝒜_h={1≤ k≤ K': ‖β_0-β_k‖_q≤ h}
for a relatively small transferring level h.
This model, named Trans-Lasso, leverages the linear regression model to bridge the source and target data and transfers source data to the target data when k∈𝒜_h.
Trans-Lasso solves w from the source data in the first step and then debiases the estimation from the target data in the second step.
Let n_𝒜_h denote the sample size of the source data in 𝒜_h.
More specifically, the first step solves
ŵ= argmin_{ w ∈ℝ^p}{1/2n_𝒜_h∑_k ∈𝒜_h‖ y^(k)- X^(k) w‖_2^2+λ‖ w‖_1}
via integrating the diverse information from multiple sources.
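As an illustration of this two-step procedure, the following sketch runs the pooled first step and the target debiasing step on synthetic data with a known transferring set; all dimensions, the contrast level, and the regularization strengths are our own illustrative choices rather than values from <cit.>.

# Schematic two-step Trans-Lasso on synthetic data (illustrative parameters only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
p, n0, nk, K = 200, 100, 150, 4
beta0 = np.zeros(p); beta0[:5] = 1.0                        # sparse target coefficients

X0 = rng.standard_normal((n0, p))
y0 = X0 @ beta0 + 0.5 * rng.standard_normal(n0)
sources = []
for _ in range(K):                                          # informative sources, small contrast
    beta_k = beta0 + rng.standard_normal(p) * 0.05 / p
    Xk = rng.standard_normal((nk, p))
    sources.append((Xk, Xk @ beta_k + 0.5 * rng.standard_normal(nk)))

# Step 1: pooled Lasso over the transferable sources yields w_hat.
X_pool = np.vstack([Xk for Xk, _ in sources])
y_pool = np.concatenate([yk for _, yk in sources])
w_hat = Lasso(alpha=0.05).fit(X_pool, y_pool).coef_

# Step 2: debias on the target data with a Lasso fit to the residuals.
delta_hat = Lasso(alpha=0.05).fit(X0, y0 - X0 @ w_hat).coef_
beta_hat = w_hat + delta_hat

beta_lasso = Lasso(alpha=0.05).fit(X0, y0).coef_            # target-only Lasso for comparison
print(np.linalg.norm(beta_hat - beta0), np.linalg.norm(beta_lasso - beta0))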
<cit.> and <cit.> extend the results of <cit.> and <cit.> to high-dimensional generalized linear models (GLMs) and functional linear models, respectively.
Minimax optimal rates of the estimation error are also established.
Some consistent estimators of 𝒜_h are required, such as the Q-aggregation <cit.> and data-splitting estimator under some conditions <cit.>.
Other nonparametric predictive models also exist in the literature, such as the adaptive transfer learning with minimax optimal rates of convergence based on k-nearest neighbour <cit.>.
Notably, multi-task learning is a closely related topic to transfer learning, but with different goals and interests.
Multi-task learning integrates multiple learning tasks simultaneously while exploiting a shared structure across all tasks.
In contrast, the interest of transfer learning is to learn the target data only by transferring some shared knowledge from the source data.
Therefore, learning the source data is not the focus of transfer learning.
We also clarify the difference in Section <ref>.
In this paper, our contributions include:
* We propose a novel unified transfer learning model by redefining the design matrix and the response vector in the context of the high-dimensional linear regression with a flexible penalty function.
When the transferring set is known, the theoretical results show that it attains tighter upper bounds of the ℓ_1/ℓ_2 estimation errors than Lasso using the target data only.
We also show that our bound is tighter than that in <cit.> when the transferring level h≳(log p/n)^1/2.
* Detecting the transferable data, including transferable source data and transferable variables, is a major task in transfer learning.
Our unified model is able to automatically identify the transferable variables after model estimation.
To the best of our knowledge, this is the first work to identify transferable variables through the structure of the model and the first to detect transferable source data by hypothesis testing.
§.§ Notation and organization
We denote scalars with unbolded letters (e.g., sample size n and dimensionality p), (random) vectors with boldface lowercase letters (e.g., y and β), and matrices with boldface capital letters (e.g., X).
Let {( X_k, y_k): X_k∈ℝ^n_k× p, y_k∈ℝ^n_k}_k=1^K' denote the multiple source data and let ( X_0, y_0) be the target data.
We use ⊤ to represent the transpose of vectors or matrices, such as x^⊤ and X^⊤.
For a p-dimensional vector x=(x_1, ⋯, x_p), the ℓ_q norm is x_q=(∑_i=1^p |x_i|^q)^1/q, the ℓ_0 norm is the number of non-zero elements, and the infinity (maximum) norm is x_∞=max_i |x_i|.
We write x_i_ψ_1 as the sub-exponential norm for independent centered sub-exponential random variable x_i, i=1,⋯,n.
|ℳ| denotes the cardinality of the set ℳ.
Let s be the sparsity level of a vector.
A set with superscript c denotes its complement, for example, 𝒜_h^c.
Let a ∨ b and a ∧ b denote the maximum and minimum between a and b, respectively.
We use letters C and c with different subscriptions to denote the positive and absolute constants.
Let a_n=O(b_n) denote |a_n/b_n|≤ c for some constant c when n is large enough.
Let a_n=O_P(b_n) and a_n≲ b_n denote P(|a_n/b_n|≤ c)→ 1 for c<∞.
Let a_n=o_P(b_n) denote P(|a_n/b_n|> c)→ 0 for c>0.
Finally, a_n≍ b_n means that a_n/b_n converges to some positive constant.
The paper is organized as follows. Section <ref> introduces the proposed models by redefining the response vector and the design matrix, and studies the associated theoretical properties.
In Section <ref>, we propose to detect transferable data by utilizing high-dimensional hypothesis testing.
In Section <ref>, we compare the proposed algorithm to the existing algorithms with various simulations, including the cases of known and unknown 𝒜_h.
Section <ref> applies the proposed algorithm to the healthcare analysis of COVID data to predict the death rate using the county-levels characteristics.
We also investigate the transferabilities between the source data and the target data via a network analysis.
The proofs of main theorems and additional data are included in the Appendix.
§ UNIFIED TRANSFER LEARNING MODELS
Throughout the following sections,
we abbreviate 𝒜_h by 𝒜 for simplicity and use K to denote the number of transferable source data.
The first step (namely, transferring step) of the transfer learning models for high-dimensional linear regression <cit.>
is essentially equivalent to stacking all source data, assuming 𝒜 is known:
ŵ= w ∈ℝ^pmin{1/2n_𝒜 y'- X' w_2^2+λ w_1},
where y'=[ y_1^⊤,⋯, y_K^⊤]^⊤, X'=[ X_1^⊤,⋯, X_K^⊤]^⊤, and n_𝒜 is the total sample size of the source data.
<cit.> proposes to stack the source data and the target data in the GLMs in the transferring step.
We refer to these methods as vertical stacking methods.
The assumption behind these methods is that the data (the source data in 𝒜 or the target and the source data in 𝒜) share a similar coefficient w.
Stacking the data together in the way of Eq. (<ref>) may produce a better estimation when different data are close, but might be insufficient to identify the transferable variables in the source data.
For example, we are unable to identify the transferable variables to the target data for the k-th source data.
Therefore, we consider a new approach, unified transfer learning models, for transfer learning in the high-dimensional linear regression in this section.
§.§ 𝒜-UTrans: transfer learning with known 𝒜
Instead of stacking the target data and the source data in 𝒜 vertically, we propose to stack them both vertically and horizontally by
[[ y_1; ⋮; y_K; y_0 ]]=[[ X_1; 0; ⋮; 0 ]](β_1-β_0)+[[ 0; X_2; ⋮; 0 ]](β_2-β_0)+⋯+[[ X_1; ⋮; X_K; X_0 ]] β_0+[[ ϵ_1; ⋮; ϵ_K; ϵ_0 ]].
The aforementioned model can be written as y= Xβ+ϵ, where
y=[ y_1^⊤,⋯, y_K^⊤, y_0^⊤]^⊤, β=[(β_1-β_0)^⊤, (β_2-β_0)^⊤, ⋯, β_0^⊤]^⊤∈ℝ^p^*, ϵ=[ϵ_1^⊤, ⋯, ϵ_K^⊤, ϵ_0^⊤]^⊤, and
X=[ X_1 0 ⋯ ⋯ 0 X_1; 0 X_2 0 ⋯ 0 X_2; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 ⋯ ⋯ ⋯ X_K X_K; 0 ⋯ ⋯ ⋯ 0 X_0; ]∈ℝ^(n_𝒜+n_0)× p^*,
where p^*=Kp+p and 𝒜={k:1≤ k≤ K}.
In this paper, we assume that K is fixed.
We consider a more general penalty function that includes the Lasso used in the current literature and other regularizers to deal with the high-dimensional data.
The loss function of the penalized least square is
ℒ_n(β)=1/2(n_𝒜+n_0)𝐲-𝐗β_2^2+P_λ(β)
and we denote this unified transfer learning model as 𝒜-UTrans and summarize the details in Algorithm <ref>.
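To make the construction concrete, the sketch below builds the stacked response y and block design matrix X from the displayed layout and then fits a Lasso-penalized version of the loss; this is only an illustration under an assumed data layout (lists of source pairs plus the target pair), and the Lasso is one admissible choice of P_λ rather than the method's required regularizer (SCAD or MCP would need a dedicated solver).

```python
import numpy as np
from sklearn.linear_model import Lasso

def build_unified_design(X_sources, y_sources, X0, y0):
    """Stack the sources and the target both vertically and horizontally.

    Returns (X, y) with X of shape (n_A + n_0, K*p + p): the k-th block column
    carries X_k (for the contrast beta_k - beta_0) and the last block column
    stacks [X_1; ...; X_K; X_0] (for beta_0), matching the displayed matrix.
    """
    K = len(X_sources)
    p = X0.shape[1]
    rows = []
    for k, Xk in enumerate(X_sources):
        row = [np.zeros((Xk.shape[0], p)) for _ in range(K)]
        row[k] = Xk                      # contrast block for source k
        row.append(Xk)                   # shared beta_0 block
        rows.append(np.hstack(row))
    target_row = [np.zeros((X0.shape[0], p)) for _ in range(K)] + [X0]
    rows.append(np.hstack(target_row))
    X = np.vstack(rows)
    y = np.concatenate(list(y_sources) + [y0])
    return X, y

def a_utrans_lasso(X_sources, y_sources, X0, y0, lam):
    """A-UTrans with a Lasso penalty (one choice of P_lambda), a rough sketch."""
    X, y = build_unified_design(X_sources, y_sources, X0, y0)
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
    p = X0.shape[1]
    beta0_hat = fit.coef_[-p:]                               # target coefficient
    contrasts = fit.coef_[:-p].reshape(len(X_sources), p)    # beta_k - beta_0
    return beta0_hat, contrasts
```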
We clarify why we formulate the problem as transfer learning rather than multi-task learning.
When we have knowledge (prior knowledge or source detection) about the transferable source data, the intuitive way is to combine the transferable source data and the target data, which is analogous to multi-task learning.
Steps 2 and 3 in Algorithm <ref> are to obtain a good estimator in a multi-task fashion, similar to <cit.>.
Our ultimate goal is to improve the estimation and prediction of the target data, which is the main theme of transfer learning.
Step 4 transfers the knowledge (β_0) from the source data to make estimation and prediction for the data of interest (target data).
Therefore, it fits the theme of transfer learning more appropriately.
The penalty function P_λ(β)=∑_j=1^p^* p_λ(|β_j|) satisfies the following conditions
* P_λ(0)=0 and P_λ(t) is symmetric around 0.
* P_λ(t) is differentiable for t ≠ 0 and lim _t → 0^+ P_λ^'(t)=λ L.
* P_λ(t) is a non-decreasing function on t ∈[0, ∞).
* P_λ(t) / t is a non-increasing function on t ∈(0, ∞).
* There exists τ>0 such that P_λ(t)+τ/2 t^2 is convex.
Conditions (i)–(iii) are relatively mild and used in <cit.>.
Condition (iv) makes sure that the bound of error β̂-β_2 is vanishingly small.
These mild conditions on P_λ(β) are commonly satisfied by many regularizers including Lasso <cit.>, SCAD <cit.>, and MCP <cit.>.
For more details, refer to <cit.>.
We argue two benefits of our 𝒜-UTrans models.
First, the unified transfer learning model explicitly writes the contrasts β_k-β_0.
The k-th source data are transferable if β_k-β_0= 0.
This method, therefore, provides an opportunity to detect transferable sources by testing β_k-β_0= 0.
In Section <ref>, we propose to use hypothesis testing to detect transferable source data.
Second, other than detecting the transferable source data, our method can also detect the transferable variables in each source data.
For example, we obtain the set containing the transferable variables in the k-th source data by 𝒯_k={j: β_kj-β_0j=0, 1≤ j≤ p}.
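In practice the contrasts are estimated rather than exactly zero, so identifying 𝒯_k requires a tolerance; the short snippet below (continuing the hypothetical estimates from the previous sketch) illustrates one such thresholding rule, where the cutoff is our assumption and not part of the method.

```python
def transferable_variables(contrasts, tol=1e-8):
    """Indices j with (beta_k - beta_0)_j estimated as (numerically) zero, per source k."""
    return {k: [j for j, v in enumerate(row) if abs(v) <= tol]
            for k, row in enumerate(contrasts)}
```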
§.§ Theoretical properties of 𝒜-UTrans
We define the parameter space of 𝒜-UTrans by
Θ_1(s,h)={β: β_0_0≤ s, max_k∈𝒜β_k-β_0_1≤ h}.
Note that this parameter space specifies the sparsity of β_0 and constraints the maximum ℓ_1 distance between β_k and β_0 to h.
We further impose the following conditions to study the theories of 𝒜-UTrans:
* C1. Each row of X_k, k∈𝒜∪{0}, is independent and identically distributed (i.i.d) normal random vector with mean zero and covariance matrix Σ_k.
The largest eigenvalue of Σ_k, Λ_max(Σ_k), is bounded above.
* C2. The random noises ϵ_ki in the k-th source data, i=1, ⋯, n_k and k∈𝒜∪{0}, are i.i.d sub-Gaussian random variable with mean zero and parameter σ_k^2.
* C3. The sample covariance matrix
Σ_k=1/n_k X_k^⊤ X_k, k∈𝒜∪{0}, satisfies the restricted strong convexity (RSC) condition
Δ_k^⊤Σ_k Δ_k ≥ v_kΔ_k_2^2-τ_k √(log p/n_k)Δ_k_1 for Δ_k ∈ℝ^p and Δ_k_1≥ 1,
where v_k>0 and τ_k≥ 0.
In C1, the source data and the target data are assumed to have Gaussian designs.
The covariance matrix Σ_k can be homogeneous or heterogeneous among the source and the target data.
Unlike <cit.>, whose theory is established separately for homogeneous and heterogeneous covariance matrices, our theory covers both cases due to the unified structure of X.
This condition is imposed for theoretical convenience and can be relaxed to sub-Gaussian designs.
We also conduct simulation studies when data are generated from other distributions (see Appendix B).
Condition C2 assumes the sub-Gaussian random noises for the source and the target data, which are used for the convergence rate analysis.
Condition C3 assumes the RSC condition for each sample covariance matrix.
This condition is widely used to study the non-asymptotic error bounds in high-dimensional statistics.
It is shown that the RSC condition is met with high probability under sub-Gaussian assumption <cit.>.
We mention that the RSC condition can be replaced by the restricted eigenvalue (RE) condition <cit.>.
For simplicity, denote n=n_𝒜+n_0.
We have the following RSC condition on the sample covariance matrix of X.
Let Σ= X^⊤ X/n be the sample covariance matrix of X.
With the RSC conditions on each Σ_k and the assumption that n_k/n is bounded below by a constant, k∈𝒜∪{0}, we have
Δ^⊤ΣΔ≥ v'Δ_2^2-τ_0 (√(n_mlog p/n^2)+√(n_0log p/n^2))Δ_1 for Δ=β̂-β∈ℝ^p^*,
where v'>0, τ_0≥ 0, and n_m=max_k∈𝒜n_k.
Theorem <ref> implies that the sample covariance matrix Σ in the unified model admits a similar RSC condition as that from a single source data.
The term √(n_mlog p/n^2)+√(n_0log p/n^2)≲√(log p/n) in the lower bound is essential for establishing the estimation error bound.
Note that this term is upper bounded by √(log p/n_0).
Thus, a tighter error bound than the model using target data only can be established.
From this theorem, we observe
Δ^⊤ΣΔ≥ v'Δ_2^2-2τ_0 √(log p/n)Δ_1,
which trivially holds for Δ_1/Δ_2^2≥v'/2τ_0√(n/log p) since the left-hand side is nonnegative.
Thus, we only enforce a type of strong convexity condition over a cone of the form {Δ_1/Δ_2^2≤v'/2τ_0√(n/log p)}.
Based on Theorem <ref>, we have the following ℓ_1/ℓ_2 estimation error bounds.
With the conditions on the regularizer P_λ(β) and conditions C1–C3, let λ=c_1√(log p/n) for a positive constant c_1.
Suppose 𝒜 is known with h≲ s(log p/n_𝒜)^1/2 and (slog p/n)^1/2+h^1/2(log p/n)^1/4=o(1),
then there exists some positive constant c such that
β̂_0-β_0_2≲(slog p/n)^1/2+(log p/n)^1/4h^1/2
and
β̂_0-β_0_1≲ s(log p/n)^1/2+(log p/n)^1/4(sh)^1/2
hold with probabilities at least 1-cp^-1,
where s=β_0_0 is the sparsity level and h=max_k∈𝒜β_k-β_0_1.
Theorem <ref> shows how the estimation errors of β_0 are affected by n_𝒜, n_0, s, p, and h.
The ℓ_1 error can be analyzed similarly to the ℓ_2 error, so we only analyze the ℓ_2 error here.
First, we require a vanishing h as n_𝒜 increases via the condition h≲ s(log p/n_𝒜)^1/2.
When n_𝒜 is extremely large, h becomes small enough to reduce the bias from the source data.
Particularly, h=0 implies that the source data are completely transferable to the target data (β_k=β_0).
In this case, the ℓ_2 error becomes O_P(√(slog p/n)), the convergence rate of stacking all data vertically.
Second, without any available source data (n_𝒜=0 and h=0), the ℓ_2 upper bound becomes √(slog p/n_0), the same rate as Lasso on target data only.
Third, Theorem <ref> holds with the condition slog p=o(n_𝒜) when n_0≲ n_𝒜, which is weaker than the condition slog p=o(n_0) for Lasso using the target data only.
Fourth, the ℓ_2 error bound of 𝒜-Trans-GLM (Theorem 1 of <cit.>) is (slog p/n)^1/2+[(log p/n_0)^1/4h^1/2]∧ h.
It is not hard to see that ours is the same as 𝒜-Trans-GLM when h≲(log p/n)^1/2 and tighter than that when h≳(log p/n)^1/2.
We next derive the prediction error bound of 𝒜-UTrans.
Let ℰ_n_v=1/n_v X_v(β̂_0-β_0)_2^2 be the mean squared prediction error based on testing data X_v.
With the same conditions in Theorem <ref> and some positive constant c,
ℰ_n_v≲slog p/n_v+(log p/n_v)^3/4(sh)^1/2+h(log p/n_v)^1/2
holds with probability at least 1-cp^-1,
where X_v is the testing data and n_v is the corresponding testing data size.
§ UTRANS: TRANSFER LEARNING WITH SOURCE DETECTION
The 𝒜-UTrans algorithm in Section <ref> assumes that the source data and the target data are similar to some extent, which might be unrealistic for an arbitrary dataset since h can be small or large.
In fact, transferring nontransferable source data to the target data may bring adverse effects and lead to worse performance than the model with target data only <cit.>.
Therefore, a source detection algorithm is necessary in transfer learning.
Recall that our unified model, with X_k and X_0, explicitly writes out the contrast β_k-β_0 with
μ=[ X_k X_k; 0 X_0; ][ β_k-β_0; β_0; ]:= W (β_k-β_0)+ Z β_0
where μ=E(Y| Z= z, W= w), W=( X_k^⊤, 0)^⊤, and Z=( X_k^⊤, X_0^⊤)^⊤.
Let β=[(β_k-β_0)^⊤, β_0^⊤]^⊤ (note that β is defined differently from that in Section <ref>).
By testing H_0: β_k-β_0= 0 vs H_1: β_k-β_0≠ 0, we detect if the source data X_k are transferable to X_0.
Both the parameter of interest β_k-β_0 and the nuisance parameter β_0 are p-dimensional.
Methods for testing a high-dimensional vector in the presence of a high-dimensional nuisance parameter are very limited in the literature. Recently, <cit.> proposed a U test statistic for high-dimensional regression models, which extends the results on testing a low-dimensional parameter of interest in <cit.> and <cit.>.
Motivated by these works, we propose an asymptotic α-level test that rejects H_0 if
|Û_n_k|/√(2R̂_n_k)>z_1-α/2
where z_1-α/2 is the (1-α/2)-th quantile of a standard normal distribution and
Û_n_k=1/n_k∑_i ≠ i'^n_k{(y_i-μ̂_∅ i)(y_i'-μ̂_∅ i') x_ki^⊤ x_ki'}
R̂_n_k=1/n_k^2-n_k∑_i ≠ i'^n_k{(y_i-μ̂_∅ i)^2(y_i'-μ̂_∅ i')^2( x_ki^⊤ x_ki')^2},
μ̂_∅ i= z_i^⊤β̂_0, where β̂_0 is obtained by fitting μ= z^⊤β_0 under the null hypothesis.
Note that z is high-dimensional, so we may obtain β̂_0 with the Lasso penalty or the SCAD penalty <cit.>, which generally predicts well in high-dimensional data and takes the following form
P_λ(β_0j)=
λ|β_0j|, if |β_0j| ≤λ,
(2 γλ|β_0j|-β_0j^2-λ^2)/(2(γ-1)), if λ<|β_0j|<γλ,
λ^2(γ+1)/2, if |β_0j| ≥γλ,
and we take γ=3.7.
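As an aside, the SCAD penalty displayed above is straightforward to evaluate; the snippet below is simply a transcription of that piecewise formula with γ = 3.7, and is not tied to any particular solver.

```python
import numpy as np

def scad_penalty(beta, lam, gamma=3.7):
    """Elementwise SCAD penalty P_lambda(|beta_j|) as displayed above."""
    a = np.abs(beta)
    small = lam * a                                          # |beta_j| <= lam
    mid = (2 * gamma * lam * a - a**2 - lam**2) / (2 * (gamma - 1))
    large = lam**2 * (gamma + 1) / 2                         # |beta_j| >= gamma*lam
    return np.where(a <= lam, small, np.where(a < gamma * lam, mid, large))
```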
Denote Λ_W^ϵ=tr[E{var(ϵ) x_k x_k^⊤}^2] where ϵ= y_k- x_k^⊤β_k.
Assume
* C4. Under H_0, there exist finite positive constants c_1 and C_1 such that
c_1≤λ_min{E( X_k X_k^⊤)}≤λ_max{E( X_k X_k^⊤)}≤ C_1,
where λ_min and λ_max denote the smallest and largest eigenvalues of E( X_k X_k^⊤), respectively.
Assume the conditions C1–C2 and C4 and slog p/n=o(1).
Under H_0, if n_k slog p/(n √(2Λ_W^ϵ))=o(1), then
lim_n_k→∞sup_β_0_2=O(1)P(|Û_n_k|/√(2R̂_n_k)>z_1-α/2)=α.
Theorem <ref> controls the probability of making a type I error (incorrectly excluding X_k when it is transferable).
Under the stated conditions, this probability becomes small as n_k→∞.
Let 𝒜̂ be the estimated transferring set from Algorithm <ref>.
With conditions in Theorem <ref>,
for any δ>0, there exists n_k such that
P(𝒜̂=𝒜)≥ 1-δ.
Algorithm UTrans
utilizes the tool of hypothesis testing to detect transferable source data.
To the best of our knowledge, this is the first work of using statistical inference for source detection in transfer learning.
We point out the benefit of our source detection algorithm.
Compared to Trans-GLM <cit.> which depends on the unknown constant C_0, our algorithm has no extra unknown parameters.
In fact, C_0 determines the threshold to select the transferable source data.
Without knowing the true value of C_0, a large value overestimates 𝒜 and a small value underestimates 𝒜.
Another round of cross-validation can be run to find C_0, but this increases the computational cost.
Nevertheless, our algorithm estimates 𝒜 by directly testing β_k-β_0= 0, which is more computationally efficient.
We compare our hypothesis testing method to Trans-GLM in Section <ref>.
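For concreteness, the decision rule above can be computed directly from residuals under the null model. The following Python sketch is our own illustration, not the authors' implementation: it assumes the fitted null means mu_hat are supplied (e.g. from a SCAD- or Lasso-penalized fit of μ = z^⊤β_0) and only evaluates Û_{n_k}, R̂_{n_k}, and the α-level comparison.

```python
import numpy as np
from scipy.stats import norm

def source_transfer_test(Xk, y, mu_hat, alpha=0.05):
    """Sketch of the test of H0: beta_k - beta_0 = 0 via the U statistic above.

    Xk : (n_k, p) rows of the k-th source design entering the sums,
    y : the corresponding responses, mu_hat : fitted means under H0.
    Returns (reject, standardized statistic).
    """
    n_k = Xk.shape[0]
    r = y - mu_hat                        # residuals under the null model
    G = Xk @ Xk.T                         # Gram matrix of x_ki^T x_ki'
    off = ~np.eye(n_k, dtype=bool)        # exclude the i = i' terms
    U = (np.outer(r, r) * G)[off].sum() / n_k
    R = ((np.outer(r, r) ** 2) * (G ** 2))[off].sum() / (n_k**2 - n_k)
    stat = np.abs(U) / np.sqrt(2 * R)
    return stat > norm.ppf(1 - alpha / 2), stat
```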
§ SIMULATION
We illustrate the performances of our 𝒜-UTrans and UTrans in various settings in terms of the ℓ_2-estimation error and the mean squared prediction error.
More specifically, we compare the following models:
* 𝒜-Trans-GLM and Trans-GLM: a two-step transferring model for linear regression without and with transferability detection, respectively, proposed by <cit.>.
* naive-Lasso: a model that fits the target data only using Lasso regression <cit.>.
* 𝒜-UTrans-Lasso: the proposed unified transfer learning model with Lasso regularization, i.e., P(β)=∑_j=1^p^*|β_j|.
* 𝒜-UTrans-SCAD: the proposed unified transfer learning model with SCAD regularization <cit.>.
R packages glmtrans and glmnet are used to implement Trans-GLM and Lasso on the target data only, respectively <cit.>.
The performances of the four models are compared in Subsections <ref>, <ref>, and Appendix B.
More concretely, subsection <ref> compares the aforementioned four models when 𝒜 is known and data are simulated from the normal distribution with autocorrelated covariance structure.
Subsection <ref> compares the transferability detection abilities of UTrans and Trans-GLM.
Appendix B shows the simulation results when the source data are from normal distribution with compound symmetry covariance structure, t-distribution, and Gaussian mixture model.
We consider 10 different settings for the number of source data, i.e., K and K' range from 1 to 10.
With each K and K', experiments are replicated 200 times.
§.§ Simulation with known 𝒜
This subsection is to show the theoretical properties in Theorem <ref> and the advantages of our 𝒜-UTrans algorithms in high-dimensional transfer learning with different dimensionalities p, target sizes n_0, and transferring levels h.
We consider simulations with n_0∈{50, 75, 100}, p∈{300, 500, 600, 900}, and h∈{5, 10, 20, 40}.
We let the sample size of the source data n_k=100 for all k=1,⋯, K and fix the sparsity level s=5 in the target data.
For the target data, let β_0=(0.5_s, 0_p-s), where 0.5_s means s repetitions of 0.5 and 0_p-s means p-s repetitions of 0.
Each target sample x_0iiid∼𝒩( 0_p, Σ) with element Σ_jj'=0.5^|j-j'| for i=1,⋯,n_0 and 1≤ j≠ j'≤ p.
For the k-th source data, we let β_k=(0.5_s+(h/p)ℛ_s, 0_p-s), where ℛ_s is an s-dimensional vector of independent Rademacher variables.
Each sample is generated from a p-dimensional 𝒩( 0_p, Σ+ϵϵ^⊤) with ϵ∼𝒩( 0_p, 0.3^2 I_p).
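To fix ideas, the following NumPy sketch generates one target dataset and K source datasets along the lines just described; the standard normal noise and the seed are our assumptions, since the noise distribution is not restated here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_cov(p, rho=0.5):
    idx = np.arange(p)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def simulate_known_A(n0=100, nk=100, p=500, s=5, h=5, K=5, sigma=1.0):
    """Target/source data as described: AR(1) target covariance, perturbed
    source covariance Sigma + eps eps^T, and Rademacher-shifted source betas."""
    Sigma = ar1_cov(p)
    beta0 = np.r_[0.5 * np.ones(s), np.zeros(p - s)]
    X0 = rng.multivariate_normal(np.zeros(p), Sigma, size=n0)
    y0 = X0 @ beta0 + sigma * rng.standard_normal(n0)
    sources = []
    for _ in range(K):
        rad = rng.choice([-1.0, 1.0], size=s)
        beta_k = beta0.copy()
        beta_k[:s] += (h / p) * rad
        eps = rng.normal(0.0, 0.3, size=p)
        Xk = rng.multivariate_normal(np.zeros(p), Sigma + np.outer(eps, eps), size=nk)
        yk = Xk @ beta_k + sigma * rng.standard_normal(nk)
        sources.append((Xk, yk, beta_k))
    return (X0, y0, beta0), sources
```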
Figure <ref> depicts the mean squared prediction errors of all models with different simulation settings.
More specifically, row A shows the results under different dimensionalities p.
We fix n_0=100 and h=5.
First, our proposed 𝒜-UTrans-Lasso and 𝒜-UTrans-SCAD outperform all the others.
Second, the naive-Lasso model fluctuates with the highest error no matter how K increases, since K controls the number of source data and naive-Lasso fits the target data only.
Finally, as p increases, the MSPE of naive-Lasso increases while the other models stabilize the MSPEs.
Row B shows the MSPEs of all models with different target sizes n_0.
We fix p=500 and h=5.
First, our proposed 𝒜-UTrans algorithms have the best performances even with small sample sizes.
This evidence shows the benefit of transfer learning when the size of target data is small.
Second, the MSPEs of all models decrease as n_0 increases while 𝒜-UTrans-SCAD attains the lowest error.
Row C illustrates the MSPEs of all models with various h.
We fix n_0=100 and p=500.
As the level h increases, prediction errors of all transfer learning models with small K increase but they fluctuate as K increases.
Figure <ref> shows the averaged ℓ_2 estimation errors of the four methods.
More specifically, our 𝒜-UTrans algorithms obtain much lower errors than the others.
As K increases, the errors of 𝒜-UTrans-Lasso and 𝒜-UTrans-SCAD drop dramatically.
This further shows that our algorithms have lower errors than the two-step 𝒜-Trans-GLM.
The condition for improving upon the target-only model in <cit.> and <cit.> allows h to be as large as √(n_0/log p).
In other words, their upper bound improves on the naive Lasso under this condition.
This condition is satisfied in most of our simulation settings, so their theoretical improvement is guaranteed.
Nevertheless, our 𝒜-UTrans still outperforms 𝒜-Trans-GLM.
Overall, this simulation study presents that our proposed 𝒜-UTrans maintains relatively low prediction errors in all settings.
Particularly, 𝒜-UTrans-SCAD outperforms all the others with relatively lower errors.
§.§ Simulation with unknown 𝒜
In subsection <ref>, we consider the cases when the values of h are relatively small.
Here, we consider the cases with relatively large h and examine the effectiveness of the source detection algorithms.
We fix p=500, K'=10, and the source data sizes n_k=200 for k∈𝒜.
The target data are simulated in the same way as subsection <ref>.
For the k-th source data, each sample is generated from a t-distribution with degrees of freedom 4 and the covariance Σ_jj'=0.5^|j-j'| for i=1,⋯,n_k and 1≤ j≠ j'≤ p.
Note that we violate the assumptions C1 and C2 to show the robustness of UTrans with different data distributions.
We let β_0=(-0.4_3, -0.5_3, 0.6_4, 0_490), β_k=β_0 if the k-th source data are transferable, and β_k=β_0+h ℛ_p otherwise.
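A corresponding sketch for the unknown-𝒜 study is given below; it uses SciPy's multivariate t design, and the standard normal noise is again our assumption rather than part of the stated setup.

```python
import numpy as np
from scipy.stats import multivariate_t

rng = np.random.default_rng(1)

def simulate_source_unknown_A(n_k=200, p=500, h=0.25, transferable=True):
    """One source dataset for the unknown-A study: multivariate-t design with
    df = 4 and AR(1) scale matrix; beta_k = beta_0 (+ h * Rademacher if the
    source is not transferable)."""
    beta0 = np.r_[-0.4 * np.ones(3), -0.5 * np.ones(3), 0.6 * np.ones(4),
                  np.zeros(p - 10)]
    idx = np.arange(p)
    Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    Xk = multivariate_t(loc=np.zeros(p), shape=Sigma, df=4).rvs(size=n_k,
                                                                random_state=rng)
    beta_k = beta0 if transferable else beta0 + h * rng.choice([-1.0, 1.0], size=p)
    yk = Xk @ beta_k + rng.standard_normal(n_k)
    return Xk, yk, beta_k
```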
Figure <ref> and Figure <ref> show the estimation and prediction errors from naive-Lasso, Trans-GLM, Trans-GLM*, UTrans, and UTrans*, respectively.
An algorithm marked with a star denotes its pooled version, i.e., it combines all the source data and the target data.
The x-axis ka represents the number of transferable source data.
The first row of Figure <ref> shows the estimation results with different target sizes and we fix h=0.25.
The second row demonstrates the results with different values of h and we fix n_0=100.
When n_0=50, our algorithm UTrans obtains much lower estimation errors than Trans-GLM, which demonstrates the benefit of using transfer learning in the target data with relatively small size.
As h increases, our UTrans keeps the lowest estimation errors among all algorithms in most settings, which shows the effectiveness of excluding nontransferable source data.
Overall, this study reveals that our proposed UTrans works better than existing algorithms in small target data and noisy source data.
Similar patterns for the prediction errors can be observed in Figure <ref>.
To further explore why UTrans performs better than Trans-GLM in source detection, we compare the source detection efficiency in Table <ref>.
We consider the simulation setting in Section <ref> with h=0.2 and K'=5, i.e., the total number of transferable and nontransferable sources is 5.
We repeat the experiment 100 times and numbers in the Table <ref> show the proportions that the corresponding source is selected.
We notice that both Trans-GLM and UTrans have high probabilities of selecting the transferable source data.
However, we also observe that Trans-GLM selects the nontransferable source data.
For example, Trans-GLM selects X_4 seven times when it should not have been selected.
This demonstrates that Trans-GLM over-selects the source data and explains that it may perform worse than UTrans due to the negative transfer.
§ INTERGENERATIONAL MOBILITY DATA
Historically, the American dream embodies the belief that the United States is the land of opportunity, where success is attainable for each person according to ability and effort.
Intergenerational mobility, which measures the relationship between the socio-economic status of parents and the status of their children, is one of the key measures of the American Dream <cit.>.
Low mobility leads to unrealised human potential and misallocation of resources.
For example, it is an important concept in measuring the inequality of opportunity in sociology.
In economics, its presence is likely to be associated with an inefficient allocation of resources.
Having a good knowledge of mobility is very important for social fairness, social stability, and economic efficiency.
In this paper, our goal is to investigate how transfer learning helps the prediction of intergenerational mobility and how transfer learning detects the transferable states in the US.
§.§ Data description
We use the county-level data collected from the national census data, the Opportunity Atlas, and Data Commons to illustrate our UTrans.
Intergenerational mobility is measured as the change in income percentile for the children of all parents at the 75th national income percentile when they are aged 26.
Furthermore, we restrict the analysis to states with more than 50 counties.
Among these, states with between 50 and 75 counties are treated as the target states, while those with more than 75 counties are treated as the source states.
We add two-way interactions among the predictors.
Overall, the processed data contain 1803 counties and 7875 predictors.
The states of interest (target states) include Alabama (AL-66), Arkansas (AR-64), California (CA-52), Florida (FL-65), Louisiana (LA-58), Minnesota (MN-69), New York (NY-61), Oklahoma (OK-60), Pennsylvania (PA-64), and Wisconsin (WI-68).
The source states include Georgia (GA-127), Illinois (IL-88), Indiana (IN-87), Iowa (IA-81), Kentucky (KY-98), Michigan (MI-76), Missouri (MO-87), North Carolina (NC-95), Ohio (OH-88), Tennessee (TN-86), Texas (TX-150), and Virginia (VA-113).
The number in brackets denotes the sample size, i.e., the number of counties.
§.§ Predictive analysis
We compare our UTrans to the following algorithms: Trans-GLM, Trans-GLM*, random forest (RF), RF*, XGBoost, XGBoost*, support vector machine (SVM), SVM*, UTrans, and UTrans*, where * denotes the pooled version, i.e., stacking both the source data and the target data.
We repeat our experiment 200 times and
evaluate these algorithms by the mean squared prediction error.
When applying these algorithms, we treat one state as the target data.
To make predictions, we randomly split 80% of the target data as training data and the remaining 20% as testing data.
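The evaluation protocol just described can be summarized by the following sketch; fit_fn is a placeholder for UTrans or any competitor, and the function names are ours rather than from an existing package.

```python
import numpy as np

def evaluate_target_state(X_target, y_target, fit_fn, n_rep=200, seed=0):
    """Repeat an 80/20 split of the target state's counties and average the
    mean squared prediction error; fit_fn(X_train, y_train) should return a
    predictor mapping new county covariates to predicted mobility."""
    rng = np.random.default_rng(seed)
    n = len(y_target)
    errors = []
    for _ in range(n_rep):
        perm = rng.permutation(n)
        n_train = int(0.8 * n)
        tr, te = perm[:n_train], perm[n_train:]
        predict = fit_fn(X_target[tr], y_target[tr])
        resid = y_target[te] - predict(X_target[te])
        errors.append(np.mean(resid ** 2))
    return float(np.mean(errors))
```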
Table <ref> shows the mean squared prediction errors for each target state.
For each target state, the algorithm with the best performance is highlighted in bold.
Notably, UTrans performs the best in six states (CA, LA, MN, NY, OK, and WI).
Compared to XGBoost and SVM and their pooled version, UTrans still maintains relatively low prediction errors.
Compared to RF and RF*, which have the lowest errors in 4 states, our UTrans is also more interpretable in terms of variable importance.
The coefficients in linear regression represent the strength and direction of the relationship. RF is generally more difficult to interpret than linear regression since the individual trees can interact in complex ways and the importance of each feature may not be easily discernible from the output.
Compared to the interpretable Trans-GLM, UTrans performs better than it in all the target states.
In short, this study shows the superior performance of UTrans in prediction while preserving model's interpretation.
§ DISCUSSION
In this paper, we study a unified approach for transfer learning problems in the context of high-dimensional linear regression models.
To the best of our knowledge, this is the first work on transfer learning that identifies both transferable variables and transferable source data;
it is also the first work that incorporates statistical inference into transfer learning for source detection.
In terms of future work, there are multiple research directions based on our framework.
First, our unified model can be extended to the nonlinear models, such as generalized linear models, survival models, etc.
The challenge could be the construction of the restricted strong convexity condition in the nonlinear models, which is essential for establishing the error bounds.
Second, it is interesting to conduct statistical inference since our unified model explicitly writes the contrasts β_k-β_0.
By testing β_k-β_0= 0, we test whether the k-th source data are transferable to the target data.
More broadly, this kind of testing problem also sheds light on comparing coefficients of different high-dimensional regression models in a unified framework.
However, the challenge is the high-dimensional nature of β_k-β_0.
That is, testing a high-dimensional subvector in the high-dimensional model remains a challenging problem.
§ APPENDIX
The appendix contains technical proofs in Appendix A and additional simulation results in Appendix B.
§.§ Appendix A: Technical proof
Let x_1,⋯,x_n be independent centered sub-exponential random variables, and let M=max_ix_i_ψ_1. Then, for every a=(a_1,⋯,a_n)^⊤∈ℝ^n and every t≥ 0, we have
P(‖∑_i=1^na_ix_i‖≥ t)≤ 2exp[-cmin(t^2/M^2 a_2^2,t/M a_∞)],
where c>0 is an absolute constant.
With the regularization function P_λ satisfying the conditions (i)–(v),
* For any w, we have λ L w_1≤ P_λ( w)+τ/2 w_2^2
* Let ℐ be the index set of the s^* largest elements of v in magnitude.
Suppose ξ>0 is such that ξ P_λ( v_ℐ)- P_λ( v_ℐ^c)≥0, then
ξ P_λ( v_ℐ)- P_λ( v_ℐ^c)≤λ L(ξ v_ℐ_1- v_ℐ^c_1).
Moreover, if β^* is s^*-sparse, then for any vector β such that ξ P_λ(β^*)-P_λ(β)>0 and ξ≥ 1, we have
ξ P_λ(β^*)-P_λ(β)≤λ L(ξ v_ℐ_1- v_ℐ^c_1)
where v=β-β^*.
4mm
Proof of Theorem <ref>
Denote n=n_𝒜+n_0.
First, it is not hard to derive
Σ=1/n X^⊤ X =1/n[ X^⊤_1 0 ⋯ ⋯ 0 0; 0 X^⊤_2 0 ⋯ 0 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 ⋯ ⋯ ⋯ X^⊤_K 0; X^⊤_1 X^⊤_2 ⋯ ⋯ X^⊤_K X^⊤_0 ][ X_1 0 ⋯ ⋯ 0 X_1; 0 X_2 0 ⋯ 0 X_2; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 ⋯ ⋯ ⋯ X_K X_K; 0 ⋯ ⋯ ⋯ 0 X_0 ]
=1/n[ n_1Σ_1 0 ⋯ ⋯ 0 n_1Σ_1; 0 n_2Σ_2 0 ⋯ 0 n_2Σ_2; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; 0 ⋯ ⋯ ⋯ n_KΣ_K n_KΣ_K; n_1Σ_1 ⋯ ⋯ ⋯ n_KΣ_K ∑_k∈𝒜∪{0}n_kΣ_k; ].
For any Δ=(Δ_1^⊤,⋯,Δ_K^⊤,Δ_0^⊤)^⊤, we have
Δ^⊤ΣΔ=[ n_1/nΔ_1^⊤Σ_1+n_1/nΔ_0^⊤Σ_1; ⋮; n_K/nΔ_K^⊤Σ_K+n_K/nΔ_0^⊤Σ_K; ∑_k∈𝒜 n_k/nΔ_k^⊤Σ_k+∑_k∈𝒜∪{0} n_k/nΔ_0^⊤Σ_k ]^⊤[ Δ_1; ⋮; Δ_K; Δ_0; ]
=∑_k∈𝒜n_k/n{Δ_k^⊤Σ_kΔ_k+2Δ_0^⊤Σ_kΔ_k+Δ_0^⊤Σ_kΔ_0 }+n_0/nΔ_0^⊤Σ_0Δ_0
=∑_k∈𝒜1/n‖ X_kΔ_k+ X_kΔ_0‖_2^2+n_0/nΔ_0^⊤Σ_0Δ_0
= ∑_k∈𝒜n_k/n(Δ_0+Δ_k)^⊤Σ_k(Δ_0+Δ_k)+n_0/nΔ_0^⊤Σ_0Δ_0
≥∑_k∈𝒜(v”‖Δ_k+Δ_0‖_2^2-τ√(n_klog p/n^2)‖Δ_k+Δ_0‖_1)+v'_0Δ_0_2^2-τ_0 √(n_0log p/n^2)Δ_0_1
with strictly positive constants v” and v'_0,
where the last inequality follows the RSC conditions on Σ_k and the assumption that n_k/n is bounded below by a constant, k∈𝒜∪{0}.
In the context of our model, we replace Δ by Δ=β-β.
Let v”=v(min_k n_k)/n, v'=v”/2, and v'_0=(2K+1)v', then we observe
v'Δ_2^2 =v'∑_k∈𝒜Δ_k^2+v'β_0-β_0^2
= v'∑_k∈𝒜Δ_k+Δ_0-Δ_0^2+v'Δ_0^2
≤ 2v'∑_k∈𝒜(Δ_k+Δ_0^2+Δ_0^2)+v'Δ_0^2
= v”∑_k∈𝒜Δ_k+Δ_0_2^2+v'_0Δ_0_2^2.
Let τ_k=τ for k∈𝒜 and τ_0=τ(K+1).
Then, we can also derive
τ∑_k∈𝒜√(n_klog p/n^2)Δ_k+Δ_0_1≤τ∑_k∈𝒜√(n_m log p/n^2)(Δ_k_1+Δ_0_1)
= τ√(n_mlog p/n^2)Δ_1+τ√(n_mlog p/n^2)(K-1) Δ_0_1
≤ τ√(n_mlog p/n^2)Δ_1+τ K √(n_mlog p/n^2)Δ_1=τ_0 √(n_mlog p/n^2)Δ_1
τ√(n_0log p/n^2)Δ_0_1≤τ_0√(n_0log p/n^2)Δ_1.
Finally, combining inequalities (<ref>), (<ref>), and (<ref>), we have
Δ^⊤ΣΔ≥ v'Δ_2^2-τ_0 (√(n_mlog p/n^2)+√(n_0log p/n^2))Δ_1 for Δ∈ℝ^p^* and Δ_1≥ 1.
According to Lemma 10 of <cit.>, the aforementioned inequality with Δ_1≥ 1 actually implies
Δ^⊤ΣΔ≥ v'Δ_2^2-τ_0 (√(n_mlog p/n^2)+√(n_0log p/n^2))Δ_1 for Δ∈ℝ^p^*
for some constants v'>0 and τ_0≥ 0.
▪
5mm
Proof of Theorem <ref>
First, the regularized loss function (<ref>) can be rewritten as
1/2β^⊤Σβ-1/n y^⊤ Xβ+P_λ(β).
Let Δ=β-β. The first-order condition implies that for any solution β in the interior of the constraint set, Σβ-1/n X^⊤ y +∇ P_λ(β)= 0 and therefore
Δ^⊤Σβ+⟨∇ P_λ(β)-1/n X^⊤ y, Δ⟩=0.
For simplicity, we use τ for τ_0.
The RSC condition on each Σ_k from Theorem <ref> implies
Δ^⊤ΣΔ≥ v'Δ_2^2-τ(√(n_mlog p/n^2)+√(n_0log p/n^2))Δ_1.
Subtracting (<ref>) from (<ref>), we have
-Δ^⊤Σβ-⟨∇ P_λ(β)-1/n X^⊤ y, Δ⟩≥ v'Δ_2^2-τ(√(n_mlog p/n^2)+√(n_0log p/n^2))Δ_1.
Since the function P_τ, λ( w)=P_λ( w)+τ/2 w_2^2 is convex <cit.>,
-⟨∇ P_λ(β), Δ⟩≤ P_λ(β)-P_λ(β)+τ/2Δ_2^2.
Combining (<ref>) and (<ref>), we have
v'Δ_2^2-τ(√(n_mlog p/n^2)+√(n_0log p/n^2))Δ_1
≤ -Δ^⊤Σβ+1/n X^⊤ y Δ+P_λ(β)-P_λ(β)+τ/2Δ_2^2
v'Δ_2^2-τ/2Δ_2^2 ≤ P_λ(β)-P_λ(β)+(‖Σβ-1/n X^⊤ y‖_∞)Δ_1
+τ(√(n_mlog p/n^2)+√(n_0log p/n^2))Δ_1
v'Δ_2^2-τ/2Δ_2^2
≤ P_λ(β)-P_λ(β)+{‖Σβ-1/n X^⊤ y‖_∞+τ(√(n_mlog p/n^2)+√(n_0log p/n^2))}Δ_1
Next, we only need to bound Σβ-1/n X^⊤ y_∞. Note that
‖Σβ-1/n X^⊤ y‖_∞=‖Σβ-1/n X^⊤( Xβ+ϵ)‖_∞
= ‖1/n X^⊤ϵ‖_∞=‖2/n∑_k∈𝒜 X_k^⊤ϵ_k+1/n X_0^⊤ϵ_0 ‖_∞
≤ ‖2/n∑_k∈𝒜 X_k^⊤ϵ_k‖_∞+‖1/n X_0^⊤ϵ_0 ‖_∞
≤ c_1 √(n_𝒜log p/n^2)+c_2 √(n_0log p/n^2)
for some constants c_1 and c_2 with probability at least 1-4p^-1.
The last inequality follows the fact that the product of sub-Gaussian random variables is a sub-exponential random variable.
Therefore, x_ijϵ_i is sub-exponential according to condition C2.
Using Lemma <ref> with a=[1,⋯,1]^⊤, we have
P(2/n‖∑_k∈𝒜 X_k^⊤ϵ_k‖_∞>t)≤ 2p max_j≤ p, k∈𝒜exp{-c min(n^2 t^2/4M_k^2 n_𝒜,nt/2M_k)},
where M_k=max_1≤ i≤ n_kx_ki_ψ_1.
With log p=o(n_𝒜) and t=c_1 √(n_𝒜log p/n^2), we have
P(2/n‖∑_k∈𝒜 X_k^⊤ϵ_k‖_∞≤ c_1 √(n_𝒜log p /n^2))≥ 1-2p^-1
for some constant c_1.
Similarly, we have
P(1/n‖ X_0^⊤ϵ_0‖_∞≤ c_2 √(n_0log p/n^2))≥ 1-2p^-1
for some constant c_2.
The last inequality follows by combining the aforementioned two inequalities such that
‖Σβ-1/n X^⊤ y‖_∞≤ c_1 √(n_𝒜log p/n^2)+c_2 √(n_0log p/n^2)≍√(log p/n)
with probability at least 1-4p^-1.
Then
‖Σβ-1/n X^⊤ y‖_∞+τ(√(n_mlog p/n^2)+√(n_0log p/n^2)) ≤ c_1 √(log p/n),
for large enough c_1.
Let λ= 2c_1 √(log p/n), we have
v'Δ_2^2-τ/2Δ_2^2 ≤ P_λ(β)-P_λ(β)+λ/2Δ_1
≤ P_λ(β)-P_λ(β)+1/2P_λ(Δ)+τ/4 Δ_2^2
≤ P_λ(β)-P_λ(β)+1/2P_λ(β)+1/2 P_λ(β)+τ/4 Δ_2^2,
where the second inequality follows Lemma <ref>.
With the second inequality in Lemma <ref>, we finally have
2v'Δ_2^2-3τ/2Δ_2^2≤ 3 λ LΔ_ℐ_1-λ L Δ_ℐ^c_1.
Besides,
Δ_ℐ^c_1=∑_k∈𝒜[β_k-β_0-(β_k-β_0)]_ℐ^c_1+(β_0-β_0)_ℐ^c_1
≥ ∑_k∈𝒜(β_k-β_0)_ℐ^c_1-∑_k∈𝒜(β_k-β_0)_ℐ^c_1+(β_0-β_0)_ℐ^c_1
≥ ∑_k∈𝒜(β_k-β_0)_ℐ^c_1-Kh+(β_0-β_0)_ℐ^c_1,
which implies
-λ LΔ_ℐ^c_1≤ -λ L∑_k∈𝒜(β_k-β_0)_ℐ^c_1+λ LKh-λ L(β_0-β_0)_ℐ^c_1.
With Theorem <ref>, Eq. (<ref>), and Eq. (<ref>), we obtain
2v'Δ_2^2-3τ/2Δ_2^2≤ 3 λ LΔ_ℐ_1-λ L Δ_ℐ^c_1
≤ 3 λ LΔ_ℐ_1+λ LKh
≤ 3 λ L∑_k∈𝒜(Δ_k)_ℐ_1+3 λ LΔ_0_1+λ LKh
≤ 3λ Lv'/2τ√(n/log p)Δ_2^2+3λ L√(s)Δ_2+λ LKh.
Let a=2v'-3τ/2 for simplicity.
With the choice of λ=c_1 √(log p/n) and c'=3c_1 Lv'/(2τ), we have
(a-c')Δ_2^2≤ 3λ L√(s)Δ_2+ λ LKh
with a-c'> 0.
Let x=Δ_2, then we solve the quadratic inequality (a-c')x^2-3λ L√(s) x-λ LKh≤ 0 and we have
Δ_2≲λ√(s)+√(λ h).
Plugging in the choice of λ, we have
Δ_2
≲√(slog p/n)+(log p/n)^1/4√(h).
Since β_0-β_0 is a subset of Δ, this result also holds for β_0-β_0_2, i.e.,
β_0-β_0_2 ≲√(slog p/n)+(log p/n)^1/4√(h).
Immediately from the ℓ_2 error of β_0-β_0_2, we have
β_0-β_0_1≲ s√(log p/n)+(log p/n)^1/4√(sh).
▪
5mm
Proof of Theorem <ref>
For simplicity, we drop the subscript v in the testing data ( X_v, y_v).
Let ℒ_n(β)=1/2n y- Xβ_2^2 and Δ_0=β_0-β_0, then the prediction error is
⟨∇ℒ_n(β_0)-∇ℒ_n(β_0), Δ_0⟩=1/n X(β_0-β_0)_2^2=(β_0-β_0)^⊤Σ(β_0-β_0)=Δ_0^⊤ΣΔ_0.
Assume the RSC condition on the test data such that
Δ^⊤ΣΔ≥ vΔ_2^2-τ√(log p/n)Δ_1 for any Δ∈ℝ^p.
Similar to the proof of Theorem <ref>, we have
-⟨∇ P_λ(β_0),Δ_0⟩≤ P_λ(β_0)-P_λ(β_0)+τ/2Δ_0_2^2.
The first-order condition implies
⟨∇ℒ_n(β_0)+∇ P_λ(β_0),-Δ_0⟩≥ 0.
Therefore, the prediction error
⟨∇ℒ_n(β_0)-∇ℒ_n(β_0), Δ_0⟩≤⟨-∇ℒ_n(β_0)-∇ P_λ(β_0),Δ_0⟩
≤ P_λ(β_0)-P_λ(β_0)+τ/2Δ_0_2^2+∇ℒ_n(β_0)_∞Δ_0_1.
Let ℳ be the support set of β, i.e., ℳ={j: β_j≠ 0}.
Next, we bound P_λ(β_0)-P_λ(β_0) by
P_λ(β_0)-P_λ(β_0)=P_λ(β_0)-P_λ(β_0ℳ)-P_λ(β_0ℳ^c)
≤ P_λ(Δ_0ℳ)-P_λ(β_0ℳ^c)
= P_λ(Δ_0ℳ)-P_λ(Δ_0ℳ^c)
≤ λ L(Δ_0ℳ_1-Δ_0ℳ^c_1)
≤ λ LΔ_0_1.
Together with the result ∇ℒ_n(β_0)_∞≲λ (from the proof of Theorem <ref> or <cit.>), we have
⟨∇ℒ_n(β_0)-∇ℒ_n(β_0), Δ_0⟩
≲ λ L Δ_0_1+τ/2Δ_0_2^2+λΔ_0_1
≲ λ√(s)Δ_0_2+Δ_0_2^2.
The result follows by plugging in the ℓ_2 error bound in Theorem <ref> such that
1/n X(β_0-β_0)_2^2≲slog p/n+(log p/n)^3/4√(sh)+h√(log p/n).
▪
For Bernoulli random variables x_1,⋯,x_n with P(x_i=1)=p_i, we have
P(∑_i=1^n x_i≥n+1/2)≥∑_i p_i/n.
Proof of Theorem <ref>
Explicit forms of Υ_1^(k), Γ_1^(k), and Γ_2^(k).
Let w_k be the coefficient for the k-th source model.
The following forms can be found in Proposition 1 of <cit.>:
Γ_1^(0)=√(s log p/n_0)β_0_2, Γ_2^(0)=(β_0_2^2 ∨β_0_2) / √(n_0),
Υ_1^(k)=Ω_k ·[ w_k_2 𝕀(k ∈𝒜)+β_k_2 𝕀(k ∈𝒜^c)],
Γ_1^(k)=√(1/n_0)Ω_k ·[ w_k_2 𝕀(k ∈𝒜)+β_k_2 𝕀(k ∈𝒜^c)],
Γ_2^(k)=√(1/n_0)[(w_k_2^2 ∨ w_k_2) 𝕀(k ∈𝒜)+(β_k_2^2 ∨β_k_2) 𝕀(k ∈𝒜^c)],
where
Ω_k=
√(s log p/n_0), k=0,
√(s log p/(n_k+n_0))+(log p/(n_k+n_0))^1/4√(h)+√(s) h, k ∈𝒜,
h^'√(log p/(n_k+n_0))+√(s^'log p/(n_k+n_0)) W_k+(log p/(n_k+n_0))^1/4√(h^' W_k), k ∈𝒜^c,
and W_k=1 ∨β_k-β_0_2 ∨β_k- w_k_2.
Let CV_k^s=CV^s(β_k) and CV_0^s=CV^s(β_0) denote the s-th fold cross validation errors with β_k and β_0, respectively.
Note that they are averaged by the testing data size.
First, we have the following
P(𝒜≠𝒜)≤ P(⋃_k∈𝒜{CV_v(β_k,𝒮)< CV_v(β_0,𝒮)}⋃⋃_k∈𝒜^cCV_v(β_k,𝒮)≥ CV_v(β_0,𝒮))
≤ ∑_k∈𝒜P(CV_v(β_k,𝒮)< S+1/2)+∑_k∈𝒜^cP(CV_v(β_k,𝒮)≥S+1/2)
= ∑_k∈𝒜P(∑_s∈𝒮 I(CV_k^s≤ CV_0^s)<S+1/2)+∑_k∈𝒜^cP(∑_s∈𝒮 I(CV_k^s≤ CV_0^s)≥S+1/2)
≤ ∑_k∈𝒜{1-∑_s∈𝒮 P(CV_k^s≤ CV_0^s)/S}+∑_k∈𝒜^c2/S+1E(∑_s∈𝒮 I(CV_k^s≤ CV_0^s))
= 1/S∑_k∈𝒜∑_s∈𝒮{1- P(CV_k^s- CV_0^s≤ 0)}+2/S+1∑_k∈𝒜^c∑_s∈𝒮{1- P(CV_k^s- CV_0^s> 0)},
where the last inequality is due to Lemma <ref> and Markov's inequality.
Next, we need to prove, for any s,
* P(CV_k^s- CV_0^s≤ 0), k∈𝒜, holds with a high probability.
* P(CV_k^s- CV_0^s> 0), k∈𝒜^c, holds with a high probability.
Intuitively, for k∈𝒜, β_k is obtained with a larger sample size than β_0, so we expect a lower cross-validation error, i.e., CV_k^s- CV_0^s≤ 0.
For k∈𝒜^c, β_k is obtained with data deviating from the target data, so we expect a higher cross-validation error, i.e., CV_k^s- CV_0^s> 0.
For (i), using claims in the proof of Theorem 4 in <cit.>, we know
sup_s| CV_k^s-CV_0^s|≲ζ{Γ_1^(k)+Γ_2^(k)+h^2}→ 0,
for k∈𝒜∪{0},
holds with probability at least 1-2exp(-ζ^2).
For (ii), from the proof of Theorem 4 in <cit.>, we see
inf_s CV_k^s-CV_0^s>C_0(σ̂∨ 1)>0
holds with probability at least 1-2[exp(-C_0^-2)+exp(-ζ^2)]for C_0>0 and ζ>0.
Combining (i) and (ii), we have
P(𝒜≠𝒜) ≤1/S∑_k∈𝒜∑_s=1^S 2exp(-ζ^2)+2/S+1∑_k∈𝒜^c∑_s=1^S[2exp(-C_0^-2)+2exp(-ζ^2)]
=2|𝒜|exp(-ζ^2)+2S/S+1|𝒜^c|[2exp(-C_0^-2)+2exp(-ζ^2)].
For any δ>0, there exist C_0>0 and ζ>0 such that
2|𝒜|exp(-ζ^2)≤δ/2, 2S/S+1|𝒜^c|[2exp(-C_0^-2)+2exp(-ζ^2)]≤δ/2.
In summary, for any δ>0, there exist C_0(δ)>0 and ζ(δ)>0 such that
P(𝒜=𝒜)≥ 1-δ, as n_k→∞, k∈𝒜∪{0}.
Proof of Theorem <ref>
We decompose Û_n_k by
Û_n_k= 1/n_k∑_i ≠ i'^n_k{(y_i-μ_i)(y_i'-μ_i') x_ki^⊤ x_ki'}_I_Û_n_k+1/n_k∑_i ≠ i'^n_k{(μ_i-μ̂_∅ i)(μ_i'-μ̂_∅ i') x_ki^⊤ x_ki'}_II_Û_n_k
+ 2/n_k∑_i ≠ i'^n_k{(y_i-μ_i)(y_i'-μ̂_∅ i') x_ki^⊤ x_ki'}_III_Û_n_k.
Note that the size of the test is established under H_0.
We first examine II_Û_n_k: note that
II_Û_n_k/n_k=(β_0-β_0)^⊤[1/n_k∑_i=1^n_k z_i w_i^⊤][1/n_k∑_i=1^n_k z_i w_i^⊤](β_0-β_0)_II1-1/n_k^2∑_i=1^n_k (μ_i-μ̂_∅ i)^2 w_i^⊤ w_i_II2.
For II1, let Σ=1/n_k∑_i=1^n_k z_i w_i^⊤=1/n_k∑_i=1^n_k x_ki x_ki^⊤ and Σ=E( x_ki x_ki^⊤).
Then, it can be shown that
Σ-Σ_∞=τ=O_p(√(log p/n_k)).
Similar to A2 in <cit.>, we see that |II1|=O_p(β_0-β_0_2^2)=O_p(slog p/n), where n=n_0+n_k.
For II2, n_k II2≤μ-μ̂_∅_∞^21/n_k∑_i=1^n_k x_ki^⊤ x_ki=o_p(√(2Λ_W^ϵ)) .
Finally,
II_Û_n_k=n_k II1+n_k II2=O_p(n_k slog p/n)+o_p(√(2Λ_W^ϵ))=o_p(√(2Λ_W^ϵ))
when n_k slog p/n/√(2Λ_W^ϵ)=o(1).
We next examine III_Û_n_k: similar to <cit.>, we obtain |III1|=O_p[1/√(n n_k)√(slog p)(2Λ_W^ϵ)^1/4] and n_k III2=o_p(√(2Λ_W^ϵ)).
Finally,
III_Û_n_k=n_k III1+n_k III2=O_p[√(n_k slog p/n)(2Λ_W^ϵ)^1/4]+o_p(√(2Λ_W^ϵ))=o_p(√(2Λ_W^ϵ))
when n_k slog p/n/√(2Λ_W^ϵ)=o(1).
The remaining steps are the same as in <cit.>.
Proof of test power: Denote γ=β_0-β_0 and μ=E(Y|W,Z).
First for II_Û_n_k, note that n_k|II_1|=O_p(n_kγ_2^2) and n_k II_2≤μ-μ_∅_∞^21/n_k∑_i=1^n_k x_ki^⊤ x_ki.
To bound μ-μ_∅_∞^2, we notice that μ-μ_∅_∞^2≤ X_k(β_k-β_0)_∞^2+ X_k(β_0-β_0)_∞^2.
For any t>0 and positive constants D_5 and D_6, let ξ=D_5s^2log p/C_Σ^2n, then
P_1 =P( X_k(β_k-β_0)_∞>t)
=P( X_k(β_k-β_0)_∞>t, β_k-β_0_2^2≤ξ_n)+P( X_k(β_k-β_0)_∞>t, β_k-β_0_2^2>ξ_n)
≤ 2n_kexp(-t^2/2D_6^2σ^2β_k-β_0_2^2)
≤ 2n_kexp(-t^2C_Σ^2 n/2D_6^2σ^2D_5 s^2log p)
→ 0
where we use the fact that β_k-β_0_2^2=h_2^2≤ h^2≲ξ_n.
Thus, X_k(β_k-β_0)=o_p(1).
Similarly, let ξ_n=slog p/n, then
P_1 =P( X_kγ_∞>t)
=P( X_kγ_∞>t, γ_2^2≤ξ_n)+P( X_kγ_∞>t, γ_2^2>ξ_n)
≤ 2n_kexp(-t^2 n/2D_6^2σ^2slog p)
→ 0
where we use the fact that γ≲slog p/n+√(log p/n)C_Σh≲slog p/n <cit.>.
Thus, X_kγ=o_p(1).
Finally, μ-μ_∅_∞=o_p(1) and n_k II=o_p(√(2Λ_W^ϵ)), which implies II_Û_n_k=o_p(√(2Λ_W^ϵ)).
Similarly, we can also find III_Û_n_k=o_p(√(2Λ_W^ϵ)).
Using the notations of R̂_n_k in the proof of Theorem 1 in <cit.>, we can obtain II_R̂_n_k, III_R̂_n_k, IV_R̂_n_k, V_R̂_n_k, and VI_R̂_n_k are o_p(Λ_W^ϵ) as n_k→∞.
In summary, we conclude R̂_n_k/Λ_W^ϵp→1 as n_k→∞.
Finally, if n_kγ_2^2/√(2Λ_W^ϵ)=o(1), II_Û_n_k/2Λ_W^ϵ=III_Û_n_k/2Λ_W^ϵ=o(1), which implies the power is α; if √(n_k)γ_2/(2Λ_W^ϵ)^1/4→∞, II_Û_n_k/2Λ_W^ϵ=III_Û_n_k/2Λ_W^ϵ=o(1), which implies the power is 1 as n_k→∞.
▪
Proof of Theorem <ref>
Let E_k1 be the event of making the type I error and E_k2 be the event of making the type II error for X_k.
P(𝒜≠𝒜) ≤ P[(⋃_k∈𝒜E_k1)⋃(⋃_k∈𝒜^cE_k2)]
≤∑_k∈𝒜P(E_k1)+∑_k∈𝒜^cP(E_k2)
≤ |𝒜|α+|𝒜^c| max_k∈𝒜^c P(E_k2)
= Kα+K α_2
where α_2=max_k∈𝒜^c P(E_k2).
To have P(𝒜≠𝒜)<δ, we would need Kα<δ/2 and Kα_2<δ/2, where α_2∈[0,1-α].
▪
§.§ Appendix B: Additional simulation results
Setting 2: we consider design matrices of the source data from other distributions.
We fix the number of source data to K=10.
The sample sizes of the target data and all source data are 100.
The dimensionality p=500.
For the target data, x_0 and β_0 are simulated the same as those in subsection <ref>.
For the source data, β_k is simulated the same as the one in subsection <ref>, but the i-th data point in the k-th source data x_ki is simulated differently.
Specifically, we simulate the data points in the source data with three different distributions:
* x_ki∼𝒩( 0, Σ) with Σ_jj=1 and Σ_jj'=0.5.
* x_ki follows a t-distribution with 4 degrees of freedom.
* Data points in each source data are simulated from 0.5 𝒩(2√(5)/5, 1/5)+0.5 𝒩(-2√(5)/5, 1/5), a bimodal Gaussian mixture model.
Note that the mixture model has mean 0 and variance 1.
Note that the normality assumptions for the source data do not hold in the second and third cases.
Thus, these two cases examine the performance of UTrans under nonnormal designs.
Row A shows the results of the simulated data under a normal distribution with the covariance structure of compound symmetry.
Compared to the existing 𝒜-Trans-GLM, our proposed 𝒜-UTrans algorithms attain the lowest prediction errors with various h.
Particularly, 𝒜-UTrans-SCAD keeps the lowest errors all the time.
Row B presents the results when the source data are from the t-distribution and Row C illustrates the results when the source data are from a Gaussian mixture model.
𝒜-UTrans-Lasso performs similarly to 𝒜-Trans-GLM while 𝒜-UTrans-SCAD outperforms the others.
We observe that the prediction errors of 𝒜-UTrans-SCAD are always the lowest among the others.
In summary, Figure <ref> demonstrates that our proposed method, particularly 𝒜-UTrans-SCAD, outperforms the others when the data are simulated with more complicated structures.
Therefore, our method is more robust to the distributions of source data.
The ℓ_2 estimation errors in these settings show patterns similar to Figure <ref>.
Constructing Compacta from Posets
Adam Bartoš, Tristan Bice, Alessandro Vignati
arXiv:2307.01143 [math.GN]
We develop a simple method of constructing topological spaces from countable posets with finite levels, one which applies to all second countable 𝖳_1 compacta. This results in a duality amenable to building such spaces from finite building blocks, essentially an abstract analog of classical constructions defining compacta from progressively finer open covers.
§ INTRODUCTION
§.§ Background
Connections between topology and order theory have been central to a large body of mathematical research over the past century. The idea behind much of this is to study abstract order structures like Boolean algebras, distributive lattices and semilattices etc. by representing them as families of subsets of topological spaces. Stone was the first to initiate this line of research with his classic dualities in the 30's (see <cit.> and <cit.>) which have since been reformulated and extended in various ways by people such as Priestley <cit.>, Grätzer <cit.> and Celani–Gonzalez <cit.>, just to name a few. However, the spaces involved in these dualities typically have many compact open sets, which makes them quite different from the connected spaces more commonly considered in analysis.
In the opposite direction, other work has been motivated by the idea that topological spaces, particularly compacta, can be analysed from a more order theoretic perspective via lattices and semilattices consisting of open sets. This line of research was initiated by Wallman <cit.> and continued in various forms by people such as Shirota <cit.>, de Vries <cit.>, Hofmann–Lawson <cit.> and Jung–Sünderhauf <cit.>, with recent efforts to unify and extend these results also appearing in <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. In contrast to the work above, these dualities do encompass connected spaces. However, so far they have not found many applications in actually building such spaces, like those considered in continuum theory.
One reason for this is that the order structures involved in these dualities are not so easily built from finite substructures. In contrast, classical constructions of continua often proceed by building them up from finitary approximations, e.g. coming from simplicial complexes or finite open covers. For example, the famous pseudoarc (see <cit.> and <cit.>) is usually built from successively finer chains of open subsets in ℝ^2, each chain being `crooked' in the previous chain. Our work stems from the simple observation that the ambient space ℝ^2 here is essentially irrelevant, what really matters is just the poset arising from the inclusion relation between the links in the chains. More precisely, the covers of the space are completely determined by the levels of the poset and these, in turn, determine the points of the space. Indeed, points can be identified with their neighbourhood filters, which are nothing more than subsets of the poset `selecting' at least one element from each cover.
This leads us to consider a general class of posets formed from sequences of finite levels. From any such poset, we construct a space of selectors, resulting in a 𝖳_1 compactum on which the levels of the poset get represented as open covers. Moreover, we will see that all second countable 𝖳_1 compacta arise in this way. Thus, at least in theory, it should be possible to construct any such space from a sequence of finite sets defining the levels of such a poset. We further show that continuous functions between the resulting spaces can be completely described by certain relations between the original posets. In this way we obtain a duality of a somewhat different flavour to those described above, one which has more potential applications to building spaces like the pseudoarc from finitary approximations.
§.§ Outline
To motivate our construction we first embark on a detailed analysis of bases of 𝖳_1 compacta and the posets they form (when ordered by the usual inclusion relation ⊆). In particular, we examine special subsets of a poset as analogs of open covers, namely bands and more general caps. On the one hand, caps are always covers, by <ref>. Conversely, it is always possible to choose a basis of any second countable 𝖳_1 compactum so that covers are caps. We can also ensure that the basis forms an ω-poset where ranks and levels are always well-defined and finite. Further order-topological properties of the resulting ω-cap-bases are also explored in <ref>, e.g. showing how they are simply characterised in metric compacta as the bases whose diameters converge to zero (see <ref>).
In <ref>, we show how to reverse this process, representing any ω-poset ℙ as an ω-cap-basis ℙ_𝖲 of a suitably defined 𝖳_1 compactum, namely its spectrum 𝖲ℙ. Topological properties of 𝖲ℙ are thus determined by the order structure of ℙ. Most notably, 𝖲ℙ is Hausdorff precisely when ℙ is regular, as shown in <ref>. Subcompacta and subcontinua of 𝖲ℙ are also determined by special subsets of ℙ, as we show in <ref> and <ref>. With an eye to our primary motivating example of the pseudoarc, we even show how to characterise hereditary indecomposability of 𝖲ℙ via tangled refinements in ℙ which, modulo regularity, generalise the original crooked refinements of Bing.
Finally, in <ref>, we show how to encode continuous maps between spectra by certain relations between the posets we call refiners. A single continuous map can come from various different refiners and this flexibility yields homeomorphisms between spectra under some fairly general conditions explored in <ref>. To obtain a more precise equivalence of categories, we turn our attention to strong refiners in <ref> under an appropriate star-composition, thus yielding a combinatorial equivalent 𝐒 of the category 𝐊 of metrisable compacta.
§.§ Future Work
Naturally, the next step would be to construct the posets themselves (as well as the refiners between them) in a more combinatorial way. The basic idea would be to consider categories of finite graphs, much like in the work of Irwin–Solecki <cit.> and Debski–Tymchatyn <cit.>, except with more general relational morphisms. Sequences of such relations determine the levels of a graded ω-poset, which then yield 𝖳_1 compacta from the work presented here. In particular, Fraïssé sequences in appropriate categories should yield canonical constructions of well-known compacta like the pseudoarc and Lelek fan. Classical properties of these spaces relating to uniqueness and homogeneity could then be derived in a more canonical Fraïssé theoretic way, as we hope to demonstrate in future work.
§ BASES AS POSETS
Here we analyse bases of topological spaces, viewed as posets ordered by inclusion. In particular, we explore how to characterise covers order theoretically and how to construct well-behaved bases satisfying certain order theoretic properties.
§.§ Preliminaries
We begin with some basic terminology and notation. We view any ⊏ ⊆ A× B as a relation `from B to A'.
We call ⊏
* a function if every b∈ B is related to exactly one a∈ A.
* surjective if every a∈ A is related to at least one b∈ B.
* injective if, for every b∈ B, we have some a∈ A which is only related to b.
These notions of surjectivity and injectivity for relations generalise the usual notions for functions. The prefix `co' will be used to refer to the opposite/inverse relation ⊏^-1 = ⊐ ⊆ B× A (where b⊐ a means a⊏ b), e.g. we say ⊐ is co-injective to mean that ⊏ is injective. For example, one can note that every co-injective relation is automatically surjective, and the converse also holds for functions.
While this version of injectivity for relations may not be the most obvious generalisation from functions, it is the one we need for our work, being closely related to minimal covers – see <ref> below. It is also natural from a categorical point of view, as the monic morphisms in the category of relations between sets are exactly those that are injective in this sense. It also corresponds to injectivity of the image map C↦ C^⊐ on subsets C⊆ B defined below, i.e. ⊏ is injective precisely when C^⊐=D^⊐ implies C=D, for all C,D⊆ B.
The motivating situation we have in mind is where ⊏ is the inclusion relation ⊆ between covers A and B of a set X. In this case, ⊏ is surjective precisely when A refines B in the usual sense (we will also generalise refinement soon below). If B is even a minimal cover, then ⊏ will also be injective, as we now show.
Let us denote the power set of any set X by
𝖯X={A:A⊆ X}.
To say A⊆𝖯X covers X of course means X=⋃ A.
If A,B⊆𝖯X cover X and ⊏ = ⊆ on A× B then
B is minimal and ⊏ is surjective ⇒ ⊏ is injective.
If B is a minimal cover of X then every b∈ B must contain some x∈ X which is not in any other element of B, i.e. x∈ b∖⋃(B∖{b}). If A also covers X then we must have some a∈ A containing x. If ⊏ is also surjective then we have some c∈ B with x∈ a⊆ c and hence c=b. This shows that a is only related to b, which in turn shows that ⊏ is injective.
Again take a relation ⊏ ⊆ A× B. The preimage of any S⊆ A is given by
S^⊏=⊐[S]={b∈ B:∃ s∈ S (s⊏ b)}.
Likewise, the image of any T⊆ B is the preimage of the opposite relation ⊐, i.e.
T^⊐=⊏[T]={a∈ A:∃ t∈ T (a⊏ t)}.
We say S⊆ A refines T⊆ B if it is contained in its image, i.e. S⊆ T^⊐. Equivalently, S refines T when the restriction of ⊏ to S× T is surjective. The resulting refinement relation will also be denoted by ⊏, i.e. for any S⊆ A and T⊆ B,
S⊏ T ⇔ S⊆ T^⊐ ⇔ ∀ s∈ S ∃ t∈ T (s⊏ t).
Likewise, the corefinement relation will also be denoted by ⊐, i.e.
T⊐ S ⇔ T⊆ S^⊏ ⇔ ∀ t∈ T ∃ s∈ S (s⊏ t).
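Since these relational notions are used heavily below, a small finite illustration may help; the following Python sketch encodes a relation ⊏ ⊆ A×B as a set of pairs and checks images, preimages and refinement on a toy example of our own choosing.

```python
def image(rel, T):
    """{a : a below t for some t in T}, i.e. the image T^⊐ of T under rel ⊆ A×B."""
    return {a for (a, b) in rel if b in T}

def preimage(rel, S):
    """{b : s below b for some s in S}, i.e. the preimage S^⊏ of S under rel ⊆ A×B."""
    return {b for (a, b) in rel if a in S}

def refines(rel, S, T):
    """S refines T: every s in S lies below some t in T."""
    return all(any((s, t) in rel for t in T) for s in S)

# toy example: A = {a1, a2}, B = {b1, b2}, with a1 ⊏ b1 and a2 ⊏ b1
rel = {("a1", "b1"), ("a2", "b1")}
assert refines(rel, {"a1", "a2"}, {"b1", "b2"})   # S refines T
assert image(rel, {"b1"}) == {"a1", "a2"}
```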
Here again the motivating situation we have in mind is when ⊏ is the inclusion relation or, more generally, some partial order or even preorder (recall that a preorder is reflexive transitive relation, while a partial order is an antisymmetric preorder.)
Given a preorder ≤ on a set ℙ, we say p,q∈ℙ are comparable if p is below q or vice versa. Put another way, the comparability relation is the symmetrisation of the preorder relation ≤. We denote the comparability relation by ≶, i.e.
p≶ q ⇔ p≤ q or q≤ p.
The antichains[Note that these are more general than the strong antichains usually considered by set theorists (which are defined to be subsets A of ℙ in which no pair in A has a common lower bound in ℙ).] of ℙ are the pairwise incomparable subsets, which we denote by
𝖠ℙ={A⊆ℙ:∀ distinct q,r∈ A (q≸r)}.
If ≤ is a preorder on ℙ then so is the refinement relation on 𝖯ℙ. If ≤ is a partial order then so is the refinement relation when restricted to 𝖠ℙ.
If ≤ is reflexive on ℙ and Q⊆ℙ then q≤ q, for all q∈ Q, showing that Q≤ Q, i.e. ≤ is also reflexive on 𝖯ℙ. On the other hand, if Q≤ R≤ S then, for any q∈ Q, we have r∈ R with q≤ r, which in turn yields s∈ S with r≤ s. If ≤ is transitive on ℙ then q≤ s, showing that Q≤ S, i.e. ≤ is also transitive on 𝖯ℙ.
Finally, say ≤ is also antisymmetric on ℙ and Q≤ R≤ Q, for some antichains Q,R∈𝖠ℙ. For all q∈ Q, we thus have r∈ R with q≤ r, which in turn yields q'∈ Q with q≤ r≤ q'. Thus q=q', as Q is an antichain, and hence q=r, as ≤ is antisymmetric on ℙ. This shows that Q⊆ R, while R⊆ Q follows dually.
We will also need to compose relations, which we do in the usual way, i.e. if ⊏ ⊆ A× B and ⊑ ⊆ B× C then ⊏∘⊑ ⊆ A× C is defined by
a (⊏∘⊑) c ⇔ ∃ b∈ B (a⊏ b⊑ c).
Note this is consistent with the usual composition of functions as we are taking the domain of a function to correspond to the right coordinate not the left, i.e. a function f:B→ A from B to A is a subset of A× B (not B× A).
As in <cit.> (see also <cit.>), we say that B consolidates A, written A◂ B, when A refines B and, moreover, every b∈ B is a union of elements of A, i.e.
A◂ B ⇔ A⊆ B^⊇ and ∀ b∈ B, b=⋃(b^⊇∩ A)=⋃{a∈ A:a⊆ b}.
Take A,B,C⊆𝖯X with ⊏ ⊆ A× B and ⊑ ⊆ B× C defined to be restrictions of the inclusion relation ⊆ on 𝖯X. For any a∈ A and c∈ C,
a(⊏∘⊑) c ⇒ a⊆ c.
The converse also holds when A is a minimal cover and A◂ B◂ C.
Certainly a⊆ b⊆ c implies a⊆ c. Conversely, say a⊆ c and A is a minimal cover so we have x∈ a∖⋃(A∖{a}). If c=⋃ c^⊒ then we have b∈ B with x∈ b⊆ c. If b=⋃ b^⊐ too then we have a'∈ A with x∈ a'⊆ b and hence a=a', i.e. a⊆ b⊆ c and hence a(⊏∘⊑) c.
§.§ Bands and Caps
Let us denote the finite subsets of a set X by
𝖥X={F⊆ X:|F|<∞}.
The following special subsets of our poset ℙ form the key order theoretic analogs of open covers that are fundamental to our work.
Take a poset (ℙ,≤).
* We call B∈𝖥ℙ a band if each p∈ℙ is comparable to some b∈ B.
* We call C∈𝖯ℙ a cap if C is refined by some band.
There is also the related notion of a cutset from <cit.>, which is a subset C of ℙ overlapping every maximal chain in ℙ. Put another way, these are precisely the transversals of maximal cliques of the comparability graph (ℙ,≶), as studied in <cit.>. Similarly, bands are the finite dominating subsets of the comparability graph. By Kuratowski–Zorn, every element of a poset is contained in a maximal chain and hence every finite cutset is a band. However, the converse can fail, e.g. if ℙ={a,b,c,d} with < ={(a,c),(b,c),(b,d)} then {a,d} is a band but not a cutset, as it fails to overlap the maximal chain {b,c}. In graded ω-posets, though, every level is a cutset, so in this case every band and hence every cap is at least refined by a finite cutset, thanks to <ref> below.
We denote the bands and caps of ℙ by
𝖡ℙ ={B∈𝖥ℙ:ℙ=B^≶},   (Bands)
𝖢ℙ ={C∈𝖯ℙ:∃ B∈𝖡ℙ (B≤ C)}.   (Caps)
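On a finite poset both notions can be decided by brute force straight from the definitions; the following Python sketch (all names are ours) represents the order as a predicate and simply searches over subsets.

```python
from itertools import chain, combinations

def subsets(P):
    P = list(P)
    return chain.from_iterable(combinations(P, r) for r in range(len(P) + 1))

def is_band(P, leq, B):
    """Every p in P is comparable to some member of B."""
    return all(any(leq(p, b) or leq(b, p) for b in B) for p in P)

def is_cap(P, leq, C):
    """Some band of P refines C (brute force over all subsets of P)."""
    return any(is_band(P, leq, B) and all(any(leq(b, c) for c in C) for b in B)
               for B in subsets(P))

# A four-element poset: one top element t above three pairwise incomparable atoms.
P = ['t', 'a', 'b', 'c']
leq = lambda p, q: p == q or q == 't'
print(is_band(P, leq, ['t']))            # True: everything is comparable to the top
print(is_cap(P, leq, ['a', 'b', 'c']))   # True: the band of atoms refines it
print(is_cap(P, leq, ['a', 'b']))        # False: no band avoids both 'c' and 't'
```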
The primary example we have in mind is when ℙ is a basis of some topological space X ordered by inclusion ⊆. In this case, caps are meant to correspond to covers of the space X. More precisely, we have the following.
If ℙ is a basis of non-empty open sets of some 𝖳_1 topological space X ordered by inclusion (i.e. ≤ = ⊆) then every cap covers X, i.e.
C∈𝖢ℙ ⇒ ⋃ C=X.
Note that if B refines C and ⋃ B=X then ⋃ C=X. Thus it is enough to show that ⋃ B=X whenever B is a band. Take a band B and suppose that we have x∈ X∖⋃ B. For each b∈ B, let x_b be a point in b. As X is 𝖳_1, c=X∖{x_b:b∈ B} is an open set containing x. As ℙ is a basis, there is d∈ℙ such that x∈ d⊆ c. For each b∈ B, note x_b∈ b∖ d so b⊈ d while x∈ d∖ b so d⊈ b. This shows that B is not a band, a contradiction.
The converse of (<ref>), however, can fail. We can even show that there is no way to identify the covers of a space purely from the inclusion order on an arbitrary basis. Indeed, in the following two examples we have bases of different compact Hausdorff spaces which are isomorphic as posets but have different covers. Specifically, the bases are both isomorphic to the unique countable atomless pseudo-Boolean algebra (the atoms of a poset ℙ are its minimal elements and ℙ is atomless if it has no atoms, while a pseudo-Boolean algebra is a poset ℙ formed from a Boolean algebra 𝔹 minus its bottom element 0, i.e. ℙ=𝔹∖{0}).
The interval X=[0,1] in its usual topology has a basis ℙ consisting of non-empty regular open sets which are unions of finitely many intervals with rational endpoints (note regularity disqualifies sets like (0,1/2) and (1/4,1/2)∪(1/2,3/4), only the interior of their closures [0,1/2) and (1/4,3/4) lie in ℙ). One immediately sees that ℙ is then a countable atomless pseudo-Boolean algebra with respect to the inclusion ordering. We also see that p,q∈ℙ are disjoint precisely when they have no lower bound in ℙ, and no such p and q cover X.
The Cantor space X={0,1}^ω has a basis ℙ consisting of all non-empty clopen sets. Again ℙ is a countable atomless pseudo-Boolean algebra and p,q∈ℙ are disjoint precisely when they have no lower bound in ℙ. However, this time there are many disjoint p,q∈ℙ that cover X.
In fact, if ℙ is the countable atomless pseudo-Boolean algebra then its bands and caps are all trivial in that they must contain the top element. This poset does, however, have a subposet isomorphic to the full countable binary tree 2^<ω, which is still isomorphic to a basis of the Cantor space (but not the unit interval anymore). In this case, caps of 2^<ω do indeed correctly identify the covers of the Cantor space. This suggests that we might be able to ensure covers of other spaces are also caps by choosing the basis more carefully. In other words, we might be able to find `cap-bases' or even `band-bases' in the following sense.
We call a basis ℙ of a topological space X a
* band-basis if 𝖡ℙ={B∈𝖥ℙ:X=⋃ B}.
* cap-basis if 𝖢ℙ={C∈𝖯ℙ:X=⋃ C}.
Note every element of a cap-basis or band-basis ℙ of a non-empty space X must also be non-empty – otherwise ∅ would be a minimum of ℙ and hence a band of ℙ which does not cover X, contradicting the definition. Further observe that, as every cap contains a finite subcap, every space with a cap-basis is automatically compact. And every band-basis of a compact space is a cap-basis, as every cover has a finite subcover which is then a band and hence a cap. Also, to verify that a basis of non-empty open sets of a 𝖳_1 space is a band/cap-basis, it suffices to show that covers are bands/caps, as the converse follows from (<ref>).
Every second countable compact 𝖳_1 space has a cap-basis.
To start with, take any countable basis B of a compact 𝖳_1 space X and let (C_n)_n∈ω enumerate all finite minimal covers of X from B. Recursively define (n_k)_k∈ω as follows. Let n_0 be arbitrary. If n_k has been defined then note that, for any x∈ X, we have p∈ C_n_k and q∈ C_k with x∈ p∩ q. As B is a basis, we thus have b∈ B with x∈ b⊆ p∩ q. By compactness, X has a finite minimal cover of such b's. This means we have n_k+1∈ω such that C_n_k+1 refines both C_n_k and C_k.
Set B_k=C_n_k and ℙ=⋃_k∈ωB_k. First note that ℙ is still a basis for X. Indeed, if x∈ b∈ B then, as X is 𝖳_1, we can cover X∖ b with elements of B avoiding x. Compactness then yields a finite minimal subcover, i.e. we have some k∈ω with b∈ C_k and x∉⋃(C_k∖{b}). Taking c∈ B_k+1 with x∈ c, it follows that c⊆ b, as B_k+1 refines C_k and b is the only element of C_k containing x. In particular, we have found c∈ℙ with x∈ c⊆ b, showing that ℙ is a basis for X.
By definition, B_k+1 refines B_k. We claim B_k also corefines B_k+1, i.e. B_k⊆ B_k+1^⊆. Indeed, as B_k is a minimal cover, for every p∈ B_k, we have x∈ p∖⋃(B_k∖{p}). Taking q∈ B_k+1 with x∈ q, we see that q⊆ p, as B_k+1 refines B_k and no other element of B_k contains x. This proves the claim and hence each B_k is a band of ℙ. As every cover of X from B (and, in particular, ℙ) is refined by some B_k, it follows that every cover of X from ℙ is a cap of ℙ, i.e. ℙ is a cap-basis.
Note the cap-bases in the above proof are Noetherian, which have also been studied independently (see e.g. <cit.>). In general, we call a poset ℙ Noetherian if every subset of ℙ has a maximal element or, equivalently, if ℙ has no strictly increasing sequences. Put another way, this is saying that > (where a>b means a≥ b≠ a) is well-founded in the sense of <cit.>. Like in <cit.>, we then recursively define the rank 𝗋(p) of any p∈ℙ as the ordinal given by
𝗋(p)=sup_q>p(𝗋(q)+1).
So maximal elements of ℙ have rank 0, maximal elements among the remaining subset have rank 1, etc.. For any ordinal α, we denote the α^th cone of ℙ by
ℙ^α={p∈ℙ:𝗋(p)≤α}.
The atoms of the α^th cone form the α^th level of ℙ, denoted by
ℙ_α={p∈ℙ^α:p^>∩ℙ^α=∅}.
Note 𝗋^-1{α}⊆ℙ_α, i.e. the α^th level contains all elements of rank α. But this inclusion can be strict, i.e. the α^th level can also contain elements of smaller rank (e.g. the α^th level always contains all atoms of ℙ of rank smaller than α).
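For a finite poset, ranks, cones and levels can be computed directly from these definitions; a short Python sketch (names are ours), which also illustrates how an atom persists in every later level:

```python
# Rank, cones and levels of a finite poset, following the definitions above.

def rank(P, lt):
    r = {}
    def compute(p):
        if p not in r:
            above = [q for q in P if lt(p, q)]
            r[p] = 1 + max((compute(q) for q in above), default=-1)
        return r[p]
    for p in P:
        compute(p)
    return r

def level(P, lt, n):
    r = rank(P, lt)
    cone = [p for p in P if r[p] <= n]                           # the n-th cone
    return [p for p in cone if not any(lt(q, p) for q in cone)]  # its atoms

# A chain c < b < a together with an extra atom x < a.
P = ['a', 'b', 'c', 'x']
below = {('b', 'a'), ('c', 'b'), ('c', 'a'), ('x', 'a')}
lt = lambda p, q: (p, q) in below
print(rank(P, lt))       # {'a': 0, 'b': 1, 'c': 2, 'x': 1}
print(level(P, lt, 0))   # ['a']
print(level(P, lt, 1))   # ['b', 'x']
print(level(P, lt, 2))   # ['c', 'x']  (the atom x persists in later levels)
```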
If a level of a Noetherian poset has finitely many elements then it is immediately seen to be a band. Each finite level of a Noetherian basis of non-empty sets of a 𝖳_1 space must then cover the space, by <ref>. In fact, even infinite levels of finite depth must also be covers.
If ℙ is a Noetherian basis for a 𝖳_1 space X then each (possibly infinite) level of finite depth covers the space, i.e. X=⋃ℙ_n, for all n∈ω.
Every x∈ X lies in some p∈ℙ, as ℙ is a basis. As ℙ is Noetherian, we then have p_0∈ℙ_0 with p⊆ p_0 and hence x∈ p_0 too. This shows that ℙ_0 covers X. Now say ℙ_n covers X. This means any x∈ X lies in some p∈ℙ_n. If p={x} then p is an atom of ℙ and hence p∈ℙ_n+1 too. Otherwise, we have y∈ p∖{x} and hence p∖{y} is an open neighbourhood of x, as X is 𝖳_1. Then we have q∈ℙ with x∈ q⊆ p∖{y}, as ℙ is a basis, necessarily with 𝗋(q)>n, as q<p∈ℙ_n. Thus we have r∈ℙ_n+1 with x∈ q⊆ r, showing that ℙ_n+1 also covers X. By induction, ℙ_n thus covers X, for all n∈ω.
§.§ Omega-Posets
We call a poset ℙ an ω-poset if every principal filter p^≤ is finite and the number of principal filters of size n is also finite, for any n∈ω. Equivalently, an ω-poset is a Noetherian poset in which both the rank of each element of ℙ and the size of each level (or cone) of ℙ are finite. For example, taking any ω-tree in the sense of <cit.> and replacing ≤ with ≥ yields an ω-poset.
The nice thing about ω-posets is that their levels determine the caps, specifically caps are precisely the subsets refined by some level. Put another way, the levels are coinitial with respect to refinement within the family of all caps (and even bands).
If ℙ is an ω-poset then its levels (ℙ_n) are coinitial in 𝖡ℙ.
First note that each level ℙ_n is a band. Indeed, if 𝗋(p)≤ n then p must be above some minimal element of ℙ^n, i.e. some element of ℙ_n. On the other hand, if 𝗋(p)≥ n then p is below some element of rank n, which must again lie in ℙ_n.
Conversely, say B⊆ℙ is a band and let n=max_b∈ B𝗋(b) so B⊆ℙ^n. It follows that no atom of ℙ^n can be strictly above any element of B. Thus every element of ℙ_n must be below some element of B, as B is a band, i.e. ℙ_n≤ B.
In particular, the bands and caps of any ω-poset are (downwards) directed with respect to refinement. The fact that the levels here are finite is crucial, i.e. there are simple examples of Noetherian posets for which this fails.
Take a poset ℙ consisting of two incomparable q,r∈ℙ together with infinitely many incomparable elements which all lie below both q and r, i.e. ℙ∖{q,r}=q^>=r^> is infinite and s≰ t, for all distinct s,t∈ℙ∖{q,r}. This poset is Noetherian with two levels, although only the top level is finite. Note {q,r} is a band of ℙ but the only other bands of ℙ contain at least one element of both {q,r} and ℙ∖{q,r}, while the caps of ℙ are precisely those subsets containing q and/or r. In particular, no cap refines both the singleton caps {q} and {r}.
Another simple observation about caps in ω-posets is the following.
If ℙ is an ω-poset then no infinite cap is an antichain, i.e.
𝖠ℙ∩𝖢ℙ⊆𝖥ℙ.
If C is an infinite cap then B≤ C, for some band B. In particular, B is finite so we must have c∈ C with 𝗋(c)>max_b∈ B𝗋(b). As B is a band, we then have b∈ B∩ c^<. As B≤ C, we then have c'∈ C with c<b≤ c', showing that C is not an antichain.
We are particularly interested in ω-posets arising from bases.
A (band/cap-)basis that is also an ω-poset (w.r.t. inclusion ⊆) will be called an ω-(band/cap-)basis.
The proof of <ref> shows that every second countable 𝖳_1 compactum has an ω-cap-basis. Further note that if the space there is Hausdorff then it is metrisable. In compact metric spaces, we can actually characterise ω-cap-bases as precisely the countable bases whose diameters converge to zero.
If X is a compact metric space with a countable basis ℙ of non-empty open sets then, for any enumeration (p_n) of ℙ,
ℙ is an ω-cap-basis ⇔ diam(p_n)→0.
For ε∈(0,1), let ℙ_ε={p∈ℙ:diam(p)<ε}, so what we want to show is
ℙ is an ω-cap-basis ⇔ ℙ∖ℙ_ε is finite, for all ε>0.
First say ℙ∖ℙ_ε is infinite, for some ε>0. Assuming ℙ is an ω-poset (otherwise we are already done), this means that every level of ℙ contains a set with diameter at least ε. By <ref>, the same is true of all caps. This means that the cover ℙ_ε of X can not be a cap and hence ℙ is not a cap-basis.
Conversely, say ℙ∖ℙ_ε is finite, for all ε>0. As p⊆ q implies diam(p)≤diam(q), ℙ is Noetherian and the rank of each element is finite.
We claim that every level ℙ_n of ℙ covers X. To see this, take any x∈ X. If x is not isolated then we must have some sequence in ℙ containing x which is strictly decreasing with respect to inclusion. There are then sets in ℙ of arbitrary rank containing x, in particular we have some p_x∈ℙ with x∈ p_x and 𝗋(p_x)=n and hence p_x∈ℙ_n. On the other hand, if x is isolated then either {x}∈ℙ^n and hence we may take p_x={x}∈ℙ_n, or 𝗋({x})>n and hence we again have p_x∈ℙ with x∈ p_x and 𝗋(p_x)=n so p_x∈ℙ_n. Then {p_x:x∈ X}⊆ℙ_n covers X, as claimed.
If ℙ were not an ω-poset then ℙ would have some infinite level ℙ_n. By the claim just proved, ℙ_n would then cover X and hence have some finite subcover F⊆ℙ_n. By the Lebesgue number lemma (see <cit.>), any cover of a compact metric space is uniform, i.e. we have some ε>0 such that every subset of diameter at most ε is contained in some set in the cover. In particular, we have some ε>0 such that ℙ_ε refines F. As ℙ_n is infinite, we can take some p∈ℙ_n∖ F with diam(p)<ε. But then p⊆ f, for some f∈ F, contradicting the fact that elements in the same level are incomparable. Thus ℙ is indeed an ω-poset.
For any ε>0, we next claim that ℙ_ε contains a band. To see this first note that ℙ_ε is still a basis for X. In particular, ℙ_ε covers X and hence we have a finite subcover F⊆ℙ_ε. Again, we have some δ>0 such that ℙ_δ refines F, i.e. ℙ_δ≤ F. As ℙ∖ℙ_δ is finite, we also have finite E⊆ℙ_ε with ℙ∖ℙ_δ≥ E. Thus E∪ F is a band of ℙ contained in ℙ_ε, proving the claim.
Now take any cover C⊆ℙ of X. Again C is uniform and is thus refined by ℙ_ε, for some ε>0, and hence by some band B⊆ℙ_ε, i.e. C is a cap. Conversely, caps are covers, by (<ref>), so ℙ is indeed a cap-basis.
Note that if U is an up-set of an ω-poset ℙ, i.e. U^≤⊆ U, then U is again an ω-poset in the induced ordering ≤_U = ≤∩ (U× U). Indeed, U being an up-set implies that the rank within U of any element of U is the same as its rank within the original ω-poset ℙ. As long as U does not contain any extra atoms, the caps of U will also all come from caps of ℙ in a canonical way.
If ℙ is an ω-poset then, for all U⊆ℙ,
𝖢U⊆{C∩ U:C∈𝖢ℙ}.
Moreover, equality holds if U is an up-set whose atoms are all already atoms in ℙ.
First note that, for any finite F⊆ℙ, we can find a level ℙ_n whose overlap with F consists entirely of atoms. Indeed, as F is finite, we can find a cone ℙ^n overlapping f^>, for each f∈ F that is not an atom. This means non-atomic elements of F are never minimal in ℙ^n and hence ℙ_n is the required level. In particular, if F contains no atoms at all then it is disjoint from ℙ_n.
Now take any C∈𝖢U, which is refined by some B∈𝖡U. We thus have a level ℙ_n disjoint from B^<. As B is a band of U, for any u∈ℙ_n∩ U, we have some comparable b∈ B. As u∉ B^<, it follows that u≤ b. This shows that ℙ_n∩ U refines B and hence C. Thus ℙ_n refines C∪(ℙ_n∖ U), which is thus a cap of ℙ whose intersection with U is the original C. This proves (<ref>).
Conversely, take C∈𝖢ℙ, which is refined by some B∈𝖡ℙ. If U is an up-set and hence an ω-poset in its own right then have some level U_n of U such that B^<∩ U_n consists entirely of atoms of U. However, B^< does not contain any atoms of ℙ. If all atoms of U are already atoms of ℙ, this implies B^<∩ U_n=∅. As B is a band of ℙ, for each u∈ U_n, we have some comparable b∈ B. As u∉ B^<, this means u≤ b and hence u≤ c, for some c∈ C, which is necessarily also in U. This shows that U_n refines C∩ U, which is thus a cap of U, i.e. {C∩ U:C∈𝖢ℙ}⊆𝖢U.
The following result and its corollary show how to identify levels of an ω-poset.
The levels of a Noetherian poset ℙ in which each element has finite rank are the unique antichains (A_n)⊆𝖠ℙ covering ℙ such that, for all n∈ω, A_n+1∖ A_n refines A_n∖ A_n-1 (taking A_-1=∅) and A_n corefines A_n+1.
If A_n+1∖ A_n refines A_n∖ A_n-1 then, in particular, A_n+1 refines A_n and hence A_m refines A_n, for all m≥ n. From this we can already show that A_n⊆ℙ^n, for all n∈ω. Indeed, this follows immediately from the fact that
A_m∋ p<q∈ A_n ⇒ m>n.
To see this, just note if A_m∋ p<q∈ A_n then m≤ n would imply A_n≤ A_m and hence we would have p'∈ A_m with p<q≤ p', contradicting A_m∈𝖠ℙ.
Returning to the fact that A_n+1∖ A_n refines A_n∖ A_n-1, it now follows by induction that A_n∖ A_n-1⊆𝗋^-1{n}, for all n∈ω. Indeed, A_0⊆ℙ^0=𝗋^-1{0} is immediate from what we just showed. And if every p∈ A_n+1∖ A_n is (strictly) below some q∈ A_n∖ A_n-1⊆𝗋^-1{n} then n+1≥𝗋(p)>𝗋(q)=n and hence 𝗋(p)=n+1, showing that A_n+1∖ A_n⊆𝗋^-1{n+1}. If the (A_n) cover ℙ then so do the sets (A_n∖ A_n-1) and hence the inclusion must actually be an equality, i.e. for all n∈ω,
𝗋^-1{n}=A_n∖ A_n-1.
For all n∈ω, it follows that ℙ^n=⋃_k≤ nA_k and hence A_n⊆ℙ_n, by (<ref>).
If A_n also corefines A_n+1, for all n∈ω, then it again follows by induction that A_n=ℙ_n. Indeed, we already know ℙ_0=ℙ^0=A_0. And if ℙ^n≥ A_n≥ A_n+1 then A_n+1 must contain all atoms of ℙ^n∪ A_n+1=ℙ^n+1, i.e. ℙ_n+1⊆ A_n+1⊆ℙ_n+1.
Let us call an ω-poset ℙ weakly graded if consecutive levels share only atoms of ℙ. Equivalently, this is saying that every non-atomic p∈ℙ has a lower bound q with 𝗋(q)=𝗋(p)+1. Moreover, we immediately see that the following are equivalent.
* ℙ is atomless and weakly graded.
* Every p∈ℙ has a lower bound q with 𝗋(q)=𝗋(p)+1.
* The levels of ℙ are disjoint.
* ℙ_n=𝗋^-1{n}, for all n∈ω.
<ref> has the following corollary for weakly graded ω-posets.
If ℙ is a poset covered by finite antichains (A_n)_n∈ω⊆𝖠ℙ such that A_n+1 refines A_n, A_n corefines A_n+1 and A_n∩ A_n+1 contains only atoms of ℙ, for all n∈ω, then ℙ is a weakly graded ω-poset with levels ℙ_n=A_n, for all n∈ω.
As above, we obtain (<ref>) from the fact that each A_n+1 refines A_n, showing that ℙ is a Noetherian poset in which each element has finite rank. To show that ℙ_n=A_n and hence that ℙ is an ω-poset, it thus suffices to show that A_n+1∖ A_n refines A_n∖ A_n-1, for all n∈ω. But if A_n+1 refines A_n then, in particular, for every p∈ A_n+1∖ A_n, we have some q∈ A_n with p<q. If A_n∩ A_n-1 contains only atoms of ℙ then this implies that q∈ A_n∖ A_n-1 so we are done.
§.§ Level-Injectivity
Here we look at order properties related to minimal caps. First let us denote the order relation between levels m and n of an ω-poset ℙ by
≤^m_n=≤∩(ℙ_n×ℙ_m).
For any ω-poset ℙ, the following are equivalent.
* Each level ℙ_n is a minimal cap.
* ≤^m_n is injective whenever m≤ n.
* {n:≤^m_n is injective} is cofinal in ω, for each m∈ω.
<ref>⇒<ref> If ≤^m_n fails to be injective for some m≤ n then we have some p∈ℙ_m such that q^≤∩ℙ_m≠{p}, for all q∈ℙ_n. But then ℙ_n refines ℙ_m∖{p} and hence ℙ_m∖{p} is a cap, showing that ℙ_m is not a minimal cap.
<ref>⇒<ref> Immediate.
<ref>⇒<ref> If ℙ_m is not a minimal cap then it has a proper subcap C⊆ℙ_m, which is necessarily refined by ℙ_n, for some n≥ m, by <ref>. But then ℙ_k≤ C and hence ≤^m_k is not injective, for all k≥ n, showing that {n:≤^m_n is injective} is not cofinal in ω.
Accordingly, let us call an ω-poset ℙ satisfying any/all of the above conditions level-injective. When ℙ is atomless, we could also replace ≤^m_n above with <^m_n=≤∩(ℙ_n×ℙ_m), when m<n, which is a consequence of the following.
Every level-injective ω-poset ℙ is weakly graded.
If ℙ is an ω-poset that is not weakly graded then we have some p∈ℙ_n, where n>𝗋(p). Choosing n maximal with this property, it follows that p∉ℙ_n+1, even though all the lower bounds of p in ℙ_n+1 have rank n+1 and are thus below some element of rank n, necessarily different from p. Thus ℙ_n+1 refines ℙ_n∖{p}, showing that ℙ_n is not a minimal cap and hence ℙ is not level-injective.
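The injectivity condition between two levels is a simple finite check; a small Python sketch (names are ours), with one passing and one failing pair of levels:

```python
def level_injective(level_m, level_n, leq):
    """Every p in level_m has a witness q in level_n below p and below no other
       member of level_m."""
    return all(any(leq(q, p) and not any(leq(q, r) for r in level_m if r != p)
                   for q in level_n)
               for p in level_m)

leq1 = lambda p, q: p == q or q == 'a'                  # b, c, d all below a
print(level_injective(['a'], ['b', 'c', 'd'], leq1))    # True

leq2 = lambda p, q: p == q or p == 'c'                  # c lies below both a and b
print(level_injective(['a', 'b'], ['c'], leq2))         # False: c is below both
```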
With a little extra care, we can also choose the cap-basis in <ref> to be a level-injective ω-poset with levels among some prescribed family of covers.
Any countable family 𝒞 of minimal open covers of a compact 𝖳_1 space X that is coinitial (w.r.t. refinement) among all covers of X has a subfamily that forms the levels of a level-injective ω-cap-basis.
Like in the proof of <ref>, let (C_n)_n∈ω enumerate 𝒞 and define (n_k)_k∈ω as follows. Let n_0 be arbitrary. If n_k has been defined then note that, for any x∈ X, we have p∈ C_n_k and q∈ C_k with x∈ p∩ q. If p∩ q={x} then set b_x={x}. Otherwise we may take away a point of p∩ q (as X is 𝖳_1) to obtain open b_x with x∈ b_x⫋ p∩ q. As C_n_k is minimal, this implies that no subset of b_x lies in C_n_k. Now (b_x)_x∈ X is an open cover of X which must then have a refinement C_n_k+1, for some n_k+1. Thus C_n_k+1 refines both C_n_k and C_k, with the additional property that C_n_k+1∩ C_n_k consists only of singletons. As in the proof of <ref>, this implies that ℙ=⋃_k∈ωC_n_k is a cap-basis for X and that each C_n_k also corefines C_n_k+1. Moreover, each C_n_k is a minimal cover and hence a minimal cap in ℙ. Thus ℙ is a level-injective ω-poset with levels ℙ_k=C_n_k, by <ref>.
If we want the levels of an ω-poset to determine not just the caps but even the bands then we need a slight strengthening of level-injectivity. To describe this, let us introduce some more terminology and notation.
Take a poset (ℙ,≤). The intervals defined by any p,q∈ℙ will be denoted by
(p,q) =p^<∩ q^>={r∈ℙ:p<r<q},
[p,q] =p^≤∩ q^≥={r∈ℙ:p≤ r≤ q}.
We call p a predecessor of q (and q a successor of p) if p is a maximal element strictly below q. The resulting predecessor relation will be denoted by ⋖, i.e.
p⋖ q ⇔ p<q and (p,q)=∅ ⇔ p≠ q and [p,q]={p,q}.
We call ℙ predetermined if, for all p∈ℙ,
p^>≠∅ ⇒ ∃ q<p (q^<⊆ p^≤).   (Predetermined)
Equivalently, q<p and q^<⊆ p^≤ could be written just as q^<=p^≤. Also note this implies (q,p)=∅ and hence q⋖ p, i.e. q is necessarily a predecessor of p. In other words, ℙ is predetermined precisely when every non-atomic element of ℙ has a `predecessor which determines its upper bounds'.
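On finite examples the condition can be tested verbatim; a small Python sketch (names are ours):

```python
# Checks the predetermined condition on a finite poset: every non-minimal p
# must have some q < p whose strict upper bounds all lie at or above p.

def predetermined(P, lt, leq):
    for p in P:
        if any(lt(q, p) for q in P):                      # p is not minimal
            if not any(lt(q, p) and all(leq(p, r) for r in P if lt(q, r))
                       for q in P):
                return False
    return True

# A three-element chain c < b < a is predetermined (b witnesses a, c witnesses b),
# while the "V" poset b < a, b < a2 is not: a has no predecessor determining it.
chain_lt = lambda p, q: (p, q) in {('b', 'a'), ('c', 'b'), ('c', 'a')}
chain_le = lambda p, q: p == q or chain_lt(p, q)
vee_lt = lambda p, q: (p, q) in {('b', 'a'), ('b', 'a2')}
vee_le = lambda p, q: p == q or vee_lt(p, q)
print(predetermined(['a', 'b', 'c'], chain_lt, chain_le))   # True
print(predetermined(['a', 'a2', 'b'], vee_lt, vee_le))      # False
```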
Predetermined ω-posets can also be characterised as follows.
If ℙ is an ω-poset then the following are equivalent.
* ℙ is predetermined.
* Every non-minimal p∈ℙ is a band of q^<, for some q∈ℙ.
* For every p∈ℙ and n≥𝗋(p), we have q∈ℙ_n with q^≤=[q,p]∪ p^<.
* Every finite cap is a band, i.e. 𝖡ℙ=𝖢ℙ∩𝖥ℙ.
<ref>⇒<ref> If ℙ is predetermined then, for any non-minimal p∈ℙ, we have q⋖ p with q^<=p^≤. In particular, p is a band of q^<.
<ref>⇒<ref> Say every non-minimal p∈ℙ is a band of q^<, for some q∈ℙ. If ℙ is also Noetherian then [q,p)={q}∪(q,p) has a maximal element q', necessarily with q'^<=p^≤. This shows that ℙ is predetermined.
<ref>⇒<ref> If ℙ is a predetermined ω-poset then, for any p∈ℙ, we can recursively define q_k∈ℙ_k with q_k^≤=[q_k,p]∪ p^<, for all k≥𝗋(p), as follows. First set q_𝗋(p)=p. Now assume q_k has been defined. If q_k is an atom then we may simply set q_k+1=q_k. Otherwise, we have q_k+1 with q_k+1^<=q_k^≤ and hence 𝗋(q_k+1)=𝗋(q_k)+1. Thus q_k+1∈ℙ_k+1, as q_k∈ℙ_k, and
q_k+1^≤={q_k+1}∪ q_k^≤={q_k+1}∪[q_k,p]∪ p^<=[q_k+1,p]∪ p^<.
<ref>⇒<ref> Assume <ref> holds and take some finite cap C⊆ℙ. By <ref>, ℙ_n≤ C, for some n∈ω, and hence ℙ∖ℙ^n≤ℙ_n≤ C too. On the other hand, if p∈ℙ^n∖ℙ_n then <ref> yields q∈ℙ_n with q^≤=[q,p]∪ p^<. As ℙ_n≤ C, we have some c∈ C∩ q^≤⊆ p^≶, i.e. p∈ C^≶. This shows that C is a band.
<ref>⇒<ref> Assume ℙ is an ω-poset which is not predetermined, so we have some non-atomic p∈ℙ with q^<⊈ p^≤, for all q<p. Then we can take minimal n∈ω such that ℙ_n∩ p^>≠∅. For every q∈ℙ_n∩ p^>, pick q'∈ q^<∖ p^≤ and note that q'∉ p^≥ too, by the minimality of n. Thus C=(ℙ_n∖ p^>)∪{q':q∈ℙ_n∩ p^>} is a finite cap, as ℙ_n≤ C, but not a band, as p∉ C^≶.
Every predetermined ω-poset is level-injective.
Every level of an ω-poset ℙ is a minimal band, being both a band and an antichain. If ℙ is also predetermined then any smaller cap would also be a band, by <ref> above, and hence each level is even minimal among all caps.
If X is a 𝖳_1 compactum and ℙ⊆𝖯X then
ℙ is an ω-band-basis ⇔ ℙ is a predetermined ω-cap-basis.
If ℙ is an ω-cap-basis of X then, by <ref> above, ℙ is predetermined if and only if every finite cover is a band, i.e. if and only if ℙ is actually a band-basis.
Using this, we can improve on <ref> by showing that every second countable 𝖳_1 compactum even has an ω-band-basis (unlike the improvement in <ref>, however, we can not specify the potential levels of the ω-band-basis in advance).
First we need the following preliminary result.
For any basis B and finite open family C of a 𝖳_1 compactum X, there is a minimal cover D⊆ B of X and (x_d)_d∈ D⊆ X such that, for all d∈ D,
d=⋂{e∈ C∪ D:x_d∈ e} and d≠{x_d} ⇒ d∉ C.
For all F⊆ C, let us define
L_F ={x ∈ X: x^∈⊆ F}=X ∖⋃(C ∖ F).
X_F ={x ∈ X: x^∈ = F}=L_F∖⋃_G⫋ FL_G.
Note each L_F is closed and the (X_F)_F⊆ C are disjoint subsets covering X. We will recursively define further closed subsets K_F ⊆ X_F with minimal covers D_F⊆ B of K_F such that ⋃ D_F ⊆⋂ F, L_F⊆⋃⋃_G⊆ FD_G and
G⫋ F ⇒ K_F∩⋃ D_G=∅.
(Incidentally, it is quite possible for K_F to be empty for many F⊆ C, but this just means that D_F will also be empty.) If G⊈ F then, taking any g∈ G∖ F, we see that K_F∩⋃ D_G⊆ L_F∩ g=∅ too so (<ref>) can automatically be strengthened to
F≠ G ⇒ K_F∩⋃ D_G=∅.
Once we have constructed these sets we see that, whenever d∈ D_F, minimality means we have x_d∈ d∩ K_F∖⋃(D_F∖{d}). As K_F⊆ X∖⋃_G≠ FD_G, it follows that x_d∈ d∖⋃(D∖{d}), where D=⋃_G⊆ CD_G. As X=L_C⊆⋃⋃_G⊆ CD_G=⋃ D, this shows that D is a minimal cover of X and that x_d∈ e∈ D implies e=d. Moreover, x_d∈ e∈ C implies that e∈ F, as x_d∈ K_F⊆ X_F⊆ L_F, and hence d⊆⋃ D_F⊆⋂ F⊆ e. This proves the first part of (<ref>).
To perform the recursive construction, first let D_∅⊆ B be any minimal cover of K_∅=L_∅=X_∅=X∖⋃ C. Once K_G and D_G have been defined, for G⫋ F, we set K_F=L_F ∖⋃⋃_G⫋ FD_G⊆ L_F∖⋃⋃_G⫋ FL_G=X_F⊆⋂ F. By compactness, we then have a minimal cover D_F⊆ B of K_F with
⋃ D_F ⊆ ⋂ F ⊆ X∖⋃_G⫋ FX_G ⊆ X∖⋃_G⫋ FK_G.
As X is 𝖳_1, we can further ensure that d⫋⋂ F and hence d∉ C, for each d∈ D_F, unless K_F=⋂ F={x}, for some x∈ X, in which case the only option is D_F={{x}}. This ensures that the second part of (<ref>) also holds. Now just note
L_F ⊆ K_F∪⋃⋃_G⫋ FD_G ⊆ ⋃ D_F∪⋃⋃_G⫋ FD_G = ⋃⋃_G⊆ FD_G
so the recursive construction may continue.
Any countable basis of a 𝖳_1 compactum contains an ω-band-basis.
As in the proof of <ref>, let (C_n)_n∈ω enumerate all finite minimal covers of a 𝖳_1 compactum X coming from any given countable basis B. Recursively define finite minimal covers (B_n)_n∈ω as follows. Let D_k = C_k ∪⋃_j < k B_j.
By <ref> we have a minimal cover B_k ⊆ B, such that B_k ∩ D_k contains only singletons, as well as (x_b)_b∈ B_k⊆ X such that b=⋂{e∈ B_k∪ D_k:x_b∈ e}, for all b∈ B_k.
In other words, x_b∈ b⊆ e, for any e∈ B_k∪ D_k with x_b∈ e, so B_k refines C_k and B_j, for all j<k. As in the proof of <ref>, this implies ℙ=⋃_k∈ωB_k is a cap-basis and each B_k also corefines B_k+1. By construction, B_k+1∩ B_k contains only singletons, which are atoms in ℙ. Thus ℙ is an ω-poset with levels ℙ_k=B_k, by <ref>. Also, for every b∈ B_k, we have c∈ B_k+1 with x_b∈ c. Then c < a implies x_b∈ a ∈ B_j, for some j≤ k, and hence b≤ a. This shows that c^< ⊆ b^≤ and hence c^<=b^≤, as long as b is not an atom. Thus ℙ is also predetermined and hence an ω-band-basis, by <ref>.
§.§ Graded Posets
We call a Noetherian poset ℙ graded if the rank function maps intervals to intervals, i.e. for all p,q∈ℙ,
p<q ⇒ 𝗋[(p,q)]=(𝗋(q),𝗋(p)).
In particular, this means the rank function turns predecessors into successors, i.e.
p⋖ q ⇒ 𝗋(p)=𝗋(q)+1.
In fact, if every element of ℙ has finite rank then ℙ is graded precisely when this happens. This also makes it clear that every graded ω-poset is indeed weakly graded.
Hasse diagrams of atomless graded ω-posets can thus be viewed as Bratteli diagrams (see <cit.>) where the levels (ℙ_n) form the vertex sets and the edges come from the predecessor relation ⋖. Indeed, any Bratteli diagram with at most one edge between distinct vertices arises as the Hasse diagram of some atomless graded ω-poset. Whether one chooses to work with diagrams or posets is thus a matter of taste, although the diagram picture will be particularly instructive in future work when we construct graded ω-posets associated to interesting compacta (e.g. see <ref>).
Graded ω-posets are completely determined by the order relation between consecutive levels. As such, they are the strongest interpretation of what it means for a poset to be built from a sequence of finite levels. Naturally, we would like to construct bases of this special form. First we begin with some simple observations.
Let ℙ be a graded ω-poset.
* ℙ is level-injective if and only if ℙ is predetermined.
* The levels of ℙ are pairwise disjoint if and only if ℙ is atomless.
* If ℙ is a basis of a 𝖳_1 space, then every level ℙ_n consolidates ℙ_n+1.
The `if' part of (1) follows from <ref>. Conversely, suppose that ℙ is not predetermined, so we have non-atomic p∈ℙ_n such that q^<∖ p^≤≠∅, for every q∈ℙ_n+1∩ p^>. Since ℙ is graded, we then have r∈ℙ_n∩ q^<∖ p^≤. It follows that ℙ_n∖{p} is refined by ℙ_n+1, and so ℙ_n is not a minimal cap.
As ℙ is graded and hence weakly graded, (2) is immediate.
To prove (3), take b∈ℙ_n and let B = {c∈ℙ_n+1: c ⊆ b}. If we have x∈ b∖⋃ B, then we have u∈ℙ such that x∈ u⊆ b as ℙ is a basis. We may further assume that u⊈ c for every c∈ B as the space is 𝖳_1. Hence, u∈ℙ_m for some m > n. But since ℙ is graded, we get u⊆ v⊆ b for some v∈ℙ_n+1. Hence, x∈ v∈ B, which is a contradiction.
To ensure the cap-bases in <ref> are graded, we need the following.
Let (C_n) be a sequence of minimal covers of a set X with each C_n consolidating C_n+1 and C_n+1∩ C_n only containing singletons {{x}:x∈ X}. Further let ℙ=⋃_n∈ωC_n, considered as a poset with ≤=⊆. Then
* ℙ is a predetermined graded poset with n^th level ℙ_n=C_n, and
* if ℙ is a basis for a compact topology then ℙ is also an ω-cap-basis.
* First note C_n^≤=⋃_k≤ nC_k, for all n∈ω. Indeed, if we had c<d∈ C_k, for some k>n then, as C_k≤ C_n, we would have some c'∈ C_n with c<d≤ c', contradicting the minimality of C_n. In particular, as C_0=C_0^≤ is a minimal cover of X, it consists entirely of maximal elements of ℙ, i.e. elements of rank 0.
We claim that, for all n∈ω,
(C_n+1∖ C_n)^⋖ ⊆ C_n∖{{x}:x∈ X} ⊆ 𝗋^-1{n}.
For the first inclusion, take c∈(C_n+1∖ C_n)^⋖, which means we have d∈ C_n+1∖ C_n with d⋖ c. In particular, c∈ C_n+1^⋖ so we must have m≤ n with c∈ C_m. By minimality, we can choose some x∈ d∖⋃(C_n+1∖{d}). As each cover consolidates the next, we have c_m≥…≥ c_n+1 with c_m=c and x∈ c_k∈ C_k, for all k between m and n+1. By our choice of x, we must have c_n+1=d and hence c_n>d because d∈ C_n+1∖ C_n. In particular, c_n is not a singleton so other inequalities must be strict too, i.e. c=c_m>…>c_n+1=d. The only way we could have d⋖ c then is if m=n. This proves the first inclusion. The second now follows by induction – the n=0 case was observed above, while all successors of elements of C_n+1∖{{x}:x∈ X}⊆ C_n+1∖ C_n must lie in C_n∖{{x}:x∈ X} and hence have rank n, so all elements of C_n+1∖{{x}:x∈ X} have rank n+1.
In particular, each p∈ℙ has finite rank and all its successors p^⋖ have the same rank, proving that ℙ is graded. Also note that singletons persist as soon as they appear, i.e. if {x}∈ C_n then {x}∈ C_n+1, again because each cover consolidates the next. Thus each C_n consists precisely of the elements of rank n together with singletons (and hence minimal elements of ℙ) of smaller rank, i.e. C_n=ℙ_n. Finally, for any p∈ℙ we can again take x∈ p∖⋃(C_𝗋(p)∖{p}). If p is not minimal, we can then take q∈ C_𝗋(p)+1 with x∈ q<p and show that q^<=p^≤, which means that ℙ is predetermined.
* Now assume ℙ is also a basis for a compact topology. In particular, each minimal cover C_n must be finite and hence ℙ is an ω-poset. We claim that, moreover, every cover C⊆ℙ must be refined by some level C_n. Indeed, by compactness, we can replace C with a finite subset if necessary. As each level is a consolidation of the next, we can further replace each non-atomic element of C having smallest rank with elements in a level below. Continuing in this manner, we eventually obtain a new cover D refining the original cover C whose elements are all contained in a single level C_n. As C_n is a minimal cover, D must then be the entirety of C_n, proving the claim. As levels are caps, this shows that ℙ is a cap-basis.
Note for ℙ to be graded here, not just Noetherian, it is crucial that each cover is not only refined by the next cover but also consolidates it, as the following shows.
Let X=[0,1] and define
C_1 ={[0,3/4),(1/4,1]},
C_2 ={[0,2/3),(1/3,1]},
C_3 ={[0,1/2),(1/4,2/3),(1/3,3/4),(1/2,1]}.
The Hasse diagram of the resulting poset (C_1∪ C_2∪ C_3,⊆) looks like this:
[Hasse diagram: top level C_1={[0,3/4),(1/4,1]}; middle level C_2={[0,2/3),(1/3,1]}, with [0,2/3) below [0,3/4) and (1/3,1] below (1/4,1]; bottom level C_3={[0,1/2),(1/4,2/3),(1/3,3/4),(1/2,1]}, with [0,1/2) below [0,2/3), (1/4,2/3) below [0,2/3) and (1/4,1], (1/3,3/4) below (1/3,1] and [0,3/4), and (1/2,1] below (1/3,1].]
Note that C_3 refines C_2 which in turn refines C_1. However, C_3∋(1/4,2/3)⊆(1/4,1]∈ C_1 even though there is no element of C_2 in between, i.e. C_1∪ C_2∪ C_3 is not graded.
Using <ref> we can construct graded ω-band-bases.
Every second countable 𝖳_1 compactum has a graded ω-band-basis.
We modify the proof of <ref> so we can use <ref>. To start with, again take any countable basis B for 𝖳_1 compactum X and let (B_n)_n∈ω enumerate all finite covers of X from B. Recursively define another sequence of finite open covers (C_n) as follows. Let C_0={X}. If C_n has been defined then, for each x∈ X, let
d_x=⋂{a∈ B_n∪ C_n:x∈ a}.
As B_n and C_n are finite, so is D={d_x:x∈ X}. For each d∈ D, choose some x_d such that d=d_x_d and denote the set of all the other chosen points by
f_d=⋃_e∈ D∖{d}{x_e}.
We then have a minimal open cover refining both B_n and C_n given by
E={d∖ f_d:d∈ D}.
Also note that if y∈ c∈ C_n then y≠ x_d for any d≠ d_y (because y=x_d implies d_y=d_x_d=d) so y∈ d_y∖ f_d_y⊆ c. This shows that c=⋃(E∩ c^≥), for all c∈ C_n, i.e. C_n consolidates E. At this stage it is possible that there could be some non-singleton c∈ C_n∩ E. However, this can only happen when c is contained in some b∈ B_n and disjoint from all other subsets in (B_n∖{b})∪ C_n and hence E – otherwise we would have some d∈ D with d⫋ c and so certainly d∖ f_d⫋ c, while all other elements of E would avoid x_d∈ d⊆ c. For any non-singleton c∈ C_n∩ E, we can thus pick arbitrary distinct y_c,z_c∈ c and replace c with c∖{y_c} and c∖{z_c} without destroying the minimality of E. In other words, to ensure consecutive covers can only contain singletons, we define C_n+1 by
C_n+1=E∖ C_n∪⋃_c∈ E∩ C_n{c∖{y_c},c∖{z_c}}.
This completes the recursion and the poset ℙ=⋃_n∈ωC_n is then a predetermined graded ω-poset, by <ref>. As X is compact, every cover of X from ℙ is refined by B_n and hence C_n+1, for some n∈ω. As in the proof of <ref>, ℙ is then an ω-cap-basis and hence an ω-band-basis, by <ref>.
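The inner step of this construction, passing from two finite covers to a minimal common refinement whose members are cut down by chosen witness points, can be sketched in a few lines of Python. The helper below (all names are ours) ignores the extra step that splits repeated non-singleton members.

```python
# One refinement step in the spirit of the proof above, for covers of a finite
# set X: intersect all members through each point, pick one witness point per
# distinct intersection, then delete the other witnesses to force minimality.

def refine_step(X, B, C):
    members = list(B) + list(C)
    d = {x: frozenset.intersection(*[a for a in members if x in a]) for x in X}
    witness = {}
    for x in X:
        witness.setdefault(d[x], x)       # one witness x_d with d_x = d, per distinct d
    D = list(witness)
    return [e - {witness[g] for g in D if g != e} for e in D]

X = set(range(6))
B = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]
C = [frozenset({0, 1, 2, 3}), frozenset({2, 3, 4, 5})]
E = refine_step(X, B, C)
print(E)   # e.g. the minimal cover {0,1}, {2}, {3}, {4,5} (as frozensets)
print(all(any(e <= b for b in B) and any(e <= c for c in C) for e in E))  # refines both
print(set().union(*E) == X)                                               # still a cover
```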
Unlike <ref>, we can not choose the graded ω-band-bases above to lie within some basis given in advance. Indeed, the following result shows that most Hausdorff compacta have bases which do not contain any graded ω-basis.
As usual, we view any ordinal α as a topological space with respect to the interval topology, i.e. generated by the basis (β,γ)={δ:β<δ<γ}, for all β < γ≤α.
For any Hausdorff compactum X, the following are equivalent.
* X is homeomorphic to α, for some α<ω^2.
* Every basis for X contains a graded ω-(cap-)basis.
If X=α<ω^2 then any basis for X contains a basis ℙ such that each p∈ℙ is either a singleton or contains a unique non-zero limit ordinal ω(n+1) such that p⊆(ω n,ω(n+1)+1).
Taking a further subset if necessary, we can ensure that the neighbourhoods of any fixed non-zero limit ordinal ω n are linearly ordered and hence T_n={p∈ℙ:p⊆(ω n,ω(n+1)+1)} consists of atoms together with at most one decreasing sequence. In particular, each T_n is graded and hence ℙ={{0}}∪⋃_{ω(n+1)∈α}T_n is a graded ω-cap-basis.
Conversely, say X is not homeomorphic to any α<ω^2. We can further assume that X is second countable (otherwise X certainly could not have any ω-basis and we would be done). We claim that the non-isolated points of X have some limit point y∈ X. Indeed, X=Y∪ S, for (unique) perfect Y and countable scattered S. If Y≠∅ then just take any y∈ Y. If Y=∅ then X=S must be homeomorphic to some ordinal α>ω^2 and we can just take y to be (the point identified with) ω^2, which is the limit of (ω(n+1))_n∈ω. This proves the claim and it follows that y has a neighbourhood basis consisting of non-closed open sets – if O is a clopen neighbourhood of y just take any non-isolated z∈ O∖{y} and note that O∖{z} is still open but no longer closed. These neighbourhoods of y together with all open sets avoiding y thus form a basis B for X. As X is Hausdorff and hence regular, we can argue as in the proof of <ref> to obtain another basis ℙ⊆ B such that strict containment implies closed containment (just choose each b∋ x there so that cl(b)⊆⋂{c∈⋃_j≤ kC_n_j:x∈ c}), i.e.
p⫋ q ⇒ cl(p)⊆ q.
As each p∈ℙ containing y is not closed, p can never be the union of a finite subset of ℙ∖{p} (as p would then be the union of their closures too and hence itself closed). In particular, ℙ can not contain a graded ω-basis, as each level would then have to consolidate the next, by <ref>.
The following summarises <ref>, <ref>, and <ref>.
Every second countable 𝖳_1 compactum X has an ω-cap-basis ℙ.
Moreover, we can arrange any of the following (but not any two simultaneously).
* ℙ is level-injective and the levels ℙ_n are members of a given coinitial family of minimal open covers.
* ℙ is predetermined and its elements are members of a given countable basis.
* ℙ is predetermined and graded.
§.§ Additional Properties
Before moving on, let us examine some other simple order properties possessed by all cap-bases of 𝖳_1 spaces. Specifically, let us call a poset ℙ branching if no principal down-set p^> has a singleton band, i.e.
p<q ⇒ ∃ r<q (p≰ r and r≰ p).   (Branching)
In particular, this implies no p∈ℙ has a unique predecessor, so the Hasse diagram of ℙ does indeed branch as much as possible. This even characterises branching posets among ω-posets or, more generally, posets which only have finite intervals.
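On finite posets the branching condition is again a direct check; a small Python sketch (names are ours):

```python
# Checks the branching condition on a finite poset: whenever p < q there must
# be some r < q incomparable with p.

def branching(P, leq):
    lt = lambda a, b: leq(a, b) and a != b
    return all(any(lt(r, q) and not leq(p, r) and not leq(r, p) for r in P)
               for p in P for q in P if lt(p, q))

# The poset {top, a, b} with a, b < top branches; a three-element chain does not.
tri = lambda p, q: p == q or q == 'top'
chain = lambda p, q: (p, q) in {('c', 'b'), ('c', 'a'), ('b', 'a')} or p == q
print(branching(['top', 'a', 'b'], tri))    # True
print(branching(['a', 'b', 'c'], chain))    # False: b < a has no witness
```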
Any basis of non-empty open sets of a 𝖳_1 space is branching.
Take a basis ℙ of non-empty sets of a 𝖳_1 space X. For any p,q∈ℙ with p<q, we can take x∈ p and y∈ q∖ p. We then have some r∈ℙ with y∈ r⊆ q∖{x}. Note p≰ r, as x∈ p∖ r, and r≰ p, as y∈ r∖ p. This shows that ℙ is branching.
In particular, every poset arising in <ref> is branching. It is natural to wonder if this is the only extra restriction, i.e. does every branching predetermined ω-poset arise as a cap-basis of some (necessarily compact) 𝖳_1 space? In fact, this will even hold under a certain weaker assumption which we now describe.
First let us define the cap-order relation ≾ on 𝖯ℙ by
Q≾ R ⇔ ∀ F⊆ℙ (F∪ Q∈𝖢ℙ ⇒ F∪ R∈𝖢ℙ).   (Cap-Order)
Note it suffices to consider finite F here, as every cap has a finite subcap. Further note that ≾ is a preorder containing refinement as a subrelation. In particular, on singletons it contains the original order ≤. Let us call a poset ℙ cap-determined if it actually agrees with ≤ on singletons, i.e. for all p,q∈ℙ,
p≾ q ⇒ p≤ q.   (Cap-Determined)
More explicitly this means that, whenever p≰ q, we have some F⊆ℙ (which we can take to be finite) such that F∪{p} is a cap but F∪{q} is not.
Every cap-basis of a 𝖳_1 space is cap-determined.
Take a cap-basis ℙ of a 𝖳_1 space X. Whenever p≰ q, we have some x∈ p∖ q. As X is 𝖳_1 and ℙ is a basis, we can cover X∖ p with a subcollection F⊆ℙ whose elements all avoid x. Thus F∪{p} is a cover of X and hence a cap of ℙ. On the other hand, F∪{q} does not contain x so it can not be a cover of X and is thus not a cap of ℙ, by (<ref>). This shows that ℙ is cap-determined.
The relationship between these various notions can be summarised as follows.
If ℙ is an ω-poset then
ℙ is branching and predetermined ⇒ ℙ is cap-determined ⇒ ℙ is branching.
For the first implication, assume ℙ is predetermined and take any p∈ℙ. We claim that we can recursively construct (p_n)_n≥𝗋(p) such that p_n∈ℙ_n and p is a band of p_n^≤, for all n≥𝗋(p). First set p_𝗋(p)=p. Now assume p_n has been already been constructed. If p_n is already minimal in ℙ then it must lie in all levels beyond n too and we may simply set p_n+1=p_n. Otherwise, we can take p_n+1⋖ p_n such that p_n+1^<=p_n^≤, as ℙ is predetermined, noting that this implies 𝗋(p_n+1)=𝗋(p_n)+1 (otherwise we would have q⋗ p_n+1 with 𝗋(q)=𝗋(p_n+1)-1>𝗋(p_n) so q≱ p_n, a contradiction). As p is a band for p_n^≤=p_n+1^<, it is also a band for p_n+1^≤=p_n+1^<∪{p_n+1}. This completes the recursion.
Now say that p≰ q. First consider the case where q≰ p as well. Let F=ℙ_𝗋(p)∖{p} so certainly F∪{p} is a cap. However, F∪{q} is not refined by ℙ_n, for any n>𝗋(p), because ℙ_n contains the p_n constructed above, which can not be below any element of F∪{q}, as none of these are comparable with p. Thus F∪{q} is not a cap, by <ref>. On the other hand, if q<p then, as long as ℙ is branching, we can take r<p which is incomparable with q. The argument just given then yields F such that F∪{r} and and hence F∪{p} is a cap while F∪{q} is not. This shows that ℙ is cap-determined.
For the second implication, assume ℙ is cap-determined. So if p<q then we have F⊆ℙ such that F∪{q} is a cap but F∪{p} is not. Take any n>𝗋(p) such that ℙ_n refines F∪{q}. As F∪{p} is not a cap, we have r∈ℙ_n∖(F∪{p})^≥. In particular, r≰ p but also p≰ r, as 𝗋(p)<n≤𝗋(r). Moreover, r≰ f, for all f∈ F, and hence r≤ q, as ℙ_n refines F∪{q}. This shows that ℙ is branching.
Even when ℙ is not cap-determined, B≾ C is meant to signify that B is covered by C in a certain sense, which we will make more precise in (<ref>) below. For the moment, let us just note a few further properties of ≾. Firstly, as one would expect, the caps of ℙ are precisely the maximal elements with respect to ≾, i.e.
C∈𝖢ℙ ⇔ ℙ≾ C.
Indeed, if C is a cap then B≾ C, for any B⊆ℙ, as every superset of a cap is a cap (in particular, we can take B=ℙ). On the other hand, if C=C∪∅ is a cap and C≾ A then A=A∪∅ is also a cap (in particular, we can take C=ℙ).
We also immediately see that the empty set ∅ is minimal with respect to ≾, although in general there can be elements of ℙ that are minimal too. However,
ℙ is cap-determined ⇒ ∀ p∈ℙ (p≾̸∅).
Indeed, if p≾∅ then p≾ q, for all q∈ℙ, so if ℙ is cap-determined then p is a minimum of ℙ, i.e. ℙ=p^≤. But then {p} itself is already a band and hence a cap, even though the empty set ∅ is never a cap, contradicting p≾∅.
Lastly, we show that ≾ is determined by its restriction to singletons on the left.
For any poset ℙ and B,C⊆ℙ,
B≾ C ⇔ ∀ b∈ B (b≾ C).
First let us note that ≾ respects pairwise unions, i.e. for all A,B,C⊆ℙ,
A,B≾ C ⇒ A∪ B≾ C.
To see this, take any F⊆ℙ such that A∪ B∪ F∈𝖢ℙ. If A≾ C then this implies that B∪ C∪ F∈𝖢ℙ. If B≾ C too then this further implies that C∪ F=C∪ C∪ F∈𝖢ℙ. This shows that A∪ B≾ C.
Now if B≾ C then certainly b≾ C, for all b∈ B. Conversely, if B≾̸C then we have some D⊆ℙ such that B∪ D∈𝖢ℙ but C∪ D∉𝖢ℙ. We then have some finite F⊆ B such that F∪ D is still a cap and hence F≾̸C. If we had f≾ C, for all f∈ F, then (<ref>) would imply F≾ C, a contradiction. Thus f≾̸C, for some f∈ F⊆ B, as required.
We summarise the implications between the properties of ω-posets considered above in <ref>.
The notion of a prime poset is defined in <ref> in the next section.
§ THE SPECTRUM
In this section, we construct a 𝖳_1 compactum from any poset and relate its topological properties to the order properties of the original poset.
Throughout this section fix some poset (ℙ,≤).
§.§ Selectors
It will be convenient to let ⋒ denote the overlap relation, i.e.
A⋒ B ⇔ A∩ B≠∅.
We call S⊆ℙ a selector if it overlaps all caps,[A set which overlaps every set in a given family is sometimes called a transversal of the family. In this terminology, a selector is just a transversal of the family of all caps.] i.e.
C∈𝖢ℙ ⇒ S⋒ C.   (Selector)
Equivalently, S⊆ℙ is a selector precisely when its complement ℙ∖ S is not a cap (as being a cap and containing a cap are the same thing).
We will be particularly interested in minimal selectors.
Every selector contains a minimal selector.
Note that every cap C contains a finite subcap – bands are finite by definition so if B is a band refining C then we can simply choose a finite subset of C that is still refined by B. For S to be a selector, it thus suffices for S to select elements from just the finite caps. The intersection of a chain of selectors is therefore again a selector so Kuratowski–Zorn implies every selector contains a minimal selector.
The first thing to note about minimal selectors is the following.
Every minimal selector is an up-set.
Take a minimal selector S⊆ℙ. Minimality means that, for every s∈ S, we have some C∈𝖢ℙ such that S∩ C={s} (otherwise S∖{s} would be a strictly smaller selector). For any p≥ s, note that (C∖{s})∪{p} is refined by C and is thus also a cap. As S must also overlap this new cap, the only possibility is that S also contains p. This shows that S^≤=S, i.e. S is an up-set.
Moreover, to verify that an up-set is a selector, it suffices to consider a subfamily of caps ℬ⊆𝖢ℙ that is coinitial with respect to refinement, e.g. the bands 𝖡ℙ or even just the levels (ℙ_n) if ℙ is an ω-poset, thanks to <ref>.
Take an up-set U⊆ℙ. For any coinitial ℬ⊆𝖢ℙ,
U is a selector ⇔ U overlaps every B∈ℬ.
If ℙ is an ω-poset, U is a selector precisely when U is infinite or contains an atom.
As ℬ⊆𝖢ℙ, ⇒ is immediate. Conversely, say U∩ B, for all B∈ℬ. For any C∈𝖢ℙ, coinitiality yields B∈ℬ refining C. This means any b∈ B∩ U has an upper bound c∈ C, which is thus also in U, as U is an up-set. Thus U is a selector.
Next note that if a∈ℙ is an atom then a^≤ is a selector. Indeed, for any band B∈𝖡ℙ, the minimality of a implies a∈ B^≥ and hence B⋒ a^≤. As a^≤ is up-set and bands are coinitial in 𝖢ℙ, we are done.
It follows that if U contains an atom then U is a selector. Now assume ℙ is an ω-poset. If U is infinite then U contains elements of arbitrary rank. In particular, U overlaps all levels of ℙ, which are coinitial by <ref>, showing that U is again a selector. Conversely, if U is finite and contains no atoms then we have a level of ℙ which is disjoint from U, showing U is not a selector.
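On a finite poset, selectors and minimal selectors can be computed by brute force straight from the definitions; the following Python sketch (exponential, so only for very small examples, and all names are ours) does this for a three-element poset with one top element above two incomparable atoms.

```python
from itertools import chain, combinations

def subsets(P):
    P = list(P)
    return chain.from_iterable(combinations(P, r) for r in range(len(P) + 1))

def is_cap(P, leq, C):
    return any(all(any(leq(p, b) or leq(b, p) for b in B) for p in P)    # B is a band
               and all(any(leq(b, c) for c in C) for b in B)             # refining C
               for B in subsets(P))

def minimal_selectors(P, leq):
    sels = [frozenset(S) for S in subsets(P)
            if not is_cap(P, leq, [p for p in P if p not in S])]
    return [S for S in sels if not any(T < S for T in sels)]

# 'x' is the top element, 'l' and 'r' are two incomparable atoms below it.
P = ['x', 'l', 'r']
leq = lambda p, q: p == q or q == 'x'
print(minimal_selectors(P, leq))   # the two minimal selectors: {x, l} and {x, r}
```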
§.§ Spectra
Let us define the power space of ℙ as the power set 𝖯ℙ with the topology generated by the subbasis (p_𝖯^∈)_p∈ℙ, where
p_𝖯^∈={S∈𝖯ℙ:p∈ S}.
Equivalently, this is the topology we get from identifying every S⊆ℙ with its characteristic function χ_S∈2^ℙ, where 2={0,1} is the Sierpiński space (where {1} is open but {0} is not) and 2^ℙ is given the usual product topology.
We will be particularly interested in the subspace of minimal selectors.
The spectrum is the subspace of 𝖯ℙ consisting of minimal selectors
𝖲ℙ={S⊆ℙ:S is a minimal selector}.
So 𝖲ℙ has a subbasis consisting of the sets p_𝖲^∈=p_𝖯^∈∩𝖲ℙ, for p∈ℙ. From now on we will usually drop the subscript and just write p_𝖲^∈ as p^∈.
By <ref>, minimal selectors are always up-sets so, for all p,q∈ℙ,
p≤ q ⇒ p^∈⊆ q^∈.
We can thus view the sets (p^∈)_p∈ℙ as a more concrete representation of the poset ℙ as a subbasis of a topological space. However, this representation may not always be faithful, at least with respect to the original ordering, i.e. it is possible to have p^∈⊆ q^∈ even when p≰ q. It is even possible for p^∈ to be empty, for some p∈ℙ.
For example, consider the graded ω-poset ℙ=ω×{0,1} where
(n,δ)≤(n',δ') ⇔ n'≤ n and δ'≤δ.
The levels of ℙ are then given by ℙ_0={(0,0)} and ℙ_n={(n,0),(n-1,1)}, for all n>0. The only minimal selector is then ω×{0} so (n,1)^∈=∅, for all n∈ω.
The representation p↦ p^∈ will, however, be faithful with respect to ≾, as defined in (<ref>). In particular, it will be faithful with respect to the original order precisely when ℙ is cap-determined.
For any A,B⊆ℙ,
A≾ B ⇔ ⋃_a∈ Aa^∈⊆⋃_b∈ Bb^∈.
By (<ref>), it suffices to consider a singleton A={a}.
Now take a minimal selector S∈ a^∈. Minimality means we have a cap C∈𝖢ℙ such that C∩ S={a}. If a≾ B then it follows that B∪(C∖{a}) is a cap and hence B∩ S=(B∪(C∖{a}))∩ S≠∅, as S is a selector, i.e. S∈⋃_b∈ Bb^∈. This shows that a^∈⊆⋃_b∈ Bb^∈.
Conversely, if a≾̸B then we have F⊆ℙ such that {a}∪ F is a cap but B∪ F is not. This means ℙ∖(B∪ F) is a selector and hence contains a minimal selector S, by <ref>. As {a}∪ F is a cap and F is disjoint from S, it follows that a∈ S so S∈ a^∈∖⋃_b∈ Bb^∈, as B is disjoint from S, i.e. S witnesses a^∈⊈⋃_b∈ Bb^∈.
For any C⊆ℙ, we denote the corresponding family of open sets in 𝖲ℙ by
C_𝖲={c^∈:c∈ C}.
The map p↦ p^∈ is an order isomorphism from ℙ onto the canonical subbasis ℙ_𝖲 of the spectrum precisely when ℙ is cap-determined.
For any p,q∈ℙ we have
p≤ q ⇒ p≾ q ⇔ p^∈⊆ q^∈
by (<ref>) and former observations.
The remaining implication p ≤ q ⇐ p ≾ q is equivalent by definition to being cap-determined.
<ref> yields the first fundamental properties of the spectrum.
The spectrum is a compact 𝖳_1 space. Moreover, C⊆ℙ is a cap precisely when the corresponding subbasic sets C_𝖲 cover the whole spectrum.
Given any distinct S,T∈𝖲ℙ, minimality implies that we have s∈ S∖ T and t∈ T∖ S. This means S∈ s^∈∌T and T∈ t^∈∌S, showing that 𝖲ℙ is 𝖳_1.
By (<ref>), C⊆ℙ is a cap precisely when ℙ≾ C, which is equivalent to saying C_𝖲 covers the entire spectrum, by (<ref>). As every cap contains a finite subcap, 𝖲ℙ is compact, by the Alexander–Wallman subbasis lemma (see <cit.> or <cit.>).
The spectrum can also recover a space from the order structure of a cap-basis.
If ℙ is a cap-basis of a 𝖳_1 space X then
x↦ x^∈={p∈ℙ:x∈ p}
is homeomorphism from X onto 𝖲ℙ.
Take any x∈ X. By assumption, any cap C∈𝖢ℙ is a cover of X and hence we have some c∈ C containing x, i.e. c∈ x^∈∩ C. This shows that x^∈ is a selector. Now take any p∈ x^∈. For any y∈ X∖ p, we have some q∈ y^∈∖ x^∈, as X is 𝖳_1. This means C={p}∪(ℙ∖ x^∈) is a cover of X and hence a cap of ℙ with C∩ x^∈={p}. Thus x^∈ is a minimal selector.
On the other hand, for any selector S∈𝖲ℙ, we know that ℙ∖ S can not cover X (otherwise it would be cap with S∩(ℙ∖ S)=∅, a contradiction). So we can pick x∈ X not covered by ℙ∖ S, which means x^∈⊆ S. If S is a minimal selector then this implies x^∈= S. This shows that 𝖲ℙ={x^∈:x∈ X}. Also x≠ y implies x^∈≠ y^∈, as X is 𝖳_1, so x↦ x^∈ is a bijection from X onto 𝖲ℙ.
Finally, note that x↦ x^∈ maps each p∈ℙ onto p^∈, as
x∈ p ⇔ p∈ x^∈ ⇔ x^∈∈ p^∈.
As ℙ is a (sub)basis of X and (p^∈)_p∈ℙ is a subbasis of the spectrum 𝖲ℙ, this shows that the map x↦ x^∈ is a homeomorphism from X onto 𝖲ℙ.
Spectra thus yield a large class of spaces.
Every second countable compact 𝖳_1 space arises as the spectrum of some predetermined branching graded ω-poset.
By <ref>, any second countable compact 𝖳_1 space X has a graded ω-band-basis ℙ, which is predetermined, by <ref>, and branching, by <ref>.
Moreover, its spectrum 𝖲ℙ is homeomorphic to X, by <ref>.
Any graded ω-poset is determined by the order relation between consecutive levels. By <ref>, we should therefore be able to construct any second countable compact 𝖳_1 space by recursively defining relations between finite sets ℙ_0,ℙ_1,… and then looking at the spectrum of the resulting poset ℙ=⋃_n∈ωℙ_n. The exact nature of the construction will of course depend on the space we wish to construct, as we will soon see in the examples of the next subsection. In future work, we will examine more examples constructed within the framework of Fraïssé theory as it applies to certain subcategories of relations between graphs.
In <ref>, we saw how graded posets arise from consolidations. Conversely, levels of graded posets correspond to consolidations in the spectrum.
Recall that B◂ C means that C is a consolidation of B. Also note that ℙ_n𝖲 below refers to the _𝖲 operation applied to the n^th level of ℙ, i.e. ℙ_n𝖲={p^∈:p∈ℙ_n}.
If ℙ is a graded ω-poset then ℙ_n𝖲◂ℙ_m𝖲 whenever m≤ n.
If ℙ is an ω-poset and m≤ n then certainly ℙ_n≤ℙ_m and hence ℙ_n𝖲≤ℙ_m𝖲. Now take any p∈ℙ_m. For any S∈ p^∈, minimality yields C∈𝖢ℙ with C∩ S={p}. Then we have k≥ n with ℙ_k≤ C and hence ℙ_k∩ S⊆ p^≥. As ℙ is graded, for any q∈ℙ_k∩ S, we have r∈ℙ_n∩(q,p)⊆ q^≤⊆ S and hence S∈ r^∈⊆ p^∈. Thus p^∈=⋃{r^∈:r∈ℙ_n∩ p^≥}, showing that ℙ_m𝖲 consolidates ℙ_n𝖲.
Before moving on, however, let us make a couple more observations about spectra arising from general ω-posets. The first thing to note is that every element S of the spectrum of an ω-poset is not just an up-set but even a filter, i.e.
p,q∈ S ⇔ ∃ r∈ S (r≤ p,q)   (Filter)
(note ⇒ means S is down-directed while ⇐ just means S is an up-set).
If ℙ is an ω-poset then every S∈𝖲ℙ is a filter.
Assume ℙ is an ω-poset and take a minimal selector S∈𝖲ℙ. For any q,r∈ S, we have caps C,D∈𝖢ℙ such that C∩ S={q} and D∩ S={r}. By <ref>, C and D are refined by levels of ℙ. As the levels are linearly ordered by refinement, we can find a single level L∈𝖢ℙ which refines both C and D. As S is a selector, we can take s∈ S∩ L. As L refines C and D, we have c∈ C and d∈ D such that s≤ c,d and hence c,d∈ s^≤⊆ S. But q and r are the only elements of S in C and D respectively so q=c≥ s and r=d≥ s, which shows that S is down-directed. By <ref>, S is also an up-set.
If ℙ is an ω-poset then ℙ_𝖲 is a basis for 𝖲ℙ.
Whenever S∈ p^∈∩ q^∈, we have r∈ S with r≤ p,q, by <ref>. But this means S∈ r^∈⊆ p^∈∩ q^∈, showing that ℙ_𝖲 is a basis.
This yields a kind of converse to <ref>.
Any cap-determined ω-poset ℙ arises as a cap-basis of a 𝖳_1 space.
Immediate from <ref>, <ref> and <ref>.
§.§ Examples
Our spectrum generalises the well-known construction of a metrisable Stone space from the branches of an ω-tree. Of course, the advantage of our spectrum, as applied to more general graded ω-posets, is that we can also construct connected spaces, the simplest example being the arc.
Let X be the arc, which we can take to be the unit interval [0,1] in its usual topology. Define open covers (C_n) of X by
C_n = {int([(k - 1)/2^{n+1}, (k + 1)/2^{n+1}]): 1 ≤ k ≤ 2^{n+1} - 1}.
So each C_n consists of 2^{n+1}-1 evenly spaced intervals, each of length 2^{-n}. Then ℙ=⋃_n∈ωC_n is a predetermined graded ω-poset, by <ref>, which can also be seen directly from its Hasse diagram, as drawn below.
[Hasse diagram: C_0={[0,1]} on top; below it C_1={[0,1/2),(1/4,3/4),(1/2,1]}; below these C_2={[0,1/4),(1/8,3/8),(1/4,1/2),(3/8,5/8),(1/2,3/4),(5/8,7/8),(3/4,1]}, with [0,1/4),(1/8,3/8),(1/4,1/2) below [0,1/2), with (1/4,1/2),(3/8,5/8),(1/2,3/4) below (1/4,3/4), and with (1/2,3/4),(5/8,7/8),(3/4,1] below (1/2,1]; and so on.]
Note ℙ is a cap-basis, by <ref> (or <ref> <ref>). By <ref>, the spectrum of ℙ then recovers the original space X, i.e. the arc. A more combinatorial construction of the arc could thus proceed as follows – first define relations between finite linearly ordered sets ℙ_n such that each element of ℙ_n is related to 3 consecutive elements of ℙ_n+1 and consecutive pairs in ℙ_n are related to exactly 1 common element in ℙ_n+1. Then let ℙ=⋃_n∈ωℙ_n with the order defined from the relations between consecutive ℙ_n's. Finally, define the arc as the spectrum of ℙ.
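The combinatorial pattern just described is easy to verify mechanically for the first few levels; the following Python sketch (the encoding of each interval by its index pair (n,k) is ours) checks that each member of a level contains exactly three members of the next level and that consecutive members share exactly one.

```python
from fractions import Fraction as F

def C(n):
    """Level n of the arc basis: (n, k) stands for int([(k-1)/2^(n+1), (k+1)/2^(n+1)])."""
    return [(n, k) for k in range(1, 2 ** (n + 1))]

def ends(p):
    n, k = p
    return (F(k - 1, 2 ** (n + 1)), F(k + 1, 2 ** (n + 1)))

def contained(p, q):
    """p <= q in the poset, i.e. the corresponding subsets of [0,1] are nested."""
    (a, b), (c, d) = ends(p), ends(q)
    return c <= a and b <= d

for n in range(4):
    children = {q: [p for p in C(n + 1) if contained(p, q)] for q in C(n)}
    assert all(len(children[q]) == 3 for q in C(n))          # 3 consecutive children each
    assert all(len(set(children[(n, k)]) & set(children[(n, k + 1)])) == 1
               for k in range(1, 2 ** (n + 1) - 1))          # consecutive pairs share 1 child
print("checked levels 0-4")
```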
The following example shows that a basis forming a cap-determined poset is not necessarily a cap-basis, i.e. the converse of <ref> is not true (although the poset will yield a cap-basis of a different space, namely its spectrum, which will be a quotient of the original space – see <ref> below).
Let Y be the unit circle in the complex plane with the usual topology, and let θ:ℝ→ Y be the covering map x ↦ e^2π i x, so the restriction θ↾[0,1]: [0, 1] → Y is the quotient map identifying the end-points.
We define open covers (D_n) of Y by
D_n = {θ((k-1)/2^{n+1},(k + 1)/2^{n+1}): 0 ≤ k < 2^{n+1}}
So each D_n for n ≥ 1 consists of 2^{n+1} evenly spaced arcs of length 2π / 2^n.
In particular, D_1 consists of the images of the intervals (-1/4, 1/4), (0, 1/2), (1/4, 3/4), and (1/2, 1).
We put D_0 = {Y} and ℚ = ⋃_n ∈ω D_n.
As in the previous example, ℚ is a predetermined graded ω-poset and a cap-basis of Y, so the spectrum of ℚ recovers the space Y, i.e. the circle.
Let C'_n = C_n ∪{[0, 1/2^{n+1}) ∪ (1 - 1/2^{n+1}, 1]} where C_n is the cover of the arc X = [0, 1] from the previous example for n ≥ 1 and C'_0 = C_0 = {X}, and let ℙ' = ⋃_n ∈ω C'_n.
Observe that p ↦ int(θ[p]) is an isomorphism of posets ℙ' →ℚ.
It follows that ℙ' is a cap-determined poset and an open basis of X (as it contains the original cap-basis ℙ), but is not a cap-basis of X (as its spectrum is the circle and not the arc).
Our primary interest is in Hausdorff spaces, but our spectrum can indeed produce more general 𝖳_1 spaces. Some of these are not even sober (⇔ each irreducible closed set has a unique dense point), like the cofinite topology on a countably infinite set.
Let X = ω with the cofinite topology (i.e. non-empty open sets are exactly the cofinite ones), and let ℙ = {p_n, i: i ≤ n, n ∈ω} where p_n, i = (ω∖ (n + 1)) ∪{i}, so p_0, 0 = ω, p_1, 0 = ω∖{1}, p_1, 1 = ω∖{0}, p_2, 0 = ω∖{1, 2}, p_2, 1 = ω∖{0, 2}, p_2, 2 = ω∖{0, 1}, and so on.
Clearly, {p_n, i: n > i} is an open basis at i ∈ X, and so ℙ is an open basis of X.
Every p_n, i, i ≤ n ∈ω, has exactly two immediate predecessors: p_n + 1, i and p_n + 1, n + 1, and so every p_n, i with i < n ≠ 0 has a unique immediate successor p_n - 1, i, while p_n, n with n ≠ 0 has all elements p_n - 1, i for i ≤ n - 1 as immediate successors.
It follows that ℙ is a predetermined branching atomless graded ω-poset, as shown in <ref>, with disjoint levels ℙ_n = {p_n, i: i ≤ n}.
The levels ℙ_n are minimal covers of X since p_n, i is the unique member of ℙ_n containing i.
Also, every ℙ_n is a consolidation of ℙ_{n + 1} since p_n, i = p_{n + 1, i}∪ p_{n + 1, n + 1}.
Altogether, ℙ is a cap-basis of X by <ref>.
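For instance, at the first level, ℙ_1 = {ω∖{1}, ω∖{0}} is a minimal cover of X, as each i ≤ 1 lies in exactly one of its members, and it is a consolidation of ℙ_2 since ω∖{1} = p_2, 0∪ p_2, 2 and ω∖{0} = p_2, 1∪ p_2, 2.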
However, this example has no uncountable analogue.
An uncountable set X with the cofinite topology has no cap-basis.
Suppose ℙ is a subbasis of X consisting of nonempty, and so cofinite, sets.
By the Δ-system lemma, there are pairwise disjoint finite sets R and F_α, α∈ω_1, such that 𝒟 = {X ∖ (R ∪ F_α): α∈ω_1}⊆ℙ.
Let B ⊆ℙ be any band.
Since B is finite, there is b ∈ B comparable to uncountably many elements of 𝒟.
Since b is cofinite, it cannot be below uncountably many elements of 𝒟.
Hence, b is above uncountably many elements of 𝒟, and so b ⊇ X ∖ R.
It follows that the finite upwards closed family F = {b ∈ℙ: b ⊇ X ∖ R} is a selector.
If ℙ were a cap-basis, a minimal selector contained in F would correspond to a point of X with a finite local (sub)basis, by <ref>, which is impossible.
The following gives an example of a spectrum that is not a first-countable space.
Let κ be an infinite cardinal, let X = κ∪{∞} be the one-point compactification of κ with the discrete topology, and let ℙ = {F, κ∖ F: F ⊆κ finite}∖{∅} be the finite-cofinite algebra on κ minus the bottom element.
For every α∈κ, let S_α = {α}^⊆ be the principal filter generated by {α}, and let S_∞ be the family of all cofinite elements of ℙ.
We show that the map f: X →𝖲ℙ defined by x ↦ S_x is a homeomorphism.
Since S_α is an up-set and {α} is an atom in ℙ, S_α is a selector as every band contains an element above {α}.
Moreover, S_α is a minimal selector since every subselector S ⊆ S_α has to overlap the band {{α}, κ∖{α}} and so contains {α}. By <ref> there is a minimal selector S' ⊆ S, and it is equal to S_α as it is an up-set (<ref>) containing {α}.
S_∞ is a selector since every band is finite and so has to contain a cofinite element.
For every finite F ⊆κ, the family {{α}: α∈ F}∪{κ∖ F} is a band, and so every selector either contains an atom {α}, or contains all cofinite elements.
Hence, S_∞ is a minimal selector, and 𝖲ℙ = {S_α: α∈κ}∪{S_∞}.
We have already shown that f is a bijection.
Now for every finite F ⊆κ we have F^∈ = {S_α: α∈ F} and (κ∖ F)^∈ = {S_α: α∉ F}∪{S_∞}, and so the elements of ℙ correspond to basic open sets of X.
§.§ Subcompacta
Next we examine closed subsets of the spectrum.
Any Q⊆ℙ determines a closed subset of 𝖲ℙ given by
Q^⊇={S∈𝖲ℙ:S⊆ Q}.
If S∈cl(Q^⊇) then every subbasic neighbourhood p^∈ containing S must also contain some T∈ Q^⊇. In other words, for every p∈ S, we have T∈𝖲ℙ with p∈ T⊆ Q. Thus S⊆ Q, showing that S∈ Q^⊇ and hence cl(Q^⊇)=Q^⊇.
In fact, every closed subset of the spectrum arises in this way.
If ℙ is an ω-poset then the closure of any X⊆𝖲ℙ is given by
cl(X)=(⋃ X)^⊇.
By the previous result, (⋃ X)^⊇ is a closed subset of 𝖲ℙ which certainly contains X. Conversely, if S∈(⋃ X)^⊇ then, for any p∈ S, we have some T∈ X with p∈ T, i.e. every subbasic neighbourhood of S contains an element of X. However, if ℙ is an ω-poset then (p^∈)_p∈ℙ is actually a basis for 𝖲ℙ, by <ref>. Thus this shows that S∈cl(X), which in turn shows that cl(X)=(⋃ X)^⊇.
Let us call Q⊆ℙ prime if q≾̸ℙ∖ Q, for all q∈ Q, where ≾ is the relation from (<ref>).
Put another way, this means that Q must overlap every subset which is cap-above any element of Q, i.e. for all C⊆ℙ,
PrimeQ∋ q≾ C ⇒ Q⋒ C.
Indeed, if q≾̸ℙ∖ Q and q≾ C then C⊈ℙ∖ Q, i.e. Q⋒ C, showing that (<ref>) holds. Conversely, if Q∋ q≾ℙ∖ Q then ℙ∖ Q itself witnesses the failure of (<ref>).
It follows that any non-empty prime Q⊆ℙ is automatically a selector – if q∈ Q and C∈𝖢ℙ then certainly q≾ C and hence Q⋒ C. Actually, more is true.
Prime subsets are precisely the unions of minimal selectors.
Take a minimal selector S⊆ℙ. For every s∈ S, minimality yields F⊆ℙ∖ S such that F∪{s} is a cap. But ℙ∖ S(=(ℙ∖ S)∪ F) is not a cap, simply because S is a selector, so F witnesses s≾̸ℙ∖ S. This shows that every minimal selector is prime and hence the same is true of any union of minimal selectors.
Conversely, take any prime Q⊆ℙ. For every q∈ Q, this means q≾̸ℙ∖ Q so we have S∈ q^∈∖⋃_p∈ℙ∖ Qp^∈, by (<ref>), and hence q∈ S⊆ Q. This shows that Q is a union of minimal selectors.
In fact, as long as ℙ is an ω-poset, the minimal selectors forming a prime subset Q determine the spectrum of Q when considered as an ω-poset in its own right.
If ℙ is an ω-poset and Q⊆ℙ is prime then 𝖲Q=Q^⊇.
First we claim that the atoms of any prime Q⊆ℙ must already be atoms in ℙ. Indeed, any q∈ Q is contained in some S∈ Q^⊇. If q is not an atom in ℙ then we have some level ℙ_n disjoint from q^≤. Making n larger if necessary, we may further assume that S∩ℙ_n⊆ q^≥, as S is a minimal selector, and hence ∅≠ S∩ℙ_n⊆ q^>, showing that q is not an atom in S⊆ Q. This proves the claim and hence 𝖢Q={C∩ Q:C∈𝖢ℙ}, by <ref>. But if S⊆ Q and C∈𝖢ℙ then S∩ C=S∩ C∩ Q≠∅, so it follows that S is a selector in ℙ if and only if S is a selector in Q. The same then applies to minimal selectors, i.e. 𝖲Q=Q^⊇.
In this way, prime subsets of ℙ correspond exactly to closed subsets of 𝖲ℙ.
If ℙ is an ω-poset then we have mutually inverse bijections between prime Q⊆ℙ and closed subsets X of the spectrum 𝖲ℙ given by
Q↦𝖲Q and X↦⋃ X.
By <ref>, <ref> and <ref>, X↦⋃ X and Q↦ Q^⊇=𝖲Q take prime selectors to closed subsets and vice versa. By <ref>, X=(⋃ X)^⊇ whenever X is closed. By <ref> again, Q=⋃(Q^⊇) whenever Q is prime. Thus these maps are inverse to each other.
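For a concrete instance, take the arc cap-basis ℙ from the earlier example and distinct x,y∈[0,1]. Since ℙ is a cap-basis, its minimal selectors are exactly the point filters z^∈, z∈[0,1] (this is how its spectrum recovers the arc), so Q=x^∈∪ y^∈ is prime, being a union of two minimal selectors, and under the above bijection it corresponds to the two-point closed subset {x^∈,y^∈} of 𝖲ℙ: if z∉{x,y} then some basic interval contains z but neither x nor y, so z^∈⊈ Q.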
The frame of open subsets of 𝖲ℙ can thus be obtained directly from ℙ. Specifically, complements of prime subsets ordered by inclusion form a frame 𝔽 which is order isomorphic to the open subsets of 𝖲ℙ, by <ref>. Thus 𝔽 can be viewed as a kind of completion of ℙ, once we identify each p∈ℙ with p^≿∈𝔽.
Let us call a poset ℙ prime if it is prime in itself, i.e. if p^∈≠∅ or, equivalently, p≾̸∅, for all p∈ℙ.
While there do exist non-prime ω-posets (e.g. ℙ=ω×{0,1} mentioned just before <ref>), every ω-poset ℙ contains a prime ω-subposet ⋃𝖲ℙ with exactly the same spectrum, by <ref>. Also, cap-determined ω-posets are necessarily prime, by (<ref>), as are level-injective ω-posets.
Every level-injective ω-poset is prime.
If ℙ is level-injective then every level of ℙ is a minimal cap. For any p∈ℙ, this means ℙ_𝗋(p)∖{p} is not a cap and hence p≾̸∅, showing that ℙ is prime.
If ℙ is a prime ω-poset then ℙ_𝖲 is an ω-cap-basis for 𝖲ℙ.
We already showed that ℙ_𝖲 is a basis in <ref>.
Now take a cover of 𝖲ℙ from ℙ_𝖲, i.e. of the form C_𝖲, for some C⊆ℙ. Then C is a cap of ℙ, by <ref>, and is thus refined by some band B⊆ℙ. This implies B_𝖲 is a band of ℙ_𝖲 which refines C_𝖲, showing that C_𝖲 is a cap of ℙ_𝖲. Conversely, caps of ℙ_𝖲 are covers, by <ref>, seeing as p^∈≠∅, for all p∈ℙ, as ℙ is prime. This shows that ℙ_𝖲 is a cap-basis.
Next say that we have p∈ℙ and infinite Q⊆ℙ such that p^∈⫋ q^∈, for all q∈ Q. Then p∉ Q^≤⊆ S∋ p, for any S∈ p^∈, even though Q^≤ is an infinite up-set and hence a selector, contradicting the minimality of S. Thus ℙ_𝖲 is Noetherian and every element of ℙ_𝖲 has finite rank.
Now say ℙ_𝖲 had an infinite level ℙ_𝖲n, which must cover 𝖲ℙ, by <ref>. Take minimal L⊆ℙ with L_𝖲=ℙ_𝖲n, which must be an antichain in ℙ, as L_𝖲 is an antichain in ℙ_𝖲. By <ref>, L can not be a cap, i.e. ℙ∖ L is a selector and hence contains a minimal selector S∉⋃ L_𝖲, contradicting the fact L_𝖲 covers 𝖲ℙ. Thus ℙ_𝖲 has finite levels and is thus an ω-poset.
When ℙ is prime and X=p^∈ in (<ref>), the union ⋃ X can be described in simple terms using the common lower bound relation ∧=≥∘≤, i.e. for any p,q∈ℙ,
p∧ q ⇔ p^≥⋒ q^≥.
If ℙ is a prime ω-poset then, for all p∈ℙ,
⋃ p^∈=p^∧.
If ℙ is an ω-poset then every S∈ p^∈ is a filter, by <ref>, and hence S⊆ p^∧, showing that ⋃ p^∈⊆ p^∧. Conversely, if ℙ is prime and p∧ q then, taking any r∈ p^≥∩ q^≥, by assumption we have some S∈ r^∈⊆ p^∈∩ q^∈ and hence q∈⋃ p^∈, showing that p^∧⊆⋃ p^∈.
Incidentally, this result also holds for any cap-basis ℙ of a space X (which again applies to all cap-determined ω-posets, by <ref>). Indeed, in this case
p∧ q ⇔ p⋒ q,
for if p,q⊇ r∈ℙ then p⋒ q, as r≠∅ (see the comments after <ref>). Conversely, if x∈ p∩ q then, as ℙ is a basis, we have r∈ℙ with x∈ r⊆ p∩ q.
<ref> then yields ⋃ p^∈=⋃_x∈ px^∈=p^⋒=p^∧.
Here is another simple observation about ∧ that will soon be useful.
For any p,q∈ℙ and C∈𝖢ℙ,
p∧ q ⇒ ∃ c∈ C (p∧ c∧ q).
If C∈𝖢ℙ then we have B∈𝖡ℙ with B≤ C. If p∧ q then we have r≤ p,q. As B is a band, we then have b∈ B∩ r^≶. If b≤ r then b≤ p,q,b, while if r≤ b then r≤ p,q,b. So in either case p∧ b∧ q and hence p∧ c∧ q, for any c∈ C∩ b^≤.
§.§ Stars
For Hausdorff spectra, stars play a particularly important role. Specifically, as in <cit.>, we denote the star of p∈ℙ in C∈𝖢ℙ by
Cp=C∩ p^∧.
The first thing to observe is the following.
Stars are never empty.
For any p∈ℙ and C∈𝖢ℙ, certainly p∧ p so Cp≠∅, by <ref>.
For any C∈𝖢ℙ, let us define a relation ⊲_C on ℙ by
p⊲_Cq ⇔ Cp≤ q.
Note ⊲_C is also compatible with the ordering, i.e. for all p,p',q,q'∈ℙ,
Compatibilityp≤ p'⊲_Cq'≤ q ⇒ p⊲_Cq.
Indeed, if p≤ p'⊲_Cq'≤ q then Cp⊆ Cp'≤ q'≤ q, i.e. p⊲_Cq. Also
Transitivityp⊲_Cq⊲_Cr ⇒ p⊲_Cr,
as the left side means Cp≤ q and hence Cp⊆ Cq≤ r, thus giving the right side. Also note that refining the cap results in a weaker relation, i.e. for all B,C∈𝖢ℙ,
B≤ C ⇒ ⊲_C ⊆ ⊲_B.
Indeed, if B≤ C and p⊲_Cq then pB≤ pC≤ q, i.e. p⊲_Bq.
The star-below relation is the minimal relation ⊲ on ℙ containing all of these
Star-Below⊲ = ⋃_C∈𝖢ℙ⊲_C,
i.e. p⊲ q means p⊲_Cq, for some C∈𝖢ℙ and hence some B∈𝖡ℙ, by (<ref>). Whenever p⊲_Cq, for some C∈𝖢ℙ, note that we can always replace C with D=C∖ q^≥∪{q}∈𝖢ℙ. In other words, ⊲ could also be defined more explicitly by
p⊲ q ⇔ ∃ C∈𝖢ℙ (Cp={q}).
We again immediately see that ⊲ is compatible with the ordering. As long as ℙ is an ω-poset then it is also transitive – in this case any B,C∈𝖢ℙ has a common refinement D∈𝖢ℙ and so p⊲_Bq⊲_Cr implies p⊲_Dq⊲_Dr, by (<ref>), and hence p⊲_Dr, by (<ref>). We also see that ⊲ ⊆∧ and even
∧∘⊲ ⊆ ∧.
Indeed, if p∧ q⊲ r then we have s≤ p with s≤ q⊲ r and hence s⊲_Cr, for some C∈𝖢ℙ. Then <ref> yields t∈ Cs so p≥ s∧ t≤ r and hence p∧ r.
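To illustrate in the arc basis ℙ from the earlier example (where, ℙ being a cap-basis, p∧ q just means p⋒ q): for p=(1/4,3/4)∈ C_1, the star of p in C_2 is {(1/8,3/8), (1/4,1/2), (3/8,5/8), (1/2,3/4), (5/8,7/8)}, whose union (1/8,7/8) is contained in no member of ℙ other than [0,1]. Thus C_2 witnesses p⊲[0,1], but it does not witness p⊲ q for any other q∈ℙ.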
The significance of ⊲ is that it represents `closed containment' in the spectrum.
If ℙ is an ω-poset then, for all p∈ℙ,
p⊲ q ⇒ cl(p^∈)⊆ q^∈.
The converse also holds when ℙ is also prime.
If p⊲ q then we have C∈𝖢ℙ with Cp≤ q. Take S∈cl(p^∈)=(⋃ p^∈)^⊇, by (<ref>). By <ref>, all minimal selectors are filters so ⋃ p^∈⊆ p^∧ and hence S⊆ p^∧. It follows that ∅≠ S∩ C⊆ Cp≤ q and hence q∈ S^≤=S, i.e. S∈ q^∈. This shows that cl(p^∈)⊆ q^∈.
Now assume ℙ is prime. If p⋪q then Cp≰ q, for every C∈𝖢ℙ, i.e. p^∧∖ q^≥ is a selector. By <ref>, we have a minimal selector S⊆ p^∧∖ q^≥ and hence S∈ p^∧⊇=(⋃ p^∈)^⊇=cl(p^∈), by (<ref>) and (<ref>) (this is where we need ℙ to be prime). Thus S∈cl(p^∈)∖ q^∈ witnesses cl(p^∈)⊈ q^∈.
In particular, p⊲ q implies p^∈⊆ q^∈ and hence p≾ q, by (<ref>), so
ℙ is a cap-determined ω-poset ⇒ ⊲ ⊆ ≤.
However, there are non-cap-determined ω-posets with ⊲ ⊈ ≤. For example, if we take ℙ=-ω then, for any p,q∈ℙ, we see that C={min(p,q)} is a band with Cp=C≤ q and hence p⊲ q, i.e. ⊲=ℙ×ℙ⊈≤.
There is one other situation worth noting, though, when p⊲ q implies p≤ q.
If ℙ is an ω-poset and p∈ℙ is an atom then p^⊲=p^≤.
Take a level L containing p and note pL={p}. Indeed, if l∈ pL then we have q∈ p^≥∩ l^≥ so q=p, as p is an atom, and hence l=p, as distinct elements of L are incomparable. Thus p≤ q implies p⊲_Lq. Conversely, if p⊲ q then we have a band B with p⊲_Bq. Thus we have b∈ B comparable to p and hence p≤ b, as p is an atom. Thus b∈ Bp≤ q and hence p≤ b≤ q.
Sometimes we can, however, replace ⊲ with the stronger relation ⊴ = ⊲∩≤ (where p⊴ q means p⊲ q and p≤ q). For example, let us call R⊆ℙ round if
RoundR⊆ R^⊲,
i.e. R is round if each r∈ R is star-above some q∈ R. Let us also call S⊆ℙ star-prime if it overlaps every star of every element of S, i.e.
Star-Primep∈ S and C∈𝖢ℙ ⇒ S⋒ Cp.
For example, ℙ itself is always star-prime, by <ref>.
If ℙ is an ω-poset and S is star-prime and round then S⊆ S^⊴.
Take any r∈ S. If S is round then we have p,q∈ S and C,D∈𝖢ℙ with p⊲_Cq⊲_Dr. If ℙ is an ω-poset then we have B∈𝖡ℙ refining both C and D. If S is star-prime then we have b∈ Bp∩ S. We then have c∈ C with c≥ b∧ p and hence c∈ Cp≤ q. So b≤ c≤ q⊲ r and hence b⊲ r, by (<ref>). On the other hand, we also have d∈ D with d≥ b≤ q so d∈ Dq≤ r and hence b≤ d≤ r. Thus S∋ b⊴ r, showing that S⊆ S^⊴.
If ℙ is round, we can also improve on (<ref>) as follows.
If ℙ is round then
∧=∧∘⊲.
We already know ∧⊇∧∘⊲, by (<ref>). Conversely, say p∧ q, so we have r∈ p^≥∩ q^≥. If ℙ is round then we have s⊲ r so p,q⊳ s, by (<ref>), and hence p∧ s⊲ q, by (<ref>) again, showing that ∧⊆∧∘⊲.
To say more, we will also need the caps to be `round' in an appropriate sense.
§.§ Regularity
The key condition for Hausdorff spectra is regularity.
We call ℙ regular if every cap is ⊲-refined by another cap, i.e.
Regular𝖢ℙ⊆𝖢ℙ^⊲.
Equivalently, we could use the star-refinement relation ≺ given by
Star-RefinementC≺ D ⇔ C⊲_CD,
i.e. C≺ D means that, for all p∈ C, we have some q∈ D with p⊲_Cq. Note that star-refinement is stronger than ⊴-refinement, i.e. C≺ D implies C⊴ D.
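For example, in the arc basis ℙ from the earlier example, C_2 does not star-refine C_1: the star of (1/4,1/2) in C_2 is {(1/8,3/8), (1/4,1/2), (3/8,5/8)}, and no member of C_1 contains all three of these intervals. On the other hand C_3≺ C_1: the union of the star of each member of C_3 is an interval of length at most 1/4, and one checks directly that each such union lies inside one of [0,1/2), (1/4,3/4) or (1/2,1].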
An ω-poset ℙ is regular precisely when every band or cap is star-refined by another band or cap, i.e. 𝖡ℙ⊆𝖡ℙ^≺ or, equivalently, 𝖢ℙ⊆𝖢ℙ^≺.
As star-refinement is stronger than ⊲-refinement, 𝖢ℙ⊆𝖢ℙ^≺ implies that ℙ is regular. Conversely, say ℙ is regular and take D∈𝖢ℙ. By regularity and (<ref>), we have B∈𝖡ℙ with B⊲ D, i.e. for each b∈ B, we have C_b∈𝖢ℙ and d_b∈ D with C_bb≤ d_b. As B is finite and ℙ is an ω-poset, we have A∈𝖡ℙ with A≤ B and A≤ C_b, for all b∈ B. For every p∈ A, we then have b∈ B with p≤ b and hence Ap≤ C_bp⊆ C_bb≤ d_b, showing that A≺ D.
In regular ω-posets, the spectrum consists of round filters. In fact, it suffices to consider L⊆ℙ that are merely linked in that p∧ q, for all p,q∈ L.
Every round linked selector is minimal. If ℙ is an ω-poset,
Every minimal selector is round ⇔ ℙ is regular.
If S is round then, for any s∈ S, we have t∈ S and C∈𝖢ℙ with t⊲_Cs. If S is also linked then S∩ C⊆ Ct so this implies S∩ C≤ s and hence (C∖ S)∪{s} is a cap, as it is refined by the cap C. As S∩((C∖ S)∪{s})={s}, if S is also a selector then this shows that it must be minimal.
Now if ℙ is not regular then we have C∈𝖢ℙ∖𝖢ℙ^⊲. This means that C^⊳ does not contain any cap, i.e. ℙ∖ C^⊳ is selector and hence contains some minimal selector S, by <ref>. In particular, we have some c∈ C∩ S and hence s⊲ c, for all s∈ S, showing that S is not round.
Conversely, say ℙ is a regular ω-poset and take a minimal selector S. For any s∈ S, minimality yields C∈𝖢ℙ such that S∩ C={s}. As ℙ is regular, we have D∈𝖢ℙ with D≺ C. As S is a selector, we have d∈ D∩ S. Taking any c∈ C with d⊲_Dc and hence d≤ c, it follows c∈ S so s=c⊳ d, showing that S is round.
Regularity thus means that the spectrum is Hausdorff/regular/metrisable.
If ℙ is an ω-poset then
ℙ is regular ⇒ 𝖲ℙ is Hausdorff.
The converse also holds as long as ℙ is prime.
If ℙ is a regular ω-poset then, whenever S∈ p^∈, <ref> yields q∈ S with q⊲ p so S∈ q^∈ and cl(q^∈)⊆ p^∈, by (<ref>). This shows that 𝖲ℙ is a regular space and, in particular, Hausdorff.
Conversely, if ℙ is a prime ω-poset that is not regular then, by <ref>, we have some non-round S∈𝖲ℙ, i.e. we have c∈ S∖ S^⊲ so cl(s^∈)⊈ c^∈, for all s∈ S, by <ref>. This means that S has no closed neighbourhood contained in c^∈, showing 𝖲ℙ is not a regular space. This, in turn, means that 𝖲ℙ is not even Hausdorff, as we already know that 𝖲ℙ is compact, by <ref>.
We can now also characterise minimal selectors in regular ω-posets as follows. In particular, in this case the spectrum consists precisely of maximal round filters, just like those considered in compingent lattices in <cit.> and <cit.>.
If ℙ is a regular ω-poset then
𝖲ℙ ={S⊆ℙ:S is a round linked selector }
={S⊆ℙ:S is a round filter selector }
={S⊆ℙ:S is a maximal round filter}.
By <ref>, every round linked selector is minimal and every minimal selector is round. By <ref>, every minimal selector is also a filter and, in particular, linked. This proves the first two equalities.
For the last, first note that any round filter R containing a selector S must again be a selector and hence a minimal selector, by what we just proved, which implies R=S. This shows that round filter selectors are always maximal among round filters. Conversely, say M is a maximal round filter. If M were not a selector then it would be finite and not contain any atoms of ℙ, by <ref>. As M is a filter, finiteness implies it has a minimum m, but then M=m^≥ would not be maximal, as m is not an atom, a contradiction. Thus M is a selector.
When the space X in <ref> and <ref> is Hausdorff, minor modifications of the proofs allow us to construct the cap-bases so that strict containment implies closed containment, i.e.
p⫋ q ⇒ cl(p)⊆ q.
In terms of the resulting poset, this means <⊆⊲ (and we can likewise modify the proof of <ref> below when ℙ is regular to ensure <⊆⊲ on the subposet ℚ). When <⊆⊲, our spectrum consists precisely of the ultrafilters, i.e. the maximal filters in ℙ. This ultrafilter spectrum is just like that considered for Boolean algebras in the classic Stone duality (see <cit.>) and has also been considered for general posets more recently in <cit.>.
If ℙ is a regular ω-poset with <⊆⊲ then
𝖲ℙ={U⊆ℙ:U is an ultrafilter}.
Assume ℙ is an ω-poset with <⊆⊲. Take an ultrafilter U⊆ℙ. If U has no minimum then it is round because <⊆⊲. If U has a minimum m then this must be an atom, by maximality, in which case m⊲ m so U is again round. So all ultrafilters are round and hence these are precisely the maximal round filters. The result now follows immediately from <ref>.
However, for graded posets, this only happens when ⊲ is reflexive. In this case the spectrum has to be totally disconnected and so this never happens for the continua we are primarily interested in.
If ℙ is a graded ω-poset with <⊆⊲ then ⊲ is reflexive.
Assume ℙ is an ω-poset with <⊆⊲ and take any p∈ℙ. If p is an atom then, in particular, p⊲ p. If p is not an atom then F=p^≥∩ℙ_𝗋(p)+1 is a finite set with p^>=⋃_f∈ Ff^≥. As <⊆⊲, we then have C∈𝖢ℙ with f⊲_Cp, for all f∈ F. Take any q∈ Cp, so q∈ C and we have r≤ p,q. If r=p then q=p because p=r<q would imply q≥ f and, in particular, q∈ Cf≤ p, for any f∈ F, a contradiction. On the other hand, if r<p then r≤ f, for some f∈ F, which implies q∈ Cf≤ p. In either case, q≤ p, showing that Cp≤ p, i.e. p⊲_Cp. As p was arbitrary, this proves that ⊲ is reflexive.
Regularity also yields the following characterisations of prime subsets.
Consider the following statements about some S⊆ℙ.
* S is prime.
* S is star-prime and round.
* S is a round up-set whose atoms are all already atoms in ℙ.
If ℙ is an ω-poset then <ref>⇒<ref>⇒<ref>. If ℙ is also regular then <ref>⇒<ref> as well.
<ref>⇒<ref> If S is round then, for any p∈ S, we have q∈ S and C∈𝖢ℙ with q⊲_Cp. For any t≥ p, this means D=(C∖ Cq)∪{t} is refined by C and is thus also a cap with Dq={t}. If S is also star-prime then t∈ S, showing that S is an up-set. Moreover, if p is not an atom in ℙ then we can choose B∈𝖢ℙ refining C with p∉ B (e.g. take t<p and B=ℙ_n for some n≥𝗋(t) with ℙ_n≤ C). As S is star-prime, we then have r∈ S∩ Bq≤ Cq≤ p. Thus S∋ r<p, showing that p is not an atom in S either.
<ref>⇒<ref> Take any p∈ S. If we have some atom a of ℙ with a⊲ p and hence a≤ p then a^≤ is a minimal selector containing p. Otherwise, assuming S is round and has no extra atoms, we can recursively define a sequence of distinct p_n∈ S with p=p_0 and p_n⊳ p_n+1, for all n∈ω. As long as S is also an up-set, the upwards closure U=⋃_n∈ωp_n^≤ is then a round linked selector. In particular, U is a minimal selector containing p, by <ref>. So S is a union of minimal selectors and thus prime, by <ref>.
<ref>⇒<ref> Now if ℙ is regular then every S∈𝖲ℙ is round and linked, by <ref>. Thus, for every s∈ S and C∈𝖢ℙ, ∅≠ S∩ C⊆ S∩ Cs, i.e. S⋒ Cs. So every minimal selector is round and star-prime and hence the same applies to any union of minimal selectors. By <ref>, these are precisely the prime subsets.
As ℙ itself is always star-prime, the above result implies that, in particular, any round ω-poset is prime and, conversely, any regular prime ω-poset is round. Also, if ⊲ ⊆ < (e.g. if ℙ is a cap-basis of a continuum) then no round subset can contain any atoms, making the last condition in <ref> superfluous, i.e. in this case every round up-set is prime (and conversely if ℙ is also regular).
Lastly, we note linked selectors can be made round by taking the star-up-closure.
If ℙ is regular and S⊆ℙ is a linked selector then S^⊲∈𝖲ℙ.
First note S^⊲ is linked, by (<ref>). To see that S^⊲ is a selector, take any C∈𝖢ℙ. As ℙ is regular, we have B∈𝖢ℙ with B⊲ C. As S is a selector, we have b∈ B∩ S. Then we have c∈ C∩ b^⊲⊆ C∩ S^⊲, as required. To see that S^⊲ is round, take any t∈ S^⊲, so we have s∈ S and C∈𝖢ℙ with s⊲_Ct. As ℙ is regular, we have B∈𝖢ℙ with B⊲ C. As S^⊲ is a selector, we have b∈ B∩ S^⊲. Then we have c∈ C∩ b^⊲⊆ C∩ S^⊲⊲⊆ Cs, again by (<ref>), so b⊲ c≤ t, showing that S^⊲ is indeed round. Thus S^⊲ is a minimal selector, by <ref>.
This gives us the following variant of <ref>, showing that <ref> is one instance of a more general phenomenon where the spectrum of a regular ω-basis is a quotient of the original compactum.
If ℙ⊆𝖯X∖{∅} is a regular ω-basis of a 𝖳_1 space X then
η(x)=x^∈⊲={p⊳ q:x∈ q}
defines a continuous map η:X→𝖲ℙ. If X is compact then η is also a closed surjective map. In this case, η is also injective precisely when ℙ is a cap-basis.
Take any x∈ X and first note x^∈ is linked, as ℙ is a basis. Also any C∈𝖢ℙ covers X, by <ref>,
and hence overlaps x^∈, showing that x^∈ is also a selector. By <ref>, x^∈⊲∈𝖲ℙ, showing that η maps X to 𝖲ℙ. Continuity is then immediate from the fact η^-1[p^∈]=⋃ p^⊳, for all p∈ℙ.
Now assume X is compact. First we claim that, for all p,q∈ℙ,
p⊲ q ⇒ cl(p)⊆ q.
To see this, just note again that any C∈𝖢ℙ covers X, by <ref>, and hence p⊲_Cq implies cl(p)⊆⋃ Cp⊆ q, as ℙ is a basis. By <ref>, any S∈𝖲ℙ is round and so this means ⋂ S=⋂_s∈ Scl(s)≠∅, as X is compact. Taking any x∈⋂ S, it follows that S⊆ x^∈ and hence S⊆ S^⊲⊆ x^∈⊲. Thus S=x^∈⊲, as x^∈⊲ is a minimal selector, showing that η is surjective.
Similarly, we can show that η is a closed map. To see this, take any closed Y⊆ X and any S∈cl(η[Y]). By compactness, ∅=Y∩⋂ S(=Y∩⋂_s∈ Scl(s)) would imply that Y∩⋂ F=∅, for some finite F⊆ S. As S∈cl(η[Y])∩⋂_f∈ Ff^∈, we would then have y∈ Y with η(y)∈⋂_f∈ Ff^∈. But this means F⊆ y^∈⊲⊆ y^∈ and hence y∈ Y∩⋂ F=∅, a contradiction. Thus we must have some y∈ Y∩⋂ S so S⊆ S^⊲⊆ y^∈⊲ and hence S=y^∈⊲∈η[Y], showing that η[Y] is closed.
If ℙ is a cap-basis then x^∈ is already a minimal selector so x^∈⊲=x^∈, for any x∈ X, and hence η is injective, by <ref>. Conversely, if ℙ is not a cap-basis then X has a cover C⊆ℙ which is not a cap. Thus ℙ∖ C is a selector and hence contains a minimal selector S, again with ⋂ S≠∅, by compactness. If we had ⋂ S={x}, for some x∈ X, then x would lie in some c∈ C. But then ⋂ S∖ c=∅ so compactness would yield finite F⊆ S with ⋂ F∖ c=∅. As ℙ is a basis, we would then have s∈ S with x∈ s⊆⋂ F and hence s∖ c=∅, meaning s⊆ c and hence c∈ s^≤⊆ S, contradicting S⊆ℙ∖ C. Thus ⋂ S contains at least two distinct x,y∈ X, necessarily with S⊆ x^∈∩ y^∈ and hence S=η(x)=η(y), showing that η is not injective.
§.§ Subcontinua
Next we examine connected subsets of the spectrum.
While the subset p^∈ coming from a single p∈ℙ may not be connected, subsets of ℙ can still form analogous `clusters'. First let us extend ∧ and its negation ⊥ (i.e. p⊥ q means p^≥∩ q^≥=∅) to subsets A,B⊆ℙ by defining
A∧ B ⇔ ∃ a∈ A ∃ b∈ B (a∧b).
A⊥ B ⇔ ∀ a∈ A ∀ b∈ B (a⊥ b).
We call C⊆ℙ a cluster if
ClusterA≠∅≠ B and A∪ B=C ⇒ A∧ B.
In other words, C is a cluster if it is connected as a subset of the graph with edge relation ∧. So C fails to be a cluster precisely when C has a discrete partition {A,B}, meaning A≠∅≠ B, A⊥ B, A∩ B=∅ and A∪ B=C.
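For instance, in the arc basis ℙ from the earlier example, {[0,1/4), (1/8,3/8), (1/4,1/2)}⊆ C_2 is a cluster, as consecutive intervals overlap and so are ∧-related, whereas {[0,1/4), (3/8,5/8)} is not, its two singletons forming a discrete partition.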
The first thing to observe is that clusters are `upwards closed'. For convenience, here and below we let ⊏_D=⊏|_D=⊏∩(ℙ× D), for any ⊏⊆ℙ×ℙ and D⊆ℙ.
If C⊆ℙ is a cluster and C≤ D then C^≤_D is also a cluster.
Say C^≤_D=A∪ B, where A≠∅≠ B and hence A^≥_C≠∅≠ B^≥_C. If C≤ D then C=A^≥_C∪ B^≥_C. If C is also a cluster then we must have c∈ A^≥_C and d∈ B^≥_C with c∧ d. This means we have a∈ A and b∈ B with a≥ c∧ d≤ b and hence a∧ b, showing that C^≤_D is also a cluster.
Connected subsets of the spectrum yield clusters in all caps.
If ℙ is an ω-poset and X⊆𝖲ℙ is connected then C∩⋃ X is a cluster, for every cap C∈𝖢ℙ.
If C∩⋃ X were not a cluster, for some C∈𝖢ℙ, then it would have a discrete partition {A,B}. As every minimal selector in an ω-poset is a filter, this means Y=⋃_a∈ Aa^∈ and Z=⋃_b∈ Bb^∈ are disjoint non-empty (as p^∈≠∅, for all p∈⋃ X) open subsets covering X, contradicting its connectedness.
Recall from <ref> that any Q⊆ℙ defines a closed subset of the spectrum Q^⊇={S∈𝖲ℙ:S⊆ Q}. As a converse to the above, we can show that if Q∩ C is a cluster, even just coinitially often, then Q^⊇ is connected.
If ℙ is a regular prime ω-poset and Q⊆ℙ is an up-set selector,
{C∈𝖢ℙ:Q∩ C is a cluster} is coinitial in 𝖢ℙ ⇒ Q^⊇ is connected.
If ℙ is a regular ω-poset then 𝖲ℙ is Hausdorff, by <ref>. Thus if Q^⊇ were not connected then we would have A,B⊆ℙ such that the corresponding open sets O=⋃_a∈ Aa^∈ and N=⋃_b∈ Bb^∈ form a disjoint minimal cover of Q^⊇. Assuming ℙ is also prime (and hence a∧ b implies a^∈⋒ b^∈), this means A⊥ B.
If Q is an up-set selector and {C∈𝖢ℙ:Q∩ C is a cluster} is coinitial in 𝖢ℙ, we claim that Q∖(A∪ B) is still a selector. Indeed, this means that any D∈𝖢ℙ is refined by some C∈𝖢ℙ such that Q∩ C is cluster. Then D'=(Q∩ C)^≤_D⊆ Q∩ D is also a cluster so Q∩ D⊆ A∪ B would imply that D' is contained in either A or B. Assume D'⊆ A. Take S∈ N∩ Q^⊇, so we have some b∈ B∩ S. As S is a selector, S⋒ C and hence we also have a∈(S∩ C)^≤_D⊆ D'⊆ A. But then a∧ b, as a,b∈ S, a contradiction. Likewise, we get a contradiction if D'⊆ B, so the only possibility is that, in fact, Q∩ D⊈ A∪ B. As D was an arbitrary cap, this shows that Q∖(A∪ B) is still a selector and hence contains some minimal selector T. But then T∈ Q^⊇∖(O∪ N), again a contradiction. Thus Q^⊇ is connected.
In particular, <ref> and <ref> tell us that if ℙ is a regular prime ω-poset, X⊆𝖲ℙ is closed and 𝒞={C∈𝖢ℙ:⋃ X∩ C is a cluster} then
X is connected ⇔ 𝒞=𝖢ℙ ⇔ 𝒞 is coinitial in 𝖢ℙ.
Hereditarily indecomposable spaces have been a topic of much interest in continuum theory since the discovery of the pseudoarc (see <cit.> and <cit.>). Here we will show how to characterise them in terms of certain `tangled' refinements. These are more in the original spirit of Bing's crooked refinements, in contrast to the crooked covers introduced by Krasinkiewicz and Minc to characterise hereditary indecomposability (see <cit.> and <cit.>).
First recall that a continuum is a connected Hausdorff compactum. A continuum is indecomposable if it is not the union of any two proper subcontinua. A topological space is hereditarily indecomposable if every subcontinuum is indecomposable. As long as the space is Hausdorff, this is equivalent to saying that any two subcontinua that overlap are comparable, i.e. one is contained in the other.
This motivates the definition of a `tangled' refinement. Specifically, we call a refinement A⊆ℙ of B⊆ℙ tangled if, for all clusters C,D⊆ A,
C∧ D ⇒ C⊆ D^≤_B≥ or D⊆ C^≤_B≥.
More explicitly, C⊆ D^≤_B≥ means that every c∈ C shares an upper bound in B with some d∈ D, while D⊆ C^≤_B≥ means that every d∈ D shares an upper bound in B with some c∈ C. We denote tangled refinements by ↬, i.e.
A↬ B ⇔ A is a tangled refinement of B.
In particular, A↬ B implies A≤ B.
For any A,A',B,B'⊆ℙ,
A≤ A'↬ B'≤ B ⇒ A↬ B.
Take clusters C,D⊆ A so C^≤_A' and D^≤_A' are then also clusters. If C∧ D then C^≤_A'∧ D^≤_A' and hence, by the definition of ↬, either
C^≤_A'⊆ D^≤_A'≤_B'≥⊆ D^≤≤_B'≥⊆ D^≤_B'≥
or the same with C and D swapped. Without loss of generality, we may assume (<ref>). Then A≤ A' and B'≤ B implies
C⊆ C^≤_A'≥⊆ D^≤_B'≥≥⊆ D^≤_B'≥⊆ D^≤_B'≤_B≥≥⊆ D^≤_B≥.
This shows that A↬ B.
Let us call P∈𝖥ℙ a path if P is a path graph with respect to the relation ∧, which means we have an enumeration {p_1,…,p_n} of P such that
p_j∧ p_k ⇔ |j-k|≤1.
For paths, tangled refinements can be characterised in a similar manner to the crooked refinements from <cit.> used to construct the pseudoarc, as we now show.
Note that any cluster in a path P is also a path and each pair q,r∈ P is contained in a unique minimal cluster/subpath, which we will denote by [q,r]. We further define [q,r)=[q,r]∖{r}, (q,r]=[q,r]∖{q} and (q,r)=[q,r]∖{q,r}.
If P,Q⊆ℙ are paths with P≤ Q then
P↬ Q ⇔ ∀ a,d∈ P ∃ b∈[a,d] ∃ c∈[b,d] ∃ q,r∈ Q (a,c≤ q & b,d≤ r).
⇔ ∀ a,d∈ P ∃ b∈[a,d] ∃ c∈[b,d] ([a,d]⊆[a,b]^≤_Q≥∩[c,d]^≤_Q≥).
Assume the right side of (<ref>) holds. To show that (<ref>) holds, take any a,d∈ P. Then we have g,h∈ Q with [a,d]^≤=[g,h] and we may pick a',d'∈[a,d] with a'≤ g and d'≤ h. If p^≤_Q={g}, for some p∈[a,d] then we may further ensure that a'^≤_Q={g} and, likewise, if p^≤_Q={h}, for some p∈[a,d] then we may further ensure that d'^≤_Q={h}. By (<ref>), we have b∈[a',d'] and c∈[b,d'] with a',c≤ q and b,d'≤ r, for some q,r∈ Q. If a'^≤_Q={g} then q=g and hence [c,d']^≤_Q≥=[g,h]^≥⊇[a,d]. On the other hand, if a'^≤_Q≠{g} then p^≤_Q≠{g} and hence p^≤⋒(g,h], for all p∈[a,d], which again yields [c,d']^≤_Q≥⊇(g,h]^≥⊇[a,d]. Likewise, we see that [a',b]^≤_Q≥⊇[a,d]. Expanding [a',b] and [c,d'] to include a and d then shows that (<ref>) holds.
Now assume (<ref>) holds and take any clusters/subpaths A,D⊆ P with A∧ D. This means A∪ D=[a,d], for some a,d∈ P, so we have b∈[a,d], c∈[b,d] satisfying (<ref>). It follows that A or D must contain [a,b] or [c,d] and hence that D⊆ A^≤_Q≥ or A⊆ D^≤_Q≥, e.g. if [a,b]⊆ A then D⊆[a,d]⊆[a,b]^≤_Q≥⊆ A^≤_Q≥. This shows that P↬ Q.
Finally, assume that P↬ Q and take any a,d∈ P. Then we have b∈[a,d] such that b shares an upper bound in Q with d but no element of [a,b) does. As P↬ Q, this implies [a,b)⊆[b,d]^≤_Q≥ and, in particular, we have some c∈[b,d] sharing an upper bound in Q with a (because a∈[a,b), as long as a≠ b, while if a=b then we can just take c=a too). This shows that the right side of (<ref>) holds.
In the next result it will be convenient to consider a slightly weakening of ↬. Specifically, let us call a refinement A⊆ℙ of B⊆ℙ weakly tangled, denoted A↬_𝗐B, if, for all clusters C,D⊆ A,
C⋒ D ⇒ C⊆ D^≤_B≥ or D⊆ C^≤_B≥.
We call ℙ (weakly) tangled if every cap has a (weakly) tangled refinement, i.e.
(Weakly) TangledC∈𝖢ℙ ⇒ ∃ D∈𝖢ℙ (D↬_(𝗐)C).
We immediately see that A⊲ B↬_𝗐C implies A↬ C so
ℙ is regular and weakly tangled ⇒ ℙ is tangled.
The following result thus tells us that, among prime regular ω-posets, those with hereditarily indecomposable spectra are precisely the tangled posets.
If ℙ is an ω-poset then
ℙ is weakly tangled ⇒ 𝖲ℙ is hereditarily indecomposable.
The converse holds if ℙ is also regular and prime.
Assume 𝖲ℙ is not hereditarily indecomposable, so we have overlapping incomparable subcontinua Y,Z⊆𝖲ℙ. So we can take S∈ Y∖ Z and T∈ Z∖ Y and obtain a minimal open cover of 𝖲ℙ consisting of the sets 𝖲ℙ∖ Y, 𝖲ℙ∖ Z and 𝖲ℙ∖{S,T}. This is refined by some basic cover (c^∈)_c∈ C, necessarily with C∈𝖢ℙ, by <ref>. Now take D∈𝖢ℙ with D≤ C. By <ref>, we have clusters A=D∩⋃ Y and B=D∩⋃ Z, necessarily with A⋒ B, as Y⋒ Z. We also have a∈ D∩ S⊆ A and b∈ D∩ T⊆ B. Taking any c∈ C with a≤ c, we see that S∈ a^∈⊆ c^∈ and so c^∈⊈𝖲ℙ∖ Y and c^∈⊈𝖲ℙ∖{S,T}, the only remaining option then being c^∈⊆𝖲ℙ∖ Z.
But whenever B∋ b'≤ c'∈ C, we see that b'∈⋃ Z, so we have U∈ Z with b'∈ U and hence U∈ b'^∈⊆ c'^∈. Thus implies c'^∈⊈𝖲ℙ∖ Z and hence c'≠ c. Likewise, we see that b has no common upper bound in C with any element of A. This shows that D is not a weakly tangled refinement of C. As D was arbitrary, this shows that ℙ is not a weakly tangled poset, thus proving ⇒.
Conversely, assume ℙ is regular and prime but not weakly tangled, so we have C∈𝖡ℙ such that D↬̸_𝗐 C, for all D∈𝖢ℙ. Take a coinital decreasing sequence (C_n)⊆𝖡ℙ with C_0≤ C. As C_n↬̸_𝗐C, we have overlapping clusters A_n,B_n⊆ C_n such that A_n⊈ B_n^≤_C≥ and B_n⊈ A_n^≤_C≥. Taking a subsequence if necessary, we can obtain clusters A,B⊆ C with A_n^≤_C=A and B_n^≤_C=B, for all n∈ω. Taking further subsequences if necessary, we may assume we have clusters D_m,E_m⊆ C_m with A_n^≤_C_m=D_m and B_n^≤_C_m=E_m whenever m<n. Note D_n⊆ B^≥ would imply
A_n+1⊆ A_n+1^≤_C_n≥=D_n^≥⊆ B^≥≥⊆ B^≥=B_n+1^≤_C≥,
a contradiction. Thus D_n⊈ B^≥ and, likewise, E_n⊈ A^≥, for all n∈ω.
Now note that Q=⋂_m∈ω⋃_n>mA_n^≤ is an up-set such that Q∩ C=A_n^≤_C=A and Q∩ C_n=A_n+1^≤_C_n=D_n, for all n∈ω. It follows that Q^⊇ is connected, by <ref>. Likewise, we have an up-set R=⋂_m∈ω⋃_n>mB_n^≤ such that R∩ C=B and R^⊇ is connected. Also Q'=⋃_n∈ω(D_n∖ B^≥)^≤⊆ Q∖ B is a selector and hence contains a minimal selector in Q^⊇∖ R^⊇, seeing as R∩ C=B. Likewise, R'=⋃_n∈ω(E_n∖ A^≥)^≤⊆ R∖ A contains a minimal selector in R^⊇∖ Q^⊇. Lastly note that ∅≠ D_n∩ E_n⊆ Q∩ R, for all n∈ω, so Q∩ R also contains a minimal selector in Q^⊇∩ R^⊇. Thus Q^⊇ and R^⊇ are incomparable overlapping subcontinua and hence 𝖲ℙ is not hereditarily indecomposable.
§ FUNCTORIALITY
Here we examine order theoretic analogs of continuous maps, using these to obtain a more combinatorial equivalent of the usual category of metrisable compacta.
Throughout this section, fix some posets ℙ, ℚ, ℝ and 𝕊.
For extra clarity, we will sometimes use subscripts to indicate which poset we are referring to, e.g. ≤_ℙ and ≤_ℚ refer to order relations on ℙ and ℚ respectively.
§.§ Continuous Maps
We call ⊐ ⊆ℚ×ℙ a refiner if
Refiner𝖢ℚ⊆𝖢ℙ^⊏,
i.e. if each cap of ℚ is refined by some cap of ℙ.
For example, in this terminology a poset is regular precisely when the star-above relation ⊳ ⊆ℙ×ℙ is a refiner.
We can use refiners to encode continuous maps as follows.
If ℙ is an ω-poset and ϕ:𝖲ℙ→𝖲ℚ is continuous then
q⊐_ϕ p ⇔ ϕ^-1[q^∈]⊇ p^∈
defines a refiner ⊐_ϕ ⊆ℚ×ℙ
such that S^⊏_ϕ = ϕ(S) for every S ∈𝖲ℙ.
Any C∈𝖢ℚ defines a cover C_𝖲 of 𝖲ℚ, which in turn yields a cover (ϕ^-1[c^∈])_c∈ C of 𝖲ℙ. If ℙ is an ω-poset then ℙ_𝖲 is a basis for 𝖲ℙ, by <ref>, so we have B⊆ℙ such that B_𝖲 refines (ϕ^-1[c^∈])_c∈ C, with respect to inclusion, and hence B refines C, with respect to ⊏_ϕ=⊐_ϕ^-1, i.e. B⊏_ϕ C. By <ref>, B is a cap of ℙ, so this shows that ⊐_ϕ is indeed a refiner.
If q ∈ S^⊏_ϕ then there is some p ∈ S with ϕ^-1[q^∈] ⊇ p^∈∋ S so ϕ(S) ∈ q^∈ and hence q ∈ϕ(S).
On the other hand, if q ∈ϕ(S), i.e. ϕ(S) ∈ q^∈, then by continuity there is some p ∈ S such that ϕ^-1[q^∈] ⊇ p^∈.
Hence q ⊐_ϕ p ∈ S so q ∈ S^⊏_ϕ.
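For example, let ℙ and ℚ be the arc and circle cap-bases from the earlier examples, identify 𝖲ℙ with the arc X and 𝖲ℚ with the circle Y as there, and let ϕ be the map induced by the quotient θ|_{[0,1]}: X→ Y identifying the end-points. Then q⊐_ϕ p just means θ(p)⊆ q; for instance θ((1/4,3/4))⊐_ϕ(1/4,1/2), while no member of D_1 is ⊐_ϕ-above [0,1/2), since θ([0,1/2)) contains the point θ(0).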
A relation ⊐ ⊆ℙ×ℚ is ∧-preserving if, for all p,p'∈ℙ and q,q'∈ℚ,
∧-Preservationq⊐ p∧ p'⊏ q' ⇒ q∧ q'.
As long as ℙ is prime and ℚ is an ω-poset, the refiner ⊐_ϕ defined in (<ref>) will also be ∧-preserving. Indeed, if ℙ is prime then, for any p,p'∈ℙ and s≤ p,p', we have S∈𝖲ℙ containing s. If q⊐_ϕ p and q'⊐_ϕ p' then q,q'∈ϕ(S) and hence q∧ q', assuming ℚ is an ω-poset, by <ref>.
Conversely, as long as we restrict to regular posets (and hence Hausdorff spectra), we can define continuous maps from ∧-preserving refiners.
If ℙ is an ω-poset and ℚ is a regular poset then any ∧-preserving refiner ⊐ ⊆ℚ×ℙ defines a continuous map ϕ_⊐:𝖲ℙ→𝖲ℚ by
ϕ_⊐(S)=S^⊏⊲.
If ℚ and ℝ are also regular ω-posets and ⊒ ⊆ℝ×ℚ is another ∧-preserving refiner,
ϕ_⊒∘ϕ_⊐=ϕ_⊒∘⊐.
For any S∈𝖲ℙ, we see that S^⊏ is a linked selector, as ⊐ is a ∧-preserving refiner. Thus S^⊏⊲∈𝖲ℚ, by <ref>, showing
that ϕ_⊐ maps 𝖲ℙ to 𝖲ℚ. For continuity just note that ϕ_⊐^-1[q^∈]=⋃_p⊏∘⊲ qp^∈ is open, for any q∈ℚ.
Next note that the larger subset S^⊏∪ S^⊏⊲ is still linked, because ⊏ is ∧-preserving and ∧ ∘⊲ ⊆∧. If ℚ and ℝ are also regular ω-posets and ⊒ ⊆ℝ×ℚ is another ∧-preserving refiner then it follows that S^⊏⊑ and S^⊏⊲⊑ are again selectors with linked union. Now, for any C∈𝖢ℝ, we have A,B∈𝖢ℝ with A≺ B⊲ C. We then have a∈ A∩ S^⊏⊑ and a'∈ A∩ S^⊏⊲⊑, necessarily with a∧ a'. We then also have b∈ B and c∈ C with a,a'≤ b⊲ c and hence c∈ S^⊏⊑⊲∩ S^⊏⊲⊑⊲∩ C. This shows that S^⊏⊑⊲∩ S^⊏⊲⊑⊲ is a selector and hence, by the minimality of ϕ_⊒∘⊐(S)=S^⊏⊑⊲ and ϕ_⊒∘ϕ_⊐(S)=S^⊏⊲⊑⊲, it follows that ϕ_⊒∘ϕ_⊐(S)=ϕ_⊒∘⊐(S).
Let 𝐊 denote the category of metrisable compact spaces and continuous maps, and let 𝐏 denote the category of regular prime ω-posets and ∧-preserving refiners (note that these are closed under composition and that 𝕀_ℙ is always a ∧-preserving refiner). We already have a map 𝖲 from objects ℙ∈𝐏 to 𝖲ℙ∈𝐊 and we extend this to morphisms ⊐∈𝐏^ℚ_ℙ(= refiners in ℚ×ℙ) by setting 𝖲(⊐)=ϕ_⊐∈𝐊_𝖲ℙ^𝖲ℚ(= continuous maps from 𝖲ℙ to 𝖲ℚ – in general, for any objects A and B of a category 𝐂, we denote the corresponding hom-set by 𝐂_A^B={m:m is a morphism from A to B}).
The previous results can thus be summarised as follows.
The map 𝖲:𝐏→𝐊 is an essentially surjective full functor.
For every S ∈𝖲ℙ we have ϕ_𝕀_ℙ(S) = S^⊲ = S since every minimal selector in a regular ω-poset is round, by <ref>, and so 𝖲(𝕀_ℙ) = 𝕀_𝖲ℙ for every ℙ∈𝐏.
Together with (<ref>), this shows that 𝖲 is a functor.
Moreover, 𝖲 is essentially surjective because every metrisable compactum X is homeomorphic to 𝖲ℙ for some cap-determined ω-poset ℙ, by <ref>, which is necessarily prime, by (<ref>), and regular, by <ref>.
The functor is full by <ref> since, for every pair of prime regular ω-posets ℙ and ℚ and every continuous map ϕ:𝖲ℙ→𝖲ℚ, we have ϕ = 𝖲(⊐_ϕ) (because ϕ_⊐_ϕ(S)=S^⊏_ϕ⊲=S^⊏_ϕ, as S^⊏_ϕ is already round, by <ref> and <ref>).
The refiner ⊐_ϕ is ∧-preserving since our ω-posets are prime.
We could turn the above result into an equivalence of categories by simply identifying ⊐,⊒∈𝐏^ℚ_ℙ whenever ϕ_⊐=ϕ_⊒. Then 𝖲 factors as 𝖤∘𝖰, where 𝖤 is an equivalence and 𝖰 is the quotient functor. However, what we would really like is a more combinatorial formulation of the quotient category. We will achieve this in <ref> via a certain category 𝐒 with the same objects as 𝐏 but more restrictive `strong refiners' as morphisms under a modified `star-composition'.
With (<ref>) in mind, one might expect that q⊐ p is equivalent to ϕ_⊐^-1[q^∈]⊇ p^∈. However, both implications may fail, even for ∧-preserving refiners on regular ω-posets. The best we can do at this stage is show two weaker relations are equivalent.
Whenever ⊐∈𝐏^ℚ_ℙ,
q^∧⊇ p^∧⊏⊲ ⇔ cl(q^∈)⊇ϕ_⊐[p^∈].
If q^∧⊇ p^∧⊏⊲ then, for any S∈ p^∈,
ϕ_⊐(S)=S^⊏⊲⊆ p^∧⊏⊲⊆ q^∧=⋃ q^∈,
by (<ref>), and hence ϕ_⊐(S)∈cl(q^∈), by (<ref>). This proves the ⇒ part.
Conversely, say q^∧⊉ p^∧⊏⊲ so we have r∈ p^∧⊏⊲∖ q^∧. Then we have s∈ p^∧∩ r^⊐⊳ and S∈ p^∈∩ s^∈, as ℙ is prime, necessarily with r∈ s^⊏⊲⊆ S^⊏⊲=ϕ_⊐(S). By (<ref>), ϕ_⊐(S)∈cl(q^∈) would then imply r∈ϕ_⊐(S)⊆⋃ q^∈=q^∧, a contradiction. Thus ϕ_⊐(S)∈ϕ_⊐[p^∈]∖cl(q^∈), proving the ⇐ part.
In particular, q⊐ p implies p^∧⊏⊲⊆ q^∧⊲⊆ q^∧ so
q⊐ p ⇒ cl(q^∈)⊇ϕ_⊐[p^∈].
By <ref>, q⊳ r⊐ p then implies q^∈⊇cl(r^∈)⊇cl(ϕ_⊐[p^∈]) and hence ϕ_⊐^-1[q^∈]⊇ϕ_⊐^-1[cl(ϕ_⊐[p^∈])]⊇cl(p^∈), i.e.
q⊳∘⊐ p ⇒ ϕ_⊐^-1[q^∈]⊇cl(p^∈).
Later we will show how to turn this into an equivalence using star-composition.
§.§ Homeomorphisms
By <ref>, isomorphisms in 𝐏 yield homeomorphisms in 𝐊. We can also obtain homeomorphisms of spectra from much more general pairs of refiners, even between non-regular posets.
Let ≿_ℙ⊆ℙ×ℙ denote the restriction of ≿⊆𝖯ℙ×𝖯ℙ to singletons, i.e.
p≿_ℙp' ⇔ {p}≿{p'}.
Likewise define ≿_ℚ⊆ℚ×ℚ and let ≾_ℙ=≿_ℙ^-1 and ≾_ℚ=≿_ℚ^-1. If these have subrelations coming from compositions of a pair of refiners between them then these refiners yield mutually inverse homeomorphisms between their spectra.
If ⊐ ⊆ℚ×ℙ and ⊒ ⊆ℙ×ℚ are refiners satisfying
⊐∘⊒ ⊆ ≿_ℚ and ⊒∘⊐ ⊆ ≿_ℙ
then S↦ S^⊏ and T↦ T^⊑ are continuous maps between 𝖲ℙ and 𝖲ℚ satisfying
S=S^⊏⊑ and T=T^⊑⊏.
First note S^⊏ is a selector in ℚ whenever S is a selector in ℙ. Indeed, for any D∈𝖢ℚ, we have C∈𝖢ℙ with C⊏ D, as ⊐ is a refiner. As S is a selector, we have c∈ S∩ C. We then have d∈ D with c⊏ d and hence d∈ S^⊏∩ D.
Likewise, any selector T in ℚ gives rise to a selector T^⊑ in ℙ, which in turn yields another selector T^⊑⊏=T^⊑∘⊏⊆ T^≾_ℚ in ℚ. If T is a minimal selector then T^≾_ℚ⊆ T, by (<ref>), and hence T^⊑⊏=T. Moreover, T^⊑ contains some minimal selector S, by <ref>. It follows that S^⊏⊆ T^⊑⊏=T which implies S^⊏=T, by minimality. This in turn implies S=S^⊏⊑=T^⊑, i.e. T^⊑ was already minimal. This shows that S↦ S^⊏ and T↦ T^⊑ are mutually inverse bijections. Lastly, note that the preimage of any subbasic open set q^∈ with respect to the map S↦ S^⊏ is given by ⋃_p⊏ qp^∈, which is again open, showing that S↦ S^⊏ is continuous. Likewise, T↦ T^⊑ is also continuous, as required.
We can also obtain a kind of converse to <ref> by noting that
⊐_ψ∘⊐_ϕ ⊆ ⊐_ψ∘ϕ,
for any ϕ:𝖲ℙ→𝖲ℚ and ψ:𝖲ℚ→𝖲ℝ, as r⊐_ψ q⊐_ϕ p means ψ^-1[r^∈]⊇ q^∈ so
(ψ∘ϕ)^-1[r^∈]=ϕ^-1[ψ^-1[r^∈]]⊇ϕ^-1[q^∈]⊇ p^∈.
In particular, if ℙ and ℚ are ω-posets and ϕ:𝖲ℙ→𝖲ℚ is a homeomorphism then ⊐_ϕ^-1∘⊐_ϕ ⊆ ⊐_id_𝖲ℙ and ⊐_ϕ∘⊐_ϕ^-1 ⊆ ⊐_id_𝖲ℚ. But q⊐_id_𝖲ℙp just means q^∈⊇ p^∈, which is equivalent to q≿ p, by (<ref>). So this shows that
⊐_ϕ^-1∘⊐_ϕ ⊆ ≿_ℙ and ⊐_ϕ∘⊐_ϕ^-1 ⊆ ≿_ℚ.
The following corollary of <ref> shows that subposets of an ω-poset containing infinitely many of its levels all have homeomorphic spectra.
If ℙ is an ω-poset, ℚ⊆ℙ and 𝖯ℚ∩𝖡ℙ is coinitial in 𝖡ℙ then S↦ S∩ℚ and T↦ T^≤ are continuous maps between S∈𝖲ℙ and T∈𝖲ℚ satisfying
(S∩ℚ)^≤=S and (T^≤∩ℚ)=T.
We claim that the caps of ℚ are precisely the caps of ℙ contained in ℚ, i.e.
𝖢ℚ=𝖢ℙ∩𝖯ℚ={C∈𝖢ℙ:C⊆ℚ}.
Indeed, if 𝖢ℙ∋ C⊆ℚ then, as ℚ contains a coinitial subset of 𝖡ℙ, we have some B∈𝖡ℙ∩𝖯ℚ⊆𝖡ℚ refining C and hence C∈𝖢ℚ. Conversely, take some B∈𝖡ℚ. For sufficiently large n∈ω, the cone ℙ^n will contain B and hence the level ℙ_n will be disjoint from B^<. As ℚ contains a coinitial subset of 𝖡ℙ, we have C∈𝖡ℙ∩𝖯ℚ⊆𝖡ℚ refining ℙ_n which is thus also disjoint from B^<. But B is a band of ℚ so this implies that C⊆ B^≥, i.e. C refines B and hence B is also a cap of ℙ. This shows that all bands of ℚ are caps of ℙ and hence the same applies to caps of ℚ as well, proving the claim.
Thus the restrictions ≥^ℙ_ℚ and ≥^ℚ_ℙ of ≥_ℙ to ℚ×ℙ and ℙ×ℚ are refiners satisfying ≥^ℙ_ℚ∘≥^ℚ_ℙ⊆≥_ℚ⊆≿_ℚ and ≥^ℚ_ℙ∘≥^ℙ_ℚ⊆≥_ℙ⊆≿_ℙ. Noting S^≤^ℚ_ℙ=S∩ℚ and T^≤^ℙ_ℚ=T^≤, for all S ∈𝖲ℙ and T ∈𝖲ℚ, the result now follows from <ref>.
We can also obtain a similar order theoretic analog of <ref>. First, we need the following order theoretic analog of <ref>. Let
ℙ_∅={p∈ℙ:p≾∅}.
Also note D≾_ℙB below is saying that D refines B, with respect to the ≾_ℙ relation on ℙ (which is stronger than just saying D≾ B, for the relation ≾ on 𝖯ℙ).
If ℙ is an ω-poset, B is a cap and C is a finite subset of ℙ∖ℙ_∅ on which ≾_ℙ is just ≤ then there is a minimal cap D≾_ℙB and minimal caps (E_d)_d∈ D with d∈ E_d such that, for all c∈ C and d∈ D,
c≾̸E_d∖{d} ⇒ d≤ c and c≾ d ⇒ c=d and c^≾∈𝖲ℙ,
For all F⊆ C, we will recursively define D_F⊆ B^≿_ℙ∩⋂_f∈ Ff^≥ such that E_F=D'_F∪(C∖ F) is a cap, where D'_F=⋃_G⊆ FD_G is minimal with this property (incidentally, D_F can be empty for many F⊆ C). In particular, D=D'_C is a minimal cap. Also, if d∈ D_F then C∋ c≾̸E_F∖{d}⊇ C∖ F implies c∈ F and hence d∈ D_F≤ c, proving (<ref>) when we take E_d=E_F.
To perform the recursive construction, first note every c∈ C is contained in a minimal cap, as C∩ℙ_∅=∅. As ℙ is an ω-poset, these have a common refinement with B in 𝖢ℙ, which then refines C∪(B^≥∖ C^≾_ℙ). So this must also be a cap and we can then let D_∅ be any minimal subset of B^≥∖ C^≾_ℙ such that C∪ D_∅ is a cap.
Once D_G has been defined, for G⫋ F, note that, for each f∈ F, we have a cap E_F∖{f}=D'_F∖{f}∪{f}∪(C∖ F). Each c∈ C∖ F is also again contained in a minimal cap. These have a common refinement with B in 𝖢ℙ, necessarily refining
E'_F=E”_F∪((B^≥∩⋂_f∈ Ff^≥)∖(C∖ F)^≾_ℙ), where E”_F=⋃_G⫋ FD_G∪(C∖ F).
Thus E'_F is a cap. Now say c^≾_ℙ is a selector, for some c∈ F. If c≾ E”_F then E'_F≾ E”_F so E”_F is also a cap and we may set D_F=∅. On the other hand, if c≾̸E”_F then, in particular, c≾̸∅ so c^≾_ℙ∈𝖲ℙ and c^≾_ℙ⋒(B^≥∩⋂_f∈ Ff^≥), as E'_F is a cap. This means c∈ B^≿_ℙ∩⋂_f∈ Ff^≥, as ≾_ℙ is just ≤ on C, so we may set D_F={c}. Otherwise, f^≾_ℙ is not a selector, for all f∈ F, and hence E'_F has a common refinement in 𝖢ℙ with each complement ℙ∖ f^≾_ℙ which, in turn, must refine E”_F∪(B^≥∩⋂_f∈ Ff^≥∖ C^≾_ℙ). So this last set is a cap and we may let D_F be a minimal subset of (B^≥∩⋂_f∈ Ff^≥)∖ C^≾_ℙ such that E”_F∪ D_F is a cap. As D_F⊆⋂_f∈ Ff^≥ this implies that D'_F=⋃_G⊆ FD_G is minimal such that D'_F∪(C∖ F) is a cap – otherwise we would have d∈ D_G, for some G⫋ F, such that (D'_F∖{d})∪(C∖ F) is a cap refining D'_G∖{d}∪(C∖ G), contradicting the minimality of D'_G.
Above, ≾_ℙ is again just ≤ on C∪ D. Indeed, for any c∈ C and d∈ D, d≾_ℙc implies c≾̸E_d∖{d} (otherwise d≾ E_d∖{d}, contradicting the minimality of E_d) and hence d≤ c. On the other hand c≾_ℙd implies c=d and, in particular, c≤ d.
This yields the following order theoretic analog of <ref>. Essentially it says that, given any ω-poset ℙ, we can always revert to a branching predetermined ω-subposet ℚ without significantly affecting caps or the spectrum (although it is worth noting that, even if ℙ is graded, there is no guarantee ℚ will be graded too).
Every ω-poset ℙ contains a predetermined branching ω-poset ℚ with 𝖢ℚ=𝖢ℙ∩𝖯ℚ such that 𝖲ℙ is homeomorphic to 𝖲ℚ via the maps
S↦ S∩ℚ and T↦ T^≾_ℙ.
Recursively define minimal caps (D_n)_n∈ω and (E_d^n)_d∈ D_n^n∈ω of ℙ as follows. First let D_0 be any minimal cap and set E_0^d=D_0∖{d}, for all d∈ D_0. Once D_k has been defined, use the lemma above to define D_k+1≾_ℙB_k and (E_d^k)_d∈ D_k+1 satisfying (<ref>), where we take C=C_k=⋃_j≤ kD_j and B=ℙ_k. Note that then D_k+1 refines D_k – for any d∈ D_k+1, E_d^k∖{d} is not a cap and so we must have some c∈ D_k(⊆ C_k) with c≾̸E_d^k∖{d} and hence d≤ c, by the first part of (<ref>). As D_k is a minimal cap, it must then also corefine D_k+1. Moreover, as noted above, ≾_ℙ is just ≤ on ℚ=⋃_n∈ωD_n. This and the second part of (<ref>) imply that D_k+1∩ D_k consists only of atoms of ℚ. Thus ℚ is an ω-poset with levels ℚ_n=D_n, for all n∈ω, by <ref>.
By <ref>, every C∈𝖢ℚ is refined by some ℚ_n=D_n∈𝖢ℙ, implying that C∈𝖢ℙ. Conversely, if C∈𝖢ℙ∩𝖯ℚ then, again by <ref>, it is refined by some ℙ_n and hence D_n≾_ℙC. As ≾_ℙ is ≤ on ℚ, it follows that D_n≤ C and hence C∈𝖢ℚ. This shows that 𝖢ℚ=𝖢ℙ∩𝖯ℚ.
It then follows that id_ℚ⊆ℚ×ℚ⊆ℚ×ℙ is a refiner. As ℚ_n≾_ℙℙ_n, for all n∈ω, we have another refiner ⊐=≿_ℙ∩ℙ×ℚ. Moreover, id_ℚ∘⊐⊆≥_ℚ⊆≿_ℚ and ⊐∘id_ℚ=⊐⊆≿_ℙ so <ref> yields mutually inverse homeomorphisms S↦ S^id_ℚ=S∩ Q and T↦ T^⊏=T^≾_ℙ between 𝖲ℙ and 𝖲ℚ.
In particular, (q^∈)_q∈ℚ is a basis of 𝖲ℙ, one which is order isomorphic to ℚ, as ≾_ℙ is just ≤ on ℚ. Thus ℚ is branching, by <ref>. To see that ℚ is also predetermined, say d∈ℚ_n is not an atom in ℚ. Take any q∈ℚ_n+1 such that q≾̸E_n^d∖{d}. Note q≤ c, for some c∈ℚ_n, necessarily with c≾̸E_n^d∖{d} and hence d≤ c, which then implies c=d, as ℚ_n is an antichain. Thus q<d because d is not an atom in ℚ. Likewise, if q<c, for some c∈ℚ_k, necessarily with k≥ n, then d≤ c so q^<=d^≤, showing that ℚ is indeed predetermined.
In order to prove that spectra of regular ω-posets are homeomorphic we can also use a back-and-forth argument analogous to <ref>.
If we have regular ω-posets ℙ and ℚ with coinitial sequences (C_n)⊆𝖢ℙ and (D_n)⊆𝖢ℚ as well as co-∧-preserving surjective
⊏_n ⊆ C_n× D_n and ⊑_n ⊆ D_{n+1}× C_n with ⊑_n∘⊏_n ⊆ ≤_ℚ and ⊏_{n+1}∘⊑_n ⊆ ≤_ℙ, for all n∈ω,
⊏ = ⋃_n∈ω(⊏_n∘⊲_D_n) and ⊑ = ⋃_n∈ω(⊑_n∘⊲_C_n)
define ∧-preserving refiners ⊐ and ⊒ such that ϕ_⊒∘ϕ_⊐=id_𝖲ℙ and ϕ_⊐∘ϕ_⊒=id_𝖲ℚ.
As ℚ is regular, ⊐ is a refiner. To see that ⊐ is ∧-preserving, take a∈ C_m and b∈ C_n with a∧ b. If m=n then e∧ f whenever a⊏_me and b⊏_nf, by the assumption that ⊏_m = ⊏_n is ∧-preserving, and hence the same applies whenever a⊏_m∘⊲_C_me and b⊏_n∘⊲_C_nf. Now assume that m>n and take any e,e',f,f' with a⊏_me⊲_D_me' and b⊏_nf⊲_D_nf'. The surjectivity of the given relations then yields c∈ C_n and d∈ D_n satisfying
e_m-1∘⊏_m-1∘_m-2…⊏_n+1∘_nc⊏_nd.
As ⊏_n+1∘_n ⊆ ≤_ℙ, for all n∈ω, it follows that a≤ c and hence b∧ c. As ⊏_n is co-∧-preserving, it follows that f∧ d and hence d≤ f', as f⊲_D_nf'. Also e≤ d, as _n∘⊏_n ⊆ ≤_ℚ, for all n∈ω, so e≤ f' and hence e'∧ f', as e⊲_D_me'. A dual argument applies if m<n, thus showing that ⊐ is indeed ∧-preserving.
Likewise, is ∧-preserving and hence we have continuous maps ϕ_⊐:𝖲ℙ→𝖲ℚ and ϕ_:𝖲ℚ→𝖲ℙ as in (<ref>). To see that ϕ_∘ϕ_⊐=id_𝖲ℙ, take any S∈𝖲ℙ. For any A∈𝖢ℙ, we have B∈𝖢ℙ and n∈ω with C_n≺ B⊲ A. We then also have E∈𝖢ℚ and m>n+1 with D_m≺ E⊲ D_n+1. As S is a selector, we have s∈ S∩ C_m. The surjectivity of all the relations involved then yields a∈ A, b∈ B, c∈ C_n, d∈ D_n+1, e∈ E and f∈ D_m with
s⊏_mf⊲_D_me⊲ d_nc⊲_C_nb⊲ a
This means s⊏ e⊲ d and hence d∈ϕ_⊐(S). Likewise, d b⊲ a and hence a∈ϕ_(ϕ_⊐(S)). Now surjectivity again yields q∈ D_n and p∈ C_n with
f_m-1∘⊏_m-1∘_m-2…⊏_n+1q_np.
As _n∘⊏_n ⊆ ≤_ℚ, for all n∈ω, it follows that f≤ q and hence d∧ q. As _n is co-∧-preserving, this implies c∧ p and hence p≤ b⊲ a. Noting s≤ p, as ⊏_n+1∘_n ⊆ ≤_ℙ, for all n∈ω, it follows that a∈ S^≤⊲=S. We have thus shown that (S∩ϕ_(ϕ_⊐(S)))⋒ A, for all A∈𝖢ℙ, i.e. S∩ϕ_(ϕ_⊐(S)) is a selector. As S and ϕ_(ϕ_⊐(S)) are minimal selectors, this implies S=ϕ_(ϕ_⊐(S)). This shows that ϕ_∘ϕ_⊐=id_𝖲ℙ and a dual argument yields ϕ_⊐∘ϕ_=id_𝖲ℚ.
As an application of <ref>, we can use it to give an alternative proof <ref>, at least in the Hausdorff case, one which gives us more control over the levels of the poset, like in <ref>.
Let the gradification ℙ_𝖦 of an ω-poset ℙ be the disjoint union of its levels, i.e.
ℙ_𝖦=_n∈ωℙ_n=⋃_n∈ωℙ_n×{n}.
To define the order on ℙ_𝖦, we first define the predecessor relation ⋖ by
(p,n)⋖(q,m) ⇔ p≤ q and m⋖ n.
Let ≤^0 be the equality relation on ℙ_𝖦 and recursively define ≤^n+1=≤^n∘⋖=⋖∘≤^n, i.e. ≤^n is just the composition of ⋖ on ℙ_𝖦 with itself n times. Finally let ≤=⋃_n∈ω≤^n on ℙ_𝖦. In particular, the strict order < on ℙ_𝖦 is just the transitive closure of the predecessor relation defined above.
The following result is now immediate from the construction.
If ℙ is an ω-poset then ℙ_𝖦 is an atomless graded ω-poset with
ℙ_𝖦n=ℙ_n×{n}.
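For instance, for the cofinite-topology basis ℙ from the earlier example, whose levels are already disjoint and whose order is generated by the immediate-predecessor relation described there, the map (p,n)↦ p is an isomorphism from ℙ_𝖦 onto ℙ.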
Let us call an ω-poset edge-witnessing if common lower bounds of elements in any level are always witnessed on the next, i.e. whenever q,r∈ℙ_n and q∧ r, we have p∈ℙ_n+1 with p≤ q and p≤ r. Likewise, we call an ω-poset star-refining if each level is star-refined by the next, i.e. ℙ_n+1≺ℙ_n, for all n∈ω.
The spectrum 𝖲ℙ of any edge-witnessing star-refining ω-poset ℙ is always homeomorphic to the spectrum of its gradification 𝖲ℙ_𝖦.
For all n∈ω, define ⊏_n⊆ℙ_𝖦n×ℙ_n and ⊑_n⊆ℙ_{n+1}×ℙ_𝖦n by
(p,n)⊏_nq ⇔ p=q.
p⊑_n(q,n) ⇔ p≤ q.
For each n∈ω, we immediately see that ⊏_n and ⊑_n are surjective, ⊏_n is co-∧-preserving, ⊑_n∘⊏_n⊆≤_ℙ and ⊏_{n+1}∘⊑_n⊆≤_ℙ_𝖦. As ℙ is edge-witnessing, ⊑_n is also co-∧-preserving. As ℙ is star-refining, so is ℙ_𝖦. In particular, both ℙ and ℙ_𝖦 are regular so 𝖲ℙ is homeomorphic to 𝖲ℙ_𝖦, by <ref>.
Let us illustrate the usefulness of the above result with snake-like spaces. First let us call an open cover S a snake if its overlap graph is a path, i.e. if there exists an enumeration s_1,…,s_n of S such that
s_m⋒ s_n ⇔ |m-n|≤1.
We call X snake-like if every open cover is refined by a snake (this is a standard notion in continuum theory, also called chainable as in <cit.>). In particular, every snake-like space is compact because snakes are finite. Also, if X=Y∪ Z for non-empty clopen Y and Z, then any refinement of {Y,Z} can not be a snake, i.e. snake-like spaces are necessarily connected as well.
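For instance, the covers C_n (n≥1) of the arc from the earlier example are snakes under the left-to-right enumeration: consecutive intervals overlap while non-consecutive ones are disjoint. As every open cover of [0,1] is refined by some C_n, the arc is snake-like.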
Every metrisable snake-like X has a graded ω-band-basis whose levels are all snakes.
As X is connected, any minimal subcover of a snake is again a snake. As X is metrisable and snake-like, we thus have a countable collection 𝒞 of minimal snakes which are coinitial w.r.t. refinement among all open covers. By <ref>, we have a subfamily forming the levels of a level-injective ω-cap-basis ℙ. By <ref>, the spectrum 𝖲ℙ is homeomorphic to the original space X. If necessary, we can replace ℙ with a subposet consisting of infinitely many levels which is also edge-witnessing and star-refining. By <ref>, 𝖲ℙ will still be homeomorphic to X, as will 𝖲ℙ_𝖦, by <ref>. As each level of ℙ_𝖦 corresponds to a snake in 𝖲ℙ_𝖦 and hence X, we are done.
§.§ Star-Composition
Let us define the star ⊐^* of any ⊐⊆ℚ×ℙ by
q⊐^*p ⇔ ∃ C∈𝖢ℙ (Cp⊏ q).
For example, the star-above relation is the star of both ≥ and id_ℙ, i.e.
id_ℙ^*=≥^*=⊳.
If ⊐_ϕ is defined by containment relative to some ϕ:𝖲ℙ→𝖲ℚ, as in (<ref>), its star then corresponds to closed containment.
If ℙ is a prime regular ω-poset and ϕ:𝖲ℙ→𝖲ℚ is continuous,
q⊐_ϕ^*p ⇔ ϕ^-1[q^∈]⊇cl(p^∈).
Say q⊐_ϕ^*p, so we have C∈𝖢ℙ with q⊐_ϕ c, for all c∈ Cp, and hence
ϕ^-1[q^∈]⊇(Cp)^∈⊇ p^∧⊇=cl(p^∈),
by (<ref>) and (<ref>) (if S∈ p^∧⊇ then S⊆ p^∧ so we have c∈ C∩ S⊆ Cp and hence S∈(Cp)^∈). This proves the ⇒ part
Conversely, assume ϕ^-1[q^∈]⊇cl(p^∈). As ℙ_𝖲 is a basis for 𝖲ℙ, we have a cover C_S of 𝖲ℙ such that either c^∈⊆ϕ^-1[q^∈] or c^∈⊆𝖲ℙ∖cl(p^∈), for all c∈ C. Thus C∈𝖢ℙ, by <ref>, and c^∈⊆ϕ^-1[q^∈], whenever c∈ Cp. This means q⊐_ϕ c, for all c∈ Cp, so C witnesses q⊐_ϕ^*p.
Another thing we can note immediately about stars is the following.
If ⊐⊆ℚ×ℙ is ∧-preserving then so is ⊐^*.
Say ⊐ is ∧-preserving. If q⊐^*p and q'⊐^*p' then we have C,C'∈𝖢ℙ with Cp⊏ q and C'p'⊏ q'. If p∧ p' then <ref> yields c∈ Cp with c∧ p'. Then <ref> again yields c'∈ C'p' with c∧ c'. Thus q∧ q', as q⊐ c∧ c'⊏ q' and ⊐ is ∧-preserving, showing that ⊐^* is also ∧-preserving.
Also, stars do not change the up-closures of round star-prime subsets.
For any ⊐⊆ℚ×ℙ and S⊆ℙ,
S is round and star-prime ⇒ S^⊏=S^⊐^*.
If S is round then S^⊏=S^⊲⊏⊆ S^⊐^*. On the other hand, if q∈ S^⊐^* then we have s∈ S with q⊐^*s, which means we have C∈𝖢ℙ with Cs⊏ q. If S is star-prime then we have c∈ Cs∩ S⊏ q so q∈ S^⊏, showing that S^⊐^*⊆ S^⊏.
Define the star-composition of any ⊒⊆ℝ×ℚ and ⊐⊆ℚ×ℙ by
⊒*⊐ = (⊒∘⊐)^*.
This more accurately reflects composition of continuous functions, as we now show.
If ℙ, ℚ and ℝ are prime regular ω-posets then, for any continuous maps ϕ:𝖲ℙ→𝖲ℚ and ψ:𝖲ℚ→𝖲ℝ,
⊐_ψ^**⊐_ϕ^*=⊐_ψ*⊐_ϕ=⊐_ψ∘ϕ^*.
By <ref>, ⊐_ψ^*⊆⊐_ψ and ⊐_ϕ^*⊆⊐_ϕ so ⊐_ψ^**⊐_ϕ^*⊆⊐_ψ*⊐_ϕ.
On the other hand, if r⊐_ψ*⊐_ϕ p then we have C∈𝖢ℙ such that, for all c∈ Cp, we have q_c∈ℚ with r⊐_ψ q_c⊐_ϕ c. This means
cl(p^∈)⊆⋃_c∈ Cpc^∈⊆⋃_c∈ Cpϕ^-1[q_c^∈]⊆ϕ^-1[ψ^-1[r^∈]]=(ψ∘ϕ)^-1[r^∈].
By <ref>, this implies r⊐_ψ∘ϕ^*p.
Now say r⊐_ψ∘ϕ^*p, i.e. cl(p^∈)⊆ϕ^-1[ψ^-1[r^∈]]. For each S∈cl(p^∈), the continuity of ψ yields q∈ϕ(S) with cl(q^∈)⊆ψ^-1[r^∈]. The continuity of ϕ then yields c∈ S with cl(c^∈)⊆ϕ^-1[q^∈]. On the other hand, for every S∈𝖲ℙ∖cl(p^∈), we have c∈ S with c⊥ p. As 𝖲ℙ is compact, it has a finite cover consisting of c^∈ for such c. By <ref>, these form a cap, i.e. we have C∈𝖢ℙ such that r⊐_ψ^*∘⊐_ϕ^*c, for all c∈ Cp, showing that r⊐_ψ^**⊐_ϕ^*p.
Also, replacing ∘ with * in (<ref>) turns ⇒ into ⇔.
If ℙ and ℚ are regular prime ω-posets and ⊐⊆ℚ×ℙ is a ∧-preserving refiner then, for all p∈ℙ and q∈ℚ,
q⊳*⊐ p ⇔ ϕ_⊐^-1[q^∈]⊇cl(p^∈).
If q⊳*⊐ p then we have C∈𝖢ℙ with Cp⊏∘⊲ q so (<ref>) and (<ref>) yield
cl(p^∈)⊆(Cp)^∈⊆cl((Cp)^∈)⊆ϕ_⊐^-1[q^∈].
Conversely, if q⊳*⊐ p fails then Cp⊈ q^⊳⊐, for all C∈𝖢ℙ. Put another way, p^∧∖ q^⊳⊐ is a selector and hence contains a minimal selector S. Then S∈cl(p^∈), by (<ref>) and (<ref>), but q∉ S^⊏⊲=ϕ_⊐(S), i.e. ϕ_⊐(S)∉ q^∈ so S∈cl(p^∈)∖ϕ_⊐^-1[q^∈].
Next let us make some simple observations about *. For example,
*⊐ ⊇ ∘⊐^*.
Indeed, if r q⊐^*p then we have C∈𝖢ℙ with Cp⊏ q r so r*⊐p. Thus
⊐∘⊳ ⊆ ⊐*≥ = ⊐^*∘≥ = ⊐^*.
Indeed, the first inclusion is just a special case of (<ref>) where and ⊐ are replaced by ⊐ and ≥ respectively. On the other hand, if q⊐^*p≥ r then we have C∈𝖢ℙ with Cr⊆ Cp≤ q and hence q⊐^*r. This shows that ⊐^*∘≥=⊐^*. Also certainly ⊐^*⊆(⊐∘≥)^*=⊐*≥. Conversely, if q⊐*≥ p then we have C∈𝖢ℙ such that, for all c∈ Cp, we have q_c∈ℙ with c≤ q_c⊏ q. Setting D=(C∖ Cp)∪{q_c:c∈ Cp}, note C≤ D∈𝖢ℙ and Dp⊏ q, i.e. D witnesses q⊐^*p, showing ⊐*≥⊆⊐^* too.
If ℙ is a regular and ⊐⊆ℚ×ℙ is a refiner then so is ⊐^*.
As ℙ is regular, ⊳ is a refiner. As ⊐ is a refiner too, so is ⊐∘⊳ and hence so too is ⊐^*⊇⊐∘⊳, by (<ref>).
Combined with <ref>, this means ⊐^*∈𝐏^ℚ_ℙ whenever ⊐∈𝐏^ℚ_ℙ. Moreover, ϕ_⊐=ϕ_⊐^*, by (<ref>). We can further characterise when ϕ_⊐=ϕ_⊒ as follows.
For any ⊐,⊒∈𝐏_ℙ^ℚ,
ϕ_⊐=ϕ_⊒ ⇔ ⊳*⊐=⊳*⊒.
If ϕ_⊐=ϕ_⊒ then (<ref>) yields
q⊳*⊐ p ⇔ ϕ_⊐^-1[q^∈]⊇cl(p^∈) ⇔ ϕ_⊒^-1[q^∈]⊇cl(p^∈) ⇔ q⊳*⊒ p.
Conversely, if ⊳*⊐=⊳*⊒ then (<ref>) yields
ϕ_⊐(S)=S^⊏⊲=S^⊏∘⊲=S^(⊳∘⊐)^*=S^(⊳∘⊒)^*=S^⊑∘⊲=S^⊑⊲=ϕ_⊒(S).
Here are some further simple combinatorial properties of star-composition.
If ℙ is a regular ω-poset, ⊆ℝ×ℚ and ⊐⊆ℚ×ℙ then
*⊐=*⊐^*=(*⊐)^*.
First we claim that
⊐^**=⊐^*=⊐*⊳.
Indeed, if q⊐^**p then we have C∈𝖢ℙ such that, for all c∈ Cp, q⊐^*c and hence B_cc⊏ q, for some B_c∈𝖢ℙ. Replacing C with a finite subcap if necessary, we can then take A∈𝖢ℙ refining B_c, for all c∈ Cp, as ℙ is an ω-poset. We claim that Ap≤∘⊏ q. Indeed, if a∈ Ap then <ref> yields c∈ Cp with a∧ c. As A≤ B_c, we then have b∈ B_c with c∧ a≤ b. Thus b∈ B_cc⊏ q so a≤ b⊏ q, proving the claim. In particular, A witnesses q⊐*≥ p, showing that ⊐^**⊆⊐*≥=⊐^*.
Conversely, if q⊐^*p then we have C∈𝖢ℙ with Cp⊏ q. Regularity then yields B∈𝖢ℙ with B⊲ C so Bp⊲ Cp⊏ q and hence q⊐∘⊳b, for all b∈ Bp. Thus B witnesses q(⊐∘⊳)^*p, showing that ⊐^*⊆(⊐∘⊳)^*⊆⊐^**, by (<ref>), completing the proof of (<ref>).
In particular, (∘⊐)^**=(∘⊐)^*=(∘⊐∘⊳)^*⊆(∘⊐^*)^*,
by (<ref>). In terms of star-composition, this means that (*⊐)^*=*⊐⊆*⊐^*. But *⊐^*=(∘⊐^*)^*⊆(*⊐)^*, by (<ref>), completing the proof of (<ref>).
For any ⊆ℝ×ℚ and ∧-preserving refiner ⊐⊆ℚ×ℙ,
^*∘⊐⊆*⊐.
If r^*q⊐ p then we have C∈𝖢ℚ with Cq r. As ⊐ is a ∧-preserving refiner, we then have B∈𝖢ℙ with B⊏ C and hence Bp⊏ Cq r. So B witnesses r*⊐ p, showing that ^*∘⊐⊆*⊐.
We can now show that star-composition is associative.
If ℙ is a regular ω-poset, ⊐⊆ℚ×ℙ is a ∧-preserving refiner, ⊆ℝ×ℚ satisfies ⊆^* and ⊆𝕊×ℝ then
*(*⊐) = (∘∘⊐)^* = (*)*⊐.
First note (<ref>) immediately yields
*(*⊐)=*(∘⊐)^*=*(∘⊐)=(∘∘⊐)^*.
Likewise, (<ref>) and <ref> yield
(*)*⊐=((∘)^*∘⊐)^*⊆((∘)*⊐)^*=(∘)*⊐=(∘∘⊐)^*.
Conversely, as ⊆^*, (<ref>) yields
(∘∘⊐)^*⊆(∘^*∘⊐)^*⊆((*)∘⊐)^*=(*)*⊐.
Let us call ⊐∈𝐏_ℙ^ℚ a strong refiner if
Strong Refiner⊐=⊳*⊐.
For any other ∈𝐏_ℝ^ℙ, we immediately see that ⊐*=⊳*⊐*. In particular, strong refiners are closed under star-composition. They are also star-invariant, as
⊐^*=(⊳*⊐)^*=(⊳∘⊐)^**=(⊳∘⊐)^*=⊐.
Moreover, ⊳*⊳=≥^**≥^*=≥*≥=(≥∘≥)^*=≥^*=⊳, showing that ⊳ is also a strong refiner on any ℙ∈𝐏. Furthermore, ⊳*⊐=⊐=⊐^*=⊐*⊳ showing that each ⊳_ℙ is an identity with respect to star-composition. In other words, we have a category 𝐒 with the same objects as 𝐏 (prime regular ω-posets) but with strong refiners as morphisms under star-composition.
In fact, 𝐒 is equivalent to 𝐊, as witnessed by the map 𝖲 from <ref>.
𝖲|_𝐒:𝐒→𝐊 is a fully faithful essentially surjective functor such that 𝖲=𝖲|_𝐒∘𝖰, where 𝖰:𝐏→𝐒 is the functor defined by 𝖰(⊐)=⊳*⊐.
For any ⊐∈𝐏_ℙ^ℚ, (<ref>) yields
⊳*⊐*⊳=(⊳*⊐)^*=(⊳∘⊐)^**=(⊳∘⊐)^*=⊳*⊐.
For any other ∈𝐏_ℚ^ℝ, it follows that
⊳*⊐*⊳*=⊳*⊐*=(⊳∘⊐∘)^*=⊳*(⊐∘).
This shows that 𝖰 defined by 𝖰(⊐)=⊳*⊐ preserves the product. Moreover,
⊳_ℙ*id_ℙ=(⊳_ℙ∘id_ℙ)^*=⊳_ℙ^*=≥_ℙ^**=≥_ℙ^*=⊳_ℙ.
As each ⊳_ℙ is an identity in 𝐒, this shows that 𝖰 is a functor. Also
ϕ_*⊐=ϕ_(∘⊐)^*=ϕ_∘⊐=ϕ_∘ϕ_⊐
and ϕ_⊳_ℙ=id_𝖲ℙ (because S=S^⊲=S^⊲⊲, for all S∈𝖲ℙ), so 𝖲|_𝐒 is a functor too. In particular, this also yields ϕ_⊳*⊐=ϕ_⊳∘ϕ_⊐=ϕ_⊐, showing that 𝖲=𝖲|_𝐒∘𝖰. As 𝖲 is full and essentially surjective, so is 𝖲|_𝐒. By <ref>, 𝖲|_𝐒 is also faithful.
The functor 𝖰 thus replaces any ⊐∈𝐏_ℙ^ℚ with a canonical representative in the same equivalence class defined by 𝖲, namely the unique representative which corresponds exactly to closed containment, by (<ref>). The natural topology on strong refiners thus corresponds exactly to the compact-open/uniform convergence topology. More precisely, the functor 𝖲|_𝐒 is a homeomorphism from each hom-set 𝐒_ℙ^ℚ, considered as a subspace of the power-space 𝖯(ℚ×ℙ) (i.e. with the topology generated by sets of the form {⊐∈𝖲_ℙ^ℚ:q⊐ p}, for p∈ℙ and q∈ℚ), to the hom-set 𝐊_𝖲ℙ^𝖲ℚ with its compact-open/uniform convergence topology. We plan to make use of this in future work on dynamical systems constructed from posets and refiners.
One could also make other choices of representative morphisms. For example, for any ⊐∈𝐏_ℙ^ℚ, we could define ⊒⊆ℚ×ℙ by
q⊒ p ⇔ q^⊐⊇ p^⊳.
Then ⊐↦⊳*⊐ again defines a functor selecting a representative in the equivalence class defined by 𝖲, this time corresponding to mere containment, i.e.
q⊳*⊐p ⇔ ϕ_⊐^-1[q^∈]⊇ p^∈.
However, the natural topology on such refiners will be different and thus less useful when it comes to considering dynamical systems.
entry_id: http://arxiv.org/abs/2307.02600v1
published: 20230705185258
title: Active Dynamics of Linear Chains and Rings in Porous Media
authors: Ligesh Theeyancheri, Subhasish Chaki, Tapomoy Bhattacharjee, Rajarshi Chakrabarti
primary_category: cond-mat.soft
categories: cond-mat.soft, cond-mat.stat-mech
Department of Chemistry, Indian Institute of Technology Bombay, Mumbai 400076, India
Department of Chemistry, Indian Institute of Technology Bombay, Mumbai 400076, India
Department of Materials Science and Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois 61801, USA
[email protected]
National Centre for Biological Sciences, Tata Institute of Fundamental Research, Bangalore 560065, India
[email protected]
Department of Chemistry, Indian Institute of Technology Bombay, Mumbai 400076, India
To understand the dynamical and conformational properties of deformable active agents in porous media, we computationally investigate the dynamics of linear chains and rings made of active Brownian monomers. In porous media, flexible linear chains and rings always migrate smoothly and undergo activity-induced swelling. However, semiflexible linear chains, though they navigate smoothly, shrink at lower activities and then swell at higher activities, while semiflexible rings exhibit a contrasting behavior: they shrink and get trapped at lower activities and escape at higher activities. This demonstrates how activity and topology interplay to control the structure and dynamics of linear chains and rings in porous media. We envision that our study will shed light on the mode of transport of shape-changing active agents in porous media.
Active Dynamics of Linear Chains and Rings in Porous Media
Rajarshi Chakrabarti
August 1, 2023
==========================================================
§ INTRODUCTION
A class of active agents lives in complex and heterogeneous porous environments such as gels, tissues, soils, and sediments <cit.>. For example, microorganisms like bacteria move through disordered environments in search of nutrients <cit.>, natural killer cells scan through porous tissues to neutralize diseased cells <cit.>, and biopolymers like motor proteins move through living cells by ciliary and flagellar motility <cit.>. In addition, bio-engineered polymers have been used in targeted drug delivery; the physiological barriers in the kidneys have a nanoporous structure that regulates the filtration of these drug carriers. These biological swimmers and their artificial analogs, such as bio-synthetic polymers, experience different types of confinement and interaction while navigating through porous environments <cit.>. Depending on their topology, they often switch modes of migration by deforming their shapes as they move through their natural habitat, allowing them to explore the medium efficiently <cit.>. Thus, the deformability and topology of active agents bring additional complexity, which in turn either facilitates or suppresses their transport in complex media.
In simple liquid media, the inherent nonequilibrium nature of active particles enables them to exhibit transient superdiffusion followed by long-time enhanced diffusion <cit.>. However, novel non-equilibrium effects emerge in the conformational and dynamical properties of a chain of interlinked active particles. Flexible chains swell with increasing activity <cit.>. In contrast, semiflexible polymers shrink at low activity and swell at large activity <cit.>. Numerical simulations reported a coil-to-globule-like transition of an active polymer chain due to the interplay of activity and thermal fluctuations <cit.>. Over the past few years, experimental and theoretical studies have focused on how the motion of active agents in disordered media is influenced by interactions with the obstacles in the neighborhood <cit.>. Recent studies have shown that the micro-confinement of the porous medium dramatically alters the run-and-tumble motion of rod-shaped bacterial cells to hopping and trapping motion <cit.>. Chopra et al. experimentally studied the dynamics of non-tumbling E. coli bacteria in a square array of micropillars and reported anomalous size-dependent active transport in two-dimensional structured porous environments, where shorter cells get trapped while longer ones escape through the channel-like space between the pillars <cit.>. Computer simulations of linear active polymers in a two-dimensional periodic porous medium demonstrated that stiff chains are able to move almost unhindered through the ordered porous medium, whereas flexible ones get stuck <cit.>. A recent study predicted that the local geometry determines the optimal path of active agents by controlling the reorientations in locally dilute and dense regions <cit.>. Moreover, recent studies on circular topology described the structure, dynamics, and emergence of new topological states of active ring polymers <cit.>. However, less attention has been devoted to active circular polymeric agents in porous environments, despite their relevance in biological systems, e.g., bacterial or mitochondrial DNA in eukaryotic cells, extruded loops in chromatin, and actomyosin rings <cit.>. The dynamics is even more complex and intriguing in porous media, where the activity and topology of active agents interplay. Earlier experimental studies reported the role of polymer topology in pharmacokinetics for treating cancer: ring topology offers better performance than linear polymers due to reduced renal filtration and longer blood circulation time <cit.>. Thus, the topology of active agents is a crucial factor for modulating their biophysical properties and biological performance <cit.>. However, the physics behind the interplay between the activity, confinement, and topology of active agents has not been well understood to date. Hence, a systematic and detailed investigation of the integrated effects of topology and conformations of active agents in complex porous media is of prime importance.
In order to understand the mode of migration adopted by deformable active agents in crowded media, we computationally analyze the dynamical and conformational properties of active linear chains and rings made of active Brownian particles in two-dimensional static porous media. We consider three different types of polymers, classified by their spring constant (k_f) and bending rigidity (κ) as flexible, inextensible, and semiflexible linear chains and rings. For both the flexible and inextensible polymers, the bending rigidity is zero; however, the inextensible polymers have a much larger spring constant than the flexible polymers. For semiflexible polymers, we choose a very high bending rigidity with the same spring constant as the flexible case. We find that the dynamics of both the linear chains and rings in porous media is enhanced owing to the mutual contribution of activity and conformational fluctuations. The flexible or inextensible linear chains migrate faster than the rings made of the same number of monomers in porous media. In contrast, a reverse trend in dynamics is observed in unconfined space, i.e., linear chains move more slowly than the rings. Semiflexible linear chains also move smoothly through the porous media, while semiflexible rings transiently get trapped in the pore confinements and escape from such traps with increasing activity. Conformational fluctuations of the flexible, inextensible, or semiflexible linear chains and rings exhibit contrasting behavior in the presence of activity. In porous media, flexible linear chains and rings exhibit activity-induced swelling irrespective of the fact that they are topologically distinct. In contrast, inextensible linear chains and rings display activity-induced shrinking. Surprisingly, semiflexible linear chains and rings behave differently in porous media depending on their topology: semiflexible linear chains exhibit activity-induced swelling, whereas the rings show activity-induced shrinking with increasing activity. Our study reports how the combined effects of activity, confinement, and topological constraints facilitate or control the transport of active linear chains and rings in complex environments.
§ METHOD
§.§ Model and Simulation Details
We model the disordered random porous media by randomly placing M (M = 1200, 2000, 2500) static obstacles that are allowed to overlap inside a 2D square box of fixed size 300 σ. The size of the beads forming the porous medium, σ_P, ranges from 1 to 5 σ, and the size distribution of these particles follows a Gaussian distribution with mean ∼ 3 σ. The linear chain is modeled as a sequence of N self-propelled beads of diameter σ connected by N-1 finitely extensible springs (Fig. <ref>a). The ring with the same number of active Brownian particles (monomers) is created by connecting the terminal beads of the linear chain by the same finitely extensible spring (Fig. <ref>b). In our simulation, σ, k_B T, and τ=σ^2 γ/k_B T set the units of length, energy, and time, respectively, where k_B is the Boltzmann constant, T is the temperature, and γ is the friction coefficient. The equation of motion for the linear chains and the rings is given by the following Langevin equation with an additional term accounting for the activity. Here, we consider the high-friction limit; therefore, the dynamics is practically overdamped, as the contribution from the inertia term is negligible, and hence we do not write the inertia term in the following equation of motion.
γd r_i/dt = - ∑_j∇ V(r_i-r_j) + f_i(t) + F_a, i(t)
Here the drag force γd r_i/dt is the velocity of the i^th bead times the friction coefficient γ; r_i (i = 1, 2, ..., N) are the positions of the monomers of the linear chain or ring; V(r) is the resultant pair potential between the i^th and j^th particles accounting for the conservative forces; the thermal force f_i(t) is modeled as Gaussian white noise with zero mean and variance <f_i(t^')f_j(t^'')> = 4 γ k_B T δ_ijδ(t^'-t^''); and F_a, i(t) is the active force which drives the system out of equilibrium. F_a, i(t) has magnitude F_a and acts along the unit vector <cit.> of the i^th monomer, n(θ_i) = (cos θ_i, sin θ_i), where θ_i evolves as d θ_i/dt = √(2D_R) f_i^R; D_R is the rotational diffusion coefficient and f_i^R is a Gaussian random number with zero mean and unit variance. Hence, the persistence time of the individual monomers is related to the rotational diffusion coefficient D_R as τ_R = 1/D_R. The activity can also be expressed in terms of a dimensionless quantity, the Péclet number Pe, defined as F_a σ/k_BT.
V_FENE(r_ij) = -(k_f r_max^2/2) ln[1-(r_ij/r_max)^2] for r_ij ≤ r_max, and V_FENE(r_ij) = ∞ otherwise,
where r_ij is the distance between two neighboring monomers in the linear chain or ring with a maximum extension of r_max = 1.5 σ, and k_f is the spring constant <cit.>. To impose the condition of inextensibility, k_f is set to be very high for the inextensible linear chains and rings (k_f = 1000). The stiffness of the linear chains and rings is implemented through the bending potential,
V_BEND(ϕ_i ) = κ(1-cosϕ_i)
where κ is the bending modulus and ϕ_i is the angle between the bond vectors i and i+1. To account for self-avoidance, a pair of monomers of the linear chains or rings interact via the repulsive Weeks–Chandler–Andersen (WCA) potential <cit.>.
V_WCA(r_ij) = 4ϵ_ij[(σ_ij/r_ij)^12 - (σ_ij/r_ij)^6] + ϵ_ij for r_ij < 2^1/6 σ_ij, and V_WCA(r_ij) = 0 otherwise,
where r_ij is the separation between the interacting particles, ϵ_ij = 1 is the strength of the steric repulsion, and σ_ij = (σ_i + σ_j)/2 determines the effective interaction diameter, with σ_i(j) being the diameter of the interacting pairs. The static obstacles in the porous media also interact repulsively with the monomers of linear chains or rings. We consider three different cases: flexible (k_f = 30 and κ = 0), inextensible (k_f = 1000 and κ = 0), and semiflexible (k_f = 30 and κ = 1000) linear chains and rings in the porous media.
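The three potential-energy terms above are straightforward to implement. The following is a minimal NumPy sketch in reduced units (σ = k_B T = 1); the default parameter values k_f = 30, r_max = 1.5, κ = 1000, and ϵ = 1 follow the text, while the function names and array conventions are our own illustration rather than the authors' code.

```python
import numpy as np

K_F, R_MAX, EPS = 30.0, 1.5, 1.0   # FENE spring constant, maximum extension, WCA strength

def v_fene(r):
    """FENE bond energy for an array of bond lengths r; diverges as r -> R_MAX."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    v = np.full(r.shape, np.inf)
    ok = r < R_MAX
    v[ok] = -0.5 * K_F * R_MAX**2 * np.log(1.0 - (r[ok] / R_MAX)**2)
    return v

def v_bend(phi, kappa=1000.0):
    """Bending energy kappa*(1 - cos(phi)) for the angle phi between successive bonds."""
    return kappa * (1.0 - np.cos(phi))

def v_wca(r, sigma_ij=1.0):
    """Purely repulsive WCA pair energy, zero beyond the cutoff 2**(1/6)*sigma_ij."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    v = np.zeros(r.shape)
    ok = r < 2.0**(1.0 / 6.0) * sigma_ij
    sr6 = (sigma_ij / r[ok])**6
    v[ok] = 4.0 * EPS * (sr6 * sr6 - sr6) + EPS
    return v
```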
All the simulations are performed using the Langevin thermostat, and the equation of motion is integrated using the velocity Verlet algorithm in each time step. We initialize the system by randomly placing the linear chains or rings inside the porous media and relaxing the initial configuration for 2 × 10^6 steps. All the production simulations are carried out for 10^9 steps where the integration time step is considered to be 10^-5, and the positions of the monomers are recorded every 100 step. The simulations are carried out using Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) <cit.>, a freely available open-source molecular dynamics package.
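For readers who want to reproduce the qualitative behavior without LAMMPS, the overdamped equation of motion above (with the quoted noise variances) can be advanced with a simple Euler–Maruyama step. The sketch below is only a schematic illustration of that update, not the velocity-Verlet/Langevin-thermostat setup actually used in the paper; the function name, default parameters, and force callback are placeholders.

```python
import numpy as np

def abp_polymer_step(pos, theta, forces, dt=1e-5, gamma=1.0, kT=1.0,
                     D_R=1.0, F_a=0.0, rng=np.random):
    """One Euler-Maruyama step of the overdamped active Langevin dynamics.

    pos    : (N, 2) monomer positions
    theta  : (N,)  self-propulsion angles
    forces : callable returning the (N, 2) conservative force -grad V(pos)
    """
    n_hat = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # propulsion directions
    f_det = forces(pos) + F_a * n_hat                          # conservative + active force
    noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(pos.shape)
    pos_new = pos + f_det * dt / gamma + noise                 # position update
    theta_new = theta + np.sqrt(2.0 * D_R * dt) * rng.standard_normal(theta.shape)
    return pos_new, theta_new
```

A full run would wrap this step in a loop, build forces(pos) from the FENE, bending, WCA, and monomer–obstacle terms above, and store positions every 100 steps, as described in the text.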
§.§ Characterization of Porous Media
We characterize the pore space structure by the average pore size ξ. We place passive tracer particles at different random locations in the media and allow them to diffuse through the polydisperse porous media. The average pore size ξ is calculated from the time-and-ensemble averaged mean square displacements (MSD) of the passive tracer particle in the porous medium (Fig. S1a). At longer time, the tracer gets confined by the obstacles, and the MSD saturates. Then we take the square root of this saturated value and add the tracer particle diameter to get ξ. The values of ξ obtained from different tracer trajectories are binned to construct a histogram from which the ensemble-averaged probability distribution of ξ, P(ξ) is computed (Fig. S1b). To change the average pore size, ξ, we increase the obstacle density by adding particles, keeping the same width of the Gaussian distribution of the sizes of the obstacles. The average pore sizes of these different porous media are either comparable or smaller/larger than the average R_g (R_g^0) of the passive linear chains or rings in unconfined space.
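As a rough sketch of this procedure (assuming the MSD curve of a trapped tracer has already been computed and has saturated), the pore size can be extracted as follows; the tail-averaging choice is ours:

```python
import numpy as np

def pore_size_from_msd(msd, tracer_diameter=1.0, tail_fraction=0.2):
    """Estimate the local pore size xi from a saturated tracer MSD curve.

    The plateau is approximated by averaging the last `tail_fraction` of the curve;
    xi is the square root of that plateau plus the tracer diameter, as in the text.
    """
    n_tail = max(1, int(tail_fraction * len(msd)))
    plateau = np.mean(msd[-n_tail:])
    return np.sqrt(plateau) + tracer_diameter
```

Histogramming the xi values obtained from many tracer trajectories then yields the distribution P(ξ).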
§ RESULTS AND DISCUSSION
We first simulate flexible linear chains and rings of different sizes (N = 20, 50, 80, 100, 150, 200) in unconfined space to validate our model. According to Flory's theory, R_g of a polymer chain in a good solvent scales as N^ν, where ν = 0.75 in 2D. We find that the exponents ν for the linear chain and the ring are 0.76 ± 0.008 and 0.74 ± 0.0072, respectively (Fig. S2). This implies that our model is consistent with the scaling predictions <cit.>.
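The exponent can be extracted with a simple log-log fit; the sketch below assumes arrays of chain lengths N and the corresponding steady-state R_g values are available:

```python
import numpy as np

def flory_exponent(N, Rg):
    """Estimate nu in Rg ~ N**nu via a least-squares fit in log-log space."""
    nu, _ = np.polyfit(np.log(N), np.log(Rg), 1)
    return nu

# Example with the chain lengths used in the text (the Rg array is a placeholder):
# nu = flory_exponent(np.array([20, 50, 80, 100, 150, 200]), rg_values)
```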
To quantify the dynamical behavior of the active linear chains and rings, we compute the time-and-ensemble averaged Mean Squared Displacement (MSD) of the center of mass (COM) (r_c) of the linear chains or rings, ⟨Δr_c^2(τ)⟩, and the scaling exponent α(τ)=d log<Δr_c^2(τ)>/d logτ as a function of lag time τ. We fix the chain length at N = 50 and vary the activity F_a. In the unconfined case, ⟨Δr_c^2(τ)⟩ of the flexible, inextensible, and semiflexible active linear chains and rings displays a three-step growth: short-time thermal diffusion, intermediate superdiffusion, and enhanced diffusion at longer times, as compared to the overdamped dynamics of the passive case <cit.> (Fig. S3). Because of their larger size, the linear chains move more slowly than the rings in unconfined space (Fig. S3).
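A minimal NumPy sketch of the time-averaged COM MSD and the local exponent α(τ) is given below; ensemble averaging over independent trajectories would be performed on top of this, and the lag grid and function signature are our own choices:

```python
import numpy as np

def com_msd_and_exponent(r_com, dt_frame, lags):
    """Time-averaged MSD of the centre of mass and local scaling exponent.

    r_com    : (T, 2) centre-of-mass trajectory sampled every dt_frame
    lags     : 1D array of positive integer lag indices
    Returns (tau, msd, alpha) with alpha(tau) = d log MSD / d log tau.
    """
    msd = np.array([np.mean(np.sum((r_com[l:] - r_com[:-l])**2, axis=1)) for l in lags])
    tau = lags * dt_frame
    alpha = np.gradient(np.log(msd), np.log(tau))
    return tau, msd, alpha
```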
§.§ Navigation of Linear Chains and Rings in Porous Media: Smooth Migration, Trapping, and Escaping
In porous media, the dynamics is controlled by pore confinements in addition to the activity and polymer topology, which may lead to different behaviors as compared to in unconfined space. We consider active linear chains and rings in porous media with ξ = 6.95. Irrespective of the topologies the flexible or inextensible linear chains and rings navigate smoothly through the pores undergoing conformational fluctuations (Movie S1-S4). Although there exist different regions of motion: intermediate superdiffusion and long-time enhanced diffusion due to activity (Fig. S4), a noticeable difference with the unconfined space is that the linear chains now migrate faster than the rings made of the same number of monomers (Fig. <ref>). This designates the imperative role of polymer topology in crowded media. However, the dynamics differ dramatically for the semiflexible linear chains from the rings in porous media. The semiflexible linear chains move smoothly by preferentially adopting straight/rod-like conformations (Fig. <ref>a and Movie S5) and occupying multiple pores simultaneously (Fig. S5), while the micro-confinements can trap and restrict the motion of the semiflexible rings in the porous media (Fig. <ref>b and Movie S6), due to their stretched circular conformations (Fig. S5). As a result of the trapping, the subdiffusive behavior of the semiflexible ring is more pronounced and persists for a longer time than the semiflexible linear chain at lower activities (Fig. <ref>(e, f)). An increase in activity enhances the conformational fluctuations of semiflexible rings, which facilitate their escape from the pore confinements (Fig. <ref>). Hence, our analyses manifest that the topology of the active agent greatly influences and regulates their transport in the porous media.
§.§ Effect of Porous Architecture on Migration of Active Linear Chains and Rings
Further, to characterize how the local arrangement of the medium affects the transport of linear chains and rings, we simulate flexible and semiflexible linear chains and rings in porous media with different average pore sizes (ξ) at a fixed activity Pe = 60. If the size of the ring is comparable to or larger than ξ, the flexible and semiflexible linear chains migrate faster than the rings (Fig. <ref>). If the average size of the ring is much smaller than ξ, a reverse trend in dynamics is observed, as in unconfined media, although the dynamics becomes comparatively slower in porous media than in unconfined space. For very small ξ, the semiflexible active linear chains still migrate smoothly, while the semiflexible active rings get trapped, as indicated by strong subdiffusive behavior, and become diffusive at larger ξ (Fig. <ref> and Fig. S6). There is a larger difference between the dynamics of the semiflexible linear chains and rings, as their motion is significantly affected by the pore confinements.
§.§ Conformations of Linear Chains and Rings in Unconfined and Porous Media: Topology Dependent Activity-induced Swelling and Shrinking
Next, to elucidate the physical origin of the distinctly different migration modes of linear chains and rings in porous media, we analyze the conformations attained by them in porous media and in unconfined space. For this purpose, we perform gyration tensor analysis (see Supplementary Material for details) and generate the probability distribution of R_g, P(R_g), for a range of activities (Pe). In unconfined media, the flexible and inextensible linear chains swell with increasing Pe, and hence the most probable R_g shifts to larger R_g values (insets of Fig. <ref>(a, c)). Semiflexible linear chains shrink at lower activity, and the probability density profile broadens further as the linear chains swell with increasing activity (inset of Fig. <ref>e), which is consistent with previous observations <cit.>. In unconfined space, flexible rings also swell with increasing Pe like the flexible linear chains (inset of Fig. <ref>b), while for inextensible and semiflexible rings, even though there is an enhancement in conformational fluctuations, the most probable R_g remains the same (insets of Fig. <ref>(d, f)). However, the linear chains possess a larger size than the rings made of the same number of monomers (Fig. <ref>). Swelling of the linear chains and rings is caused by activity, which pushes the monomers away, but the higher spring constant of the inextensible rings and the high bending rigidity of the semiflexible rings restrict conformational fluctuations. Therefore, their most probable R_g values are almost independent of Pe for the range of activities considered (insets of Fig. <ref>(d, f)).
Active linear chains and rings display interesting conformational characteristics in porous media depending on their structure, unlike in unconfined space. Flexible linear chains swell with increasing Pe (Fig. <ref>a), with < R_g > smaller than in unconfined space due to pore confinements (Fig. <ref>a). Inextensible linear chains shrink up to moderate activities, followed by swelling at very high Pe (Fig. <ref>c). On the other hand, semiflexible linear chains shrink at very low activity and then swell in the porous media with increasing Pe (Fig. <ref>e). In porous media, flexible rings also swell with activity like the linear ones. Inextensible rings shrink at lower and moderate activities and swell at very high Pe. For the semiflexible rings, swelling ceases completely, and the peaks of P(R_g) shift to smaller R_g values, signifying activity-induced shrinking in porous media (Fig. <ref>(b, d, f) and Fig. <ref>b). In the pore confinement, the rings with higher activity collide more frequently with the obstacles and with a larger effective force. This subsequently generates more fluctuations along the inward transverse direction of the contour, responsible for the shrinking of rings in the pore confinements. However, such inward transverse fluctuations are largely absent for linear chains due to their extended rod-like structure, which helps them to escape from the pore confinements. In porous media, flexible linear chains exhibit a size comparable to or larger than the average pore size ( < R_g >/ξ≳ 1 ). In contrast, inextensible active linear chains have a relatively smaller size compared to the average pore size ( < R_g >/ξ < 1 ), while semiflexible linear chains consistently possess a size larger than the average pore size (Fig. <ref>a). Furthermore, passive flexible rings, due to their circular structure, initially have a size smaller than the average pore size ( < R_g >/ξ < 1). However, with increasing activity, these rings swell and eventually reach a size comparable to or larger than the average pore size ( < R_g >/ξ≳ 1 ). Inextensible rings behave similarly to linear chains, as their size is smaller than ξ. As for semiflexible rings, they initially possess a size larger than ξ, but with increasing activity they shrink, and their size becomes slightly smaller than or comparable to the average pore size (Fig. <ref>b). We also plot the R_g autocorrelation functions for the linear chains and rings, which further support the activity-induced swelling and shrinking of linear chains and rings observed in P(R_g) in porous media (Fig. S7). Our analyses unravel the importance of topology in the migration modes of active linear chains and rings in porous media.
To understand how the linear chains and rings deform in the presence of the activity and pore confinements, we compute the shape descriptor asphericity parameter defined as:
A = (λ_2 - λ_1)^2/(λ_1 + λ_2)^2
where λ_1 and λ_2 are the eigenvalues of gyration tensor. < A > ranges between 0 and 1, where 0 corresponds to circular/collapsed, and 1 corresponds to most extended geometry. < A > increases with Pe for the flexible, inextensible, and semiflexible linear chains and rings (Fig. <ref>). However, semiflexible linear chains possess higher values of < A > compared to the flexible linear chains because the semiflexible linear chains display a more extended geometry compared to the flexible ones (Fig. <ref>a). Inextensible linear chains show a shrinking-like behavior at lower activities as reflected by the lower values of < A > (Fig. <ref>a). On the contrary, flexible rings have higher values of < A > compared to the semiflexible ring because the semiflexible rings prefer to be in the circular geometry, and the flexible rings take elongated structure due to swelling (Fig. <ref>b). This further supports the transient trapping observed for the semiflexible rings. The distributions of the asphericity parameter, P(A), manifest that linear chains and rings undergo strong deformation with increasing Pe (Fig. S8). The extent of straightening is more pronounced for linear chains than the rings owing to the structural restrictions.
We compare the dynamical and conformational properties of flexible and semiflexible linear chains with rings of comparable size ( <R_g >), but with the different number of monomers (Fig. S9 and Fig. S10). We find that the qualitative trends remain the same as the flexible or semiflexible active linear chains move faster than the rings, and for the semiflexible case, there is a more pronounced difference between their motion in porous media (Fig. S9). This indicates that apart from the size, polymer topology also plays a crucial role in the porous media. Subsequently, we analyze the conformations of these polymers, which uncover the swelling of flexible linear chains and rings, whereas the semiflexible linear chains behave differently from rings (Fig. S10). Semiflexible linear chains exhibit activity-induced shrinking followed by swelling, while rings with a size comparable to linear chains shrink with activity in porous media (Fig. S10). Therefore, the topology-guided dynamics and conformations of polymers with comparable sizes play a pivotal role in their efficient migration in porous environments.
§ SUMMARY
In summary, we present how the motion of active Brownian linear chains and rings is controlled by pore size, activity, polymer topology, and the nature of the polymers in porous media. The dynamics of COM of the linear chains or rings is enhanced by orders of magnitude with activity and show intermediate superdiffusion, unlike the passive case, irrespective of their topology. Flexible active linear chains and rings migrate smoothly through the pores, and inextensible ones also show a similar trend with increasing activity. On the other hand, the semiflexible linear chains exhibit distinctly different dynamics compared to the semiflexible rings with the same number of monomers in the porous media. Semiflexible linear chains smoothly navigate through the porous media by adopting a straight rod-like structure. In contrast, semiflexible rings display a transition from trapping inside the pore confinements at lower activities due to stretched circular-like conformations to escaping at higher activities facilitated by the enhanced conformational fluctuations. This portrays the effect of topology-facilitated navigation in porous media.
Our results show that there is a considerable difference in the conformational properties of linear chains and rings while they explore the porous media. In porous media, the flexible linear chains and rings both swell with activity. Inextensible linear chains shrink at lower activity and swell at very high activities, while inextensible rings show activity-induced shrinking in porous media. For the semiflexible case, the linear chains shrink at lower activities, followed by swelling at higher activities, but the rings exhibit only shrinking with increasing activity.
The porous architecture also plays a significant role as the diffusion largely depends on the pore size. A larger pore size facilitates the motion of the rings, while a pore size smaller than the ring size accelerates the dynamics of the linear chains compared to the rings. Our findings disclose that pore confinement and the topology of the active agents substantially alter the dynamical and conformational properties of active polymers. The physics underlying the phenomena we report here relies on the transport of active linear chains and rings through planar disordered porous environment driven by the combined effects of confinement and activity. Hence, further studies on the migration of the active linear chains and rings in three-dimensional porous environments are anticipated in the future as the 3D porous architecture becomes more complex, and thus the structure of linear and ring polymers will no longer be identical, which could lead to qualitative changes in their behavior.
§ ACKNOWLEDGMENTS
L.T. thanks UGC for a fellowship. S.C. thanks DST Inspire for a fellowship. R.C. acknowledges SERB for funding (Project No. MTR/2020/000230 under MATRICS scheme). T.B. acknowledges NCBS-TIFR for research funding. We acknowledge the SpaceTime-2 supercomputing facility at IIT Bombay for the computing time.
§ SUPPLEMENTARY MATERIAL
§.§ Gyration Tensor Analysis
To evaluate the shape fluctuations or how the linear chains and rings deform, we calculate the gyration tensor of the conformations defined as,
S = [ ∑_i (x_i-x_com)^2             ∑_i (x_i-x_com)(y_i-y_com)
      ∑_i (x_i-x_com)(y_i-y_com)    ∑_i (y_i-y_com)^2 ]
where, x_com and y_com represent the x and y components of the COM position respectively. Further, we compute the eigenvalues λ_1 and λ_2 of the gyration tensor by diagonalizing the matrix S. These eigenvalues are used to define the shape descriptor, asphericity parameter, A = (λ_2 - λ_1)^2/(λ_1 + λ_2)^2. It measures the deviation from the spherical symmetry. The asphericity parameter is 1 for a perfect rod-like or linear topology and 0 for a circular structure.
We extract the radius of gyration,
R_g = [ 1/N∑_i = 1^N (r_i - r_com)^2 ]^1/2
from gyration tensor for each time-step of every simulation after the system reaches the steady state. Here r_com is the center of mass of the linear chains or rings. Then a single trajectory is created by stitching different individual trajectories together. This single trajectory is binned to construct a histogram from which the ensemble-averaged probability distribution P(R_g) is obtained.
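A compact NumPy version of this analysis, for a single stored conformation and for the stitched R_g time series, might look as follows; the function names and histogram binning are our own choices:

```python
import numpy as np

def shape_descriptors(pos):
    """Gyration tensor S, radius of gyration R_g, and asphericity A for one 2D conformation.

    pos : (N, 2) monomer positions of a linear chain or ring.
    """
    dr = pos - pos.mean(axis=0)              # positions relative to the centre of mass
    S = dr.T @ dr                            # 2x2 gyration tensor (unnormalised, as defined above)
    lam = np.sort(np.linalg.eigvalsh(S))     # eigenvalues lambda_1 <= lambda_2
    Rg = np.sqrt(np.mean(np.sum(dr**2, axis=1)))
    A = (lam[1] - lam[0])**2 / (lam[0] + lam[1])**2
    return S, Rg, A

def rg_distribution(Rg_samples, bins=50):
    """Normalised histogram of R_g values from the stitched steady-state trajectory."""
    hist, edges = np.histogram(Rg_samples, bins=bins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist
```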
Movie Description
* Movie_S1
The motion of a flexible active (Pe = 60) linear chain in random porous media (ξ= 6.95). The linear chain undergoes conformational changes while migrating through the pore confinements, resulting in activity-induced swelling in the porous media.
* Movie_S2
The motion of flexible active (Pe = 60) ring in random porous media (ξ= 6.95). The ring undergoes a series of conformational changes while migrating through the pore confinements in the media.
* Movie_S3
The motion of inextensible active (Pe = 60) linear chain in random porous media (ξ= 6.95). The smaller size and linear topology help the linear chain to smoothly move through the pore confinements.
* Movie_S4
The motion of inextensible active (Pe = 60) ring in random porous media (ξ= 6.95). The smaller size of the ring facilitates smooth and faster migration through the pore spaces compared to flexible (Movie_S2) and semiflexible (Movie_S6) rings.
* Movie_S5
The motion of semiflexible active (Pe = 60) linear chain in random porous media (ξ= 6.95). The semiflexible linear chain prefers to be in an extended conformation, and thus it shows activity-induced swelling in the porous media. The extended structure of the linear chain facilitates the movement by occupying multiple pore confinements without getting trapped, which is absent for the ring analogue (Movie_S6).
* Movie_S6
The motion of semiflexible active (Pe = 60) ring in random porous media (ξ= 6.95). The average size of the semiflexible ring is larger than flexible and inextensible rings. The higher bending rigidity of the ring restricts conformational fluctuations, and thus it gets trapped in the pore confinements. The activity-induced shrinking of the ring helps in escaping from the traps and migrating through the porous media.
§ REFERENCES

[1] D. Ribet and P. Cossart, How bacterial pathogens colonize their hosts and invade deeper tissues, Microbes Infect. 17, 173 (2015).
[2] S. S. Datta, A. Preska Steinberg, and R. F. Ismagilov, Polymers in the gut compress the colonic mucus hydrogel, Proc. Natl. Acad. Sci. USA 113, 7041 (2016).
[3] D. Wu, Signaling mechanisms for regulation of chemotaxis, Cell Res. 15, 52 (2005).
[4] T. Bhattacharjee, D. B. Amchin, J. A. Ott, F. Kratz, and S. S. Datta, Chemotactic migration of bacteria in porous media, Biophys. J. 120, 3483 (2021).
[5] S. Liu, S. Shankar, M. C. Marchetti, and Y. Wu, Viscoelastic control of spatiotemporal order in bacterial active matter, Nature 590, 80 (2021).
[6] A. Moretta, C. Bottino, M. C. Mingari, R. Biassoni, and L. Moretta, What is a natural killer cell?, Nat. Immunol. 3, 6 (2002).
[7] M. Daher and K. Rezvani, Next generation natural killer cells for cancer immunotherapy: The promise of genetic engineering, Curr. Opin. Immunol. 51, 146 (2018).
[8] G. Ferlazzo and P. Carrega, Natural killer cell distribution and trafficking in human tissues, Front. Immunol. 3, 347 (2012).
[9] C. P. Brangwynne, G. H. Koenderink, F. C. MacKintosh, and D. A. Weitz, Cytoplasmic diffusion: Molecular motors mix it up, J. Cell Biol. 183, 583 (2008).
[10] A. W. C. Lau, B. D. Hoffman, A. Davies, J. C. Crocker, and T. C. Lubensky, Microrheology, stress fluctuations, and active behavior of living cells, Phys. Rev. Lett. 91, 198101 (2003).
[11] P. Sens, Stick-slip model for actin-driven cell protrusions, cell polarization, and crawling, Proc. Natl. Acad. Sci. USA 117, 24670 (2020).
[12] H. Hess and V. Vogel, Molecular shuttles based on motor proteins: Active transport in synthetic environments, Rev. Mol. Biotechnol. 82, 67 (2001).
[13] A. Goel and V. Vogel, Harnessing biological motors to engineer systems for nanoscale transport and assembly, Nat. Nanotechnol. 3, 465 (2008).
[14] T. Heeremans, A. Deblais, D. Bonn, and S. Woutersen, Chromatographic separation of active polymer-like worm mixtures by contour length and activity, Sci. Adv. 8, eabj7918 (2022).
[15] C. Lohrmann and C. Holm, Optimal motility strategies for self-propelled agents to explore porous media, arXiv preprint arXiv:2302.06709 (2023).
[16] K. Goswami, S. Chaki, and R. Chakrabarti, Reconfiguration, swelling and tagged monomer dynamics of a single polymer chain in Gaussian and non-Gaussian active baths, J. Phys. A: Math. Theor. 55, 423002 (2022).
[17] D. Osmanović and Y. Rabin, Dynamics of active Rouse chains, Soft Matter 13, 963 (2017).
[18] J. Shin, A. G. Cherstvy, W. K. Kim, and R. Metzler, Facilitation of polymer looping and giant polymer diffusivity in crowded solutions of active particles, New J. Phys. 17, 113008 (2015).
[19] N. Samanta and R. Chakrabarti, Chain reconfiguration in active noise, J. Phys. A: Math. Theor. 49, 195601 (2016).
[20] S. Chaki and R. Chakrabarti, Enhanced diffusion, swelling, and slow reconfiguration of a single chain in non-Gaussian active bath, J. Chem. Phys. 150, 094902 (2019).
[21] T. Eisenstecken, G. Gompper, and R. G. Winkler, Conformational properties of active semiflexible polymers, Polymers 8, 304 (2016).
[22] V. Bianco, E. Locatelli, and P. Malgaretti, Globulelike conformation and enhanced diffusion of active polymers, Phys. Rev. Lett. 121, 217802 (2018).
[23] G. Volpe, I. Buttinoni, D. Vogt, H.-J. Kümmerer, and C. Bechinger, Microswimmers in patterned environments, Soft Matter 7, 8810 (2011).
[24] L. Theeyancheri, S. Chaki, T. Bhattacharjee, and R. Chakrabarti, Migration of active rings in porous media, Phys. Rev. E 106, 014504 (2022).
[25] P. Chopra, D. Quint, A. Gopinathan, and B. Liu, Geometric effects induce anomalous size-dependent active transport in structured environments, Phys. Rev. Fluids 7, L071101 (2022).
[26] Z. Mokhtari and A. Zippelius, Dynamics of active filaments in porous media, Phys. Rev. Lett. 123, 028001 (2019).
[27] E. Irani, Z. Mokhtari, and A. Zippelius, Dynamics of bacteria scanning a porous environment, Phys. Rev. Lett. 128, 144501 (2022).
[28] T. Bhattacharjee and T. E. Angelini, 3D T cell motility in jammed microgels, J. Phys. D: Appl. Phys. 52, 024006 (2018).
[29] C. Kurzthaler, S. Mandal, T. Bhattacharjee, H. Löwen, S. S. Datta, and H. A. Stone, A geometric criterion for the optimal spreading of active polymers in porous media, Nat. Commun. 12, 7088 (2021).
[30] F. J. Moore, J. Russo, T. B. Liverpool, and C. P. Royall, Active Brownian particles in random and porous environments, J. Chem. Phys. 158, 104907 (2023).
[31] L. Theeyancheri, R. Sahoo, P. Kumar, and R. Chakrabarti, In silico studies of active probe dynamics in crowded media, ACS Omega 7, 33637 (2022).
[32] T. Bhattacharjee and S. S. Datta, Bacterial hopping and trapping in porous media, Nat. Commun. 10, 2075 (2019).
[33] S. M. Mousavi, G. Gompper, and R. G. Winkler, Active Brownian ring polymers, J. Chem. Phys. 150, 064913 (2019).
[34] C. A. Philipps, G. Gompper, and R. G. Winkler, Dynamics of active polar ring polymers, Phys. Rev. E 105, L062501 (2022).
[35] I. Chubak, C. N. Likos, K. Kremer, and J. Smrek, Emergence of active topological glass through directed chain dynamics and nonequilibrium phase segregation, Phys. Rev. Res. 2, 043249 (2020).
[36] A. Goloborodko, J. F. Marko, and L. A. Mirny, Chromosome compaction by active loop extrusion, Biophys. J. 110, 2162 (2016).
[37] I. M. Sehring, P. Recho, E. Denker, M. Kourakis, B. Mathiesen, E. Hannezo, B. Dong, and D. Jiang, Assembly and positioning of actomyosin rings by contractility and planar cell polarity, eLife 4, e09206 (2015).
[38] S. Gupta, A. E. Patteson, and J. M. Schwarz, The role of vimentin-nuclear interactions in persistent cell motility through confined spaces, New J. Phys. 23, 093042 (2021).
[39] M. E. Fox, F. C. Szoka, and J. M. J. Fréchet, Soluble polymer carriers for the treatment of cancer: The importance of molecular architecture, Acc. Chem. Res. 42, 1141 (2009).
[40] B. Chen, K. Jerger, J. M. J. Fréchet, and F. C. Szoka Jr., The influence of polymer topology on pharmacokinetics: Differences between cyclic and linear PEGylated poly(acrylic acid) comb polymers, J. Control. Release 140, 203 (2009).
[41] C. Scholz, P. Kos, and E. Wagner, Comb-like oligoaminoethane carriers: Change in topology improves pDNA delivery, Bioconjugate Chem. 25, 251 (2014).
[42] N. Nasongkla, B. Chen, N. Macaraeg, M. E. Fox, J. M. J. Fréchet, and F. C. Szoka, Dependence of pharmacokinetics and biodistribution on polymer architecture: Effect of cyclic versus linear polymers, J. Am. Chem. Soc. 131, 3842 (2009).
[43] A. Kwok and S. L. Hart, Comparative structural and functional studies of nanoparticle formulations for DNA and siRNA delivery, Nanomedicine 7, 210 (2011).
[44] C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Active particles in complex and crowded environments, Rev. Mod. Phys. 88, 045006 (2016).
[45] K. Kremer and G. S. Grest, Dynamics of entangled linear polymer melts: A molecular-dynamics simulation, J. Chem. Phys. 92, 5057 (1990).
[46] J. D. Weeks, D. Chandler, and H. C. Andersen, Role of repulsive forces in determining the equilibrium structure of simple liquids, J. Chem. Phys. 54, 5237 (1971).
[47] S. Plimpton, Fast parallel algorithms for short-range molecular dynamics, J. Comput. Phys. 117, 1 (1995).
[48] M. Rubinstein and R. H. Colby, Polymer Physics, Vol. 23 (Oxford University Press, New York, 2003).
[49] P.-H. Wu, A. Giri, S. X. Sun, and D. Wirtz, Three-dimensional cell migration does not follow a random walk, Proc. Natl. Acad. Sci. USA 111, 3949 (2014).
[50] T. Eisenstecken, G. Gompper, and R. G. Winkler, Internal dynamics of semiflexible polymers with active noise, J. Chem. Phys. 146, 154903 (2017).
entry_id: http://arxiv.org/abs/2307.01435v1
published: 20230704020703
title: A tangential and penalty-free finite element method for the surface Stokes problem
authors: Alan Demlow, Michael Neilan
primary_category: math.NA
categories: math.NA, cs.NA, 65N12, 65N15, 65N30
A tangential and penalty-free finite element method for the surface Stokes problem
(running head: FEM for surface Stokes)
The first author was partially supported by NSF grant DMS-2012326. The second author was partially supported by NSF grants DMS-2011733 and DMS-2309425.
A. Demlow and M. Neilan
Alan Demlow, Department of Mathematics, Texas A&M University, College Station, TX, [email protected]
Michael Neilan, Department of Mathematics, University of Pittsburgh, Pittsburgh, PA, [email protected]
2000 Mathematics Subject Classification: 65N12, 65N15, 65N30
August 1, 2023
==================
Surface Stokes and Navier-Stokes equations are used to model fluid flow on surfaces. They have attracted
significant recent attention in the numerical analysis literature because approximation of their solutions poses significant challenges
not encountered in the Euclidean context. One challenge comes from the need to simultaneously enforce tangentiality and
H^1 conformity (continuity) of discrete vector fields used to approximate solutions in the velocity-pressure formulation.
Existing methods in the literature all enforce one of these two constraints weakly either by penalization or by use of Lagrange multipliers.
Missing so far is a robust and systematic construction of surface Stokes finite element spaces which employ
nodal degrees of freedom, including MINI, Taylor-Hood, Scott-Vogelius, and other composite elements which can lead to
divergence-conforming or pressure-robust discretizations. In this paper we construct surface MINI spaces whose velocity fields are tangential.
They are not H^1-conforming, but do lie in H(div) and do not require penalization
to achieve optimal convergence rates. We prove stability and optimal-order energy-norm convergence of the method and demonstrate optimal-order convergence of the velocity field in L_2 via numerical experiments. The core advance in the paper is the
construction of nodal degrees of freedom for the velocity field. This technique also may be used to construct surface counterparts to
many other standard Euclidean Stokes spaces, and we accordingly present numerical experiments indicating optimal-order convergence
of nonconforming tangential surface Taylor-Hood ℙ^2-ℙ^1 elements.
§ INTRODUCTION
In this paper, we consider the surface Stokes problem:
-div_γ (Def_γ u) + ∇_γ p + u = f on γ,
div_γ u = 0 on γ.
Here, γ ⊂ ℝ^3
is a smooth and connected two-dimensional surface with outward unit
normal n,
P = I - n ⊗ n is the orthogonal projection
onto the tangent space of γ,
and ∇_γ and div_γ are the surface
gradient and surface divergence operators, respectively.
Furthermore,
Def_γ is the tangential deformation operator,
and the forcing function f is assumed to be tangential
to the surface to ensure well-posedness. Further assumptions
and notation are given in Section <ref>;
cf. <cit.> for derivation of the surface Stokes problem and related models and further discussion of their properties.
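Although the precise notation is fixed in Section <ref>, a common way to realize these surface operators numerically at a point with unit normal n is through the projection P = I - n nᵀ, with the tangential deformation of (an ambient extension of) a velocity field u taken as ½ P(∇u + ∇uᵀ)P. The following small Python sketch only illustrates this convention and is not part of the paper's method:

```python
import numpy as np

def tangent_projection(n):
    """Orthogonal projection P = I - n n^T onto the tangent plane with unit normal n."""
    n = n / np.linalg.norm(n)
    return np.eye(3) - np.outer(n, n)

def tangential_deformation(grad_u, n):
    """Def_gamma(u) ~ 1/2 * P (grad u + grad u^T) P, where grad_u is the 3x3 Jacobian
    of an ambient extension of u evaluated at a surface point with normal n."""
    P = tangent_projection(n)
    return 0.5 * P @ (grad_u + grad_u.T) @ P
```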
The system of equations (<ref>) is subject to the
tangential velocity constraint u · n = 0.
To address degeneracies related to Killing fields, i.e., non-trivial tangential vector fields in
the kernel of Def_γ, we include a zeroth-order mass term in
the momentum equations (<ref>) (cf. Remark <ref>).
We consider surface finite element methods (SFEMs),
a natural methodology mimicking the variational formulation and built upon classical
Galerkin principles. In this approach
the domain γ is approximated by a polyhedral (or higher-order) surface Γ_h
whose faces constitute the finite element mesh.
Similar to the Euclidean setting,
SFEMs for the surface Stokes problem based on the standard velocity-pressure formulation
must use compatible discrete spaces.
Specifically, a discrete inf-sup condition must be satisfied.
Given that SFEMs
utilize the same framework as their Euclidean counterparts,
employing mappings via affine or polynomial diffeomorphisms,
one may anticipate that numerous
classical inf-sup stable Stokes pairs
can be adapted to their surface analogues,
readily enabling the construction of stable SFEMs for (<ref>).
However, the tangential velocity constraint poses a significant hurdle
to constructing stable and convergent SFEMs. As Γ_h is merely Lipschitz continuous,
its outward unit normal is discontinuous at mesh edges and vertices.
As a result, the tangential projection
of continuous, piecewise smooth functions
does not lead to H^1-conforming functions. Moreover,
there do not exist canonical, degree-of-freedom-preserving pullbacks for
tangential H^1 vector fields; in particular, the Piola
transform preserves tangentiality and in-plane normal continuity, but not in-plane tangential continuity.
Finally, a continuous, tangential, and piecewise smooth vector field
on Γ_h must necessarily vanish on mesh corners except in exceptional cases where all incident triangles are coplanar.
Indeed, at a mesh corner there are at least three
faces emanating from a common vertex, whose outward unit normal vectors
span ℝ^3. Therefore tangentiality of a continuous vector field
with respect to each of the three planes implies that it vanishes at the vertex.
Thus any piecewise polynomial space simultaneously satisfying both
tangentiality and continuity exhibits a locking-type phenomenon with
poor approximation properties.
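As a small numerical illustration of this rank argument (our own example, not part of the paper's development): take three unit normals incident to a vertex that span ℝ^3; the only vector tangent to all three faces is the zero vector.

import numpy as np

# Unit normals of three faces meeting at a mesh vertex; since they span R^3,
# the tangentiality conditions N v = 0 force v = 0.
N = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.2, 1.0],
              [0.2, 0.0, 1.0]])
N /= np.linalg.norm(N, axis=1, keepdims=True)
print(np.linalg.matrix_rank(N))   # 3 -> the set of common tangent vectors is {0}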
There is a substantial recent literature on numerical approximation of the surface Stokes and related
problems such as the surface vector Laplace equation. Most of these circumvent the difficulties described above in one of three ways: by relaxing the pointwise tangential constraint, by relaxing ^1-conformity of the finite element space, or by using a different formulation of the surface Stokes problem.
For the former, one can weakly impose the tangential constraint via penalization or
Lagrange multipliers <cit.>.
In principle, this allows one to use inf-sup stable Euclidean Stokes pairs to solve
the analogous surface problem. However, this methodology
requires superfluous degrees of freedom, as the velocity space is approximated by arbitrary vectors in
ℝ^3 rather than tangential vectors. In addition, an unnaturally high-order geometric approximation of the unit normal
of the true surface is needed to obtain optimal-order approximations. Therefore for problems
in which full information of the exact surface is unknown (e.g., free-boundary problem), these penalization schemes
lead to SFEMs with sub-optimal convergence properties.
Alternatively, one may relax ^1-conformity and use finite element trial and test
functions that are not continuous on the discrete surface Γ_h.
In this direction, SFEMs utilizing tangentially- and ( div)-conforming finite element spaces
such as Raviart-Thomas and Brezzi-Douglas-Marini combined
with discontinuous Galerkin techniques are proposed and analyzed in <cit.>; cf. <cit.> for similar methods for Euclidean Stokes equations.
Here, additional consistency, symmetry, and stability terms are added to the method. These terms add some complexity to the
implementation, especially for higher-order surface approximations, but are standard in the context of discontinuous Galerkin methods.
Optimal-order convergence is observed experimentally for a standard SFEM formulation that does not require higher-order approximations of any geometric information.
Discretizations of stream function formulations of the surface Stokes equations have also appeared in the literature <cit.>. However, as with methods weakly enforcing tangentiality, they require higher-order approximation to the surface normal and in addition require computation of curvature information which can in and of itself be a challenging problem.
These methods are also restricted to simply connected surfaces. As a final note, trace SFEMs, in which discretizations of surface PDE are formulated with respect to a background 3D mesh and a corresponding 3D finite element space, are especially important in the context of dynamic surface fluid computations. Trace formulations are well-developed for H^1 conforming/tangentially nonconforming methods and stream function formulations, but have not yet appeared for ( div)-conforming methods.
In this paper, we design a SFEM for the surface Stokes problem (<ref>)
using a strongly tangential finite element space that is based
on a conforming, inf-sup stable Euclidean pair.
The method is based on the standard variational formulation for the Stokes problem
and does not require additional consistency terms or extrinsic penalization.
As far as we are aware, this is the first SFEM for the surface Stokes problem
with these properties.
The key issue that we address is the assignment of degrees of freedom (DOFs)
of tangential vector fields at Lagrange nodes, in particular,
at vertices of the surface triangulation.
To expand on this last point and to describe our proposed approach,
consider a vertex/DOF, call it a, of the triangulation of the discrete geometry
approximation Γ_h, and let _a denote the set of faces
in the triangulation that have a as a vertex.
We wish to interpret and define the values
of tangential vector fields forming our finite element space at this vertex
in a way that ensures the resulting discrete spaces
have desirable approximation and weak-continuity properties.
As the mesh elements in _h generally
lie in different planes, it is immediate that such vector fields
are generally multi-valued at a.
Let =_γ denote the closest point projection onto γ,
and note that, because γ is smooth, continuous and tangential vector fields
are well-defined and single-valued at (a). Thus,
as the Piola transform preserves tangentiality,
a natural assignment is to construct finite element functions
with the property |_K(a) = _^-1|_(K) ∀ K∈_a
for some vector field tangent at (a), where _^-1 is the Piola transform
of the inverse mapping ^-1:γ→Γ_h; see Figure <ref>.
Imposing this condition on Lagrange finite element DOFs likely
leads to the sought-after approximation and weak-continuity properties,
and thus, conceptually may lead to convergent SFEMs for (<ref>).
However, the implementation of the resulting finite element method
requires explicit information about the exact surface γ and its closest
point projection. Therefore this construction is of little practical value.
Instead of this idealized construction,
we fix an arbitrary face K_a∈_a.
Given the value |_K_a(a) and K∈_a, we then assign
|_K(a) = _^-1_K_a|_K_a(a), the Piola transform of |_K_a(a)
with respect to the inverse of the closest point projection onto the plane containing K_a; see Figure <ref>.
This transform is linear with a relatively simple formula (cf. Definition <ref>), and
it only uses geometric information from Γ_h. Moreover,
we show that this construction is only an O(h^2) perturbation from the idealized setting.
As a result, the constructed finite element spaces possess sufficient weak continuity properties
to ensure that the resulting scheme is convergent for the surface Stokes problem (<ref>).
To clearly communicate the main ideas and to keep technicalities at minimum, we focus
on a polyhedral approximation to γ and on
the lowest-order MINI pair, which in the Euclidean setting
takes the discrete velocity space to be the (vector-valued) linear Lagrange space
enriched with cubic bubbles, and the discrete pressure space to be
the (scalar) continuous piecewise-linear Lagrange space. We expect the main ideas to be applicable
to other finite element pairs (e.g., Taylor–Hood, Scott-Vogelius <cit.>),
although the stability must be shown on a case-by-case basis. Below we
present numerical experiments demonstrating the viability of our approach for
ℙ^2 surface approximations paired with a ℙ^2-ℙ^1 Taylor-Hood finite element space and plan to address
generalizations of our approach more fully in future works.
The rest of the paper is organized as follows.
In the next section, we introduce the notation and provide
some preliminary results. In Section <ref>,
we define the surface finite element spaces based on the classical
MINI pair. Here, we show that the spaces have optimal-order approximation
properties and are inf-sup stable. We also establish weak continuity
properties of the discrete velocity space via an H^1-conforming relative
on the true surface. In Section <ref>, we define the finite element
method and prove optimal-order error estimates in the energy norm. Finally, in Section <ref>
we provide numerical experiments which support the theoretical results.
§ NOTATION AND PRELIMINARIES
We assume γ⊂^3
is a smooth, connected, and orientable two-dimensional surface
without boundary. The signed distance function
of γ is denoted by d, which
satisfies d<0 in the interior of γ and d>0 in the
exterior.
We set ν(x) = ∇ d(x) to be the outward-pointing unit normal
(where the gradient is understood as a column vector)
and H(x) = D^2 d(x) the Weingarten map.
The tangential projection operator
is Π = I - ν⊗ν,
where I is the 3× 3 identity matrix,
and the outer product
of two vectors a and b satisfies
( a⊗ b)_i,j = a_i b_j.
The smoothness of γ ensures
the existence of δ>0 sufficiently small such
that the closest point projection
(x) := x-d(x)(x)
is well defined in
the tubular region
U = {x∈^3: dist(x,γ)≤δ}.
For a scalar function q:γ→
we define its extension
q^e:U→ via q^e = q∘.
Likewise, for = (v_1,v_2,v_3)^⊺ :γ→^3
its extension ^e:U→^3 satisfies
(^e)_i = v_i^e for i=1,2,3.
Define the surface gradient _γ q = q^e,
and for a (column) vector field = (v_1,v_2,v_3)^⊺ :γ→^3,
we let ^e = ( v_1^e, v_2^e, v_3^e)^⊺
denote the Jacobian matrix of ^e.
We then see (^e )_i,: = (( v^e_i)^⊺) = ( v^e_i)^⊺ = (∇_γ v_i)^⊺,
i.e., the ith row of ^e coincides with (∇_γ v_i)^⊺.
The tangential surface gradient (covariant derivative) of is
defined by _γ = ^e,
and the surface divergence operator of
is div_γ = tr(_γ).
The deformation of a tangential vector field
is defined as the symmetric part of its surface gradient, i.e.,
Def_γ = 1/2(_γ+(_γ)^⊺).
For a matrix field A:^3× 3, the divergence
div_γ A is understood to act row-wise.
Let L_2(γ) denote the space of square-integrable
functions on γ and let L_2(γ)
be the subspace of L_2(γ)
consisting of L_2-functions with vanishing mean.
We let W_p^m(γ) be the Sobolev space
of order m and exponent p on γ with corresponding norm ·_W^m_p(γ).
We use the notation H^m(γ) = W_2^m(γ) with ·_H^m(γ) = ·_W^m_2(γ),
and the convention |·|_H^0 = ·_L_2, |·|_W^0_p = ·_L_p.
Analogous vector-valued spaces are denoted in boldface (e.g., _2(γ) = (L_2(γ))^3
and ^1(γ) = (H^1(γ))^3).
We let _T^1(γ)
be the subspace of ^1(γ) whose members
are tangent to γ, and set
( div_γ;γ) = {∈_2(γ): div_γ∈ L_2(γ)}.
Let Γ_h be a polyhedral surface
approximation of γ with triangular faces.
We assume that Γ_h is an O(h^2)
approximation in the sense that d(x) = O(h^2)
for all x∈Γ_h. We further assume
h is sufficiently small to ensure Γ_h⊂ U, in particular,
the closest point projection is well-defined on Γ_h.
We denote by _h the set of faces of Γ_h,
and assume this triangulation is shape-regular (i.e.,
the ratio of the diameters of the inscribed and circumscribed
circles of each face is uniformly bounded).
For simplicity and to ease the presentation,
we further assume that _h is quasi-uniform, i.e.,
h:=max_K' diam(K') ≈ diam(K)
for all K∈_h.
The image of the mesh elements and the resulting set on the exact surface are given, respectively, by
K^γ = (K), _h^γ = {( K): K∈_h}.
We use the notation a≲ b (resp., a≳ b) if
there exists a constant C>0 independent of the mesh parameter h
such that a≤ C b (resp., a≥ C b). The statement
a≈ b means a≲ b and a≳ b.
Set _h to be the set of vertices in _h,
and for each K∈_h, let _K denote
the set of three vertices of K.
For each a∈_h, let _a⊂_h
denote the set of faces having a as a vertex.
For K∈_h, we define
the patches
ω_K = ⋃_{K'∈_h: K̅'∩K̅≠∅} K', ω_K' = ⋃_{K'∈_h: K̅'∩ω_K≠∅} K',
so that ω_K⊂ω_K' ⊂Γ_h.
The patches ω_K^γ and ω'_K^γ
associated with K^γ = (K) are defined analogously.
The (piecewise constant) outward unit normal of Γ_h
is denoted by _h,
and we shall use the notation _K = _h|_K∈^3,
its restriction to K∈_h.
We assume that |∘ - _h|≲ h.
The tangential projection
with respect to Γ_h is _h = I-_h⊗_h,
and we assume there exists c>0 independent of h such that
·_h≥ c>0 on Γ_h.
We let μ_h(x) satisfy μ_h dσ_h(x) = dσ((x)),
where dσ and dσ_h are surface measures
of γ and Γ_h, respectively. In particular,
∫_Γ_h (q∘)μ_h = ∫_γ q ∀ q∈L_1(γ).
From <cit.>,
we have
μ_h(x) = (x)·_h(x) ∏_i=1^2 (1-d(x)κ_i(x)) x∈Γ_h,
and
|1-μ_h(x)|≲ h^2,
where {κ_1,κ_2} are the eigenvalues
of H, whose corresponding eigenvectors are orthogonal to .
We set μ_K = μ_h|_K to be the restriction of μ_h to K∈_h.
Surface differential operators with respect to Γ_h
are denoted and defined analogously to those on γ.
We also set (m∈ℕ)
^m_h(Γ_h) = {∈_2(Γ_h): |_K∈^m(K) ∀ K∈_h}, _H^m_h(K)^2 = ∑_K∈_h_H^m(K)^2
to be the piecewise H^m Sobolev space and norm, respectively.
Likewise, ^m_h(γ) is the piecewise
Sobolev space with respect to _h^γ with
corresponding norm _H^m_h(γ)^2 = ∑_K∈_h_H^m(K^γ)^2,
and Def_γ,h denotes the piecewise deformation operator with respect to _h^γ.
We end this section by stating a well-known characterization of
( div_Γ_h;Γ_h) ={∈_2(Γ_h): div_Γ_h∈ L_2(Γ_h)}.
For each edge e of the mesh,
denote by K_1^e,K_2^e∈_h
the two triangles in the mesh such that e = K_1^e∩ K_2^e.
Let _j^e denote the outward in-plane normal to K_j^e,
and note that in general, ^e_1≠ -^e_2 on e.
Then a vector field ∈^1_h(Γ_h) satisfies
∈( div_Γ_h;Γ_h) if and only if <cit.>
_1·_1^e|_e + _2·_2^e|_e = 0 for all edges e,
where _j = |_K_j^e.
§.§ Extensions and lifts
For the rest of the paper,
we view the closest point projection
as a mapping from the discrete surface approximation
to the true surface, i.e.,
:Γ_h→γ.
Restricted to Γ_h
the projection is a bijection,
and in particular has a well-defined
inverse: ^-1:γ→Γ_h.
Recall that for a scalar or vector-valued function q on γ,
its extension (now to Γ_h) is q^e = q∘.
For a scalar or vector-valued function q defined on Γ_h, we define its lift
via q^ℓ = q∘^-1. Note that (q^ℓ)^e = q on γ and likewise, (q^e)^ℓ = q on Γ_h.
For q∈ H^m_h(γ) (m=0,1,2),
there holds
q_H^m(K^γ)≈ q^e_H^m(K) ∀ K∈_h,
which follows
from a change of variables, the chain rule, and the smoothness assumptions
of γ (cf. <cit.>).
§.§ Surface Piola transforms
Following <cit.> we summarize the divergence-conforming
Piola transform with respect to a mapping between surfaces.
Let _0 and _1 be two sufficiently smooth surfaces,
and let Φ:_0→_1 be a differeomorphism with inverse
Φ^-1:_1→_0. Let dσ_i be
the surface measure of _i,
and let μ formally satisfy μ dσ_0 = dσ_1.
Then the Piola transform of a vector field :_0→^3
with respect to Φ
is given by
_Φ = μ^-1 DΦ.
Likewise, for :_1→^3 its Piola transform
with respect to Φ^-1 is
_Φ^-1 = μ D Φ^-1.
Similar to the Euclidean setting, there holds
div__0 = μ div__1_Φ ∀∈( div;_0),
in particular, _Φ:( div__0;_0)→( div__1;_1)
and _Φ^-1:( div__1;_1)→( div__0;_0)
are bounded mappings. Moreover, as DΦ and DΦ^-1
are tangent maps, the Piola transform yields tangential vector fields:
if _j is the unit normal of the surface
_j,
then (_Φ)·_1=0 on _1
and (_Φ^-1)·_0=0 on _0.
In the case Φ =, _0 = Γ_h,
and _1 = γ (so that μ=μ_h),
the Piola transform of :Γ_h→^3 with _h =
is <cit.>
∘ := _ = 1/μ_h[ - d H] ,
whereas the Piola transform of :γ→^3
with respect to the inverse ^-1 is given by
:= _^-1 = μ_h [ I- ⊗_h/·_h][ I-d H]^-1 (∘).
Note that = on γ and = on Γ_h.
Moreover, it follows from (<ref>) that for all K∈_h,
∫_K^γ ( div_γ) q = ∫_K ( div_Γ_h) q^e ∀∈( div_γ;K^γ), q∈L_2(K^γ).
and
∫_K ( div_Γ_h) q = ∫_K^γ ( div_γ) q^ℓ ∀∈( div_Γ_h;K), q∈L_2(K).
The following lemma states
the equivalence of norms of vector fields and their Piola transforms.
For K∈_h, let :K^γ→^3
and = _^-1:K→^3
be related by (<ref>) restricted to K.
Then if ∈^m(K^γ) for some m∈{0,1,2},
there holds ∈ H^m(K). Moreover,
_H^m(K^γ)≈_H^m(K).
The proof for the cases m=0,1 is found in <cit.>.
The case m=2 follows from similar arguments and is therefore omitted.
We also need a similar result
that relates the L_2 norm
of the deformation tensors
of and its Piola transform .
The proof of the following result
is given in the appendix.
For K∈_h, let ∈_T^1(K) and ∈_T^1(K^γ)
be related via = _.
Then
| Def_γ-( Def_Γ_h)∘^-1|≲ h (|(_Γ_h)∘^-1|+ |∘^-1|).
Consequently, by a change of variables,
Def_γ_L_2(K^γ)≲ Def_Γ_h_L_2(K)+ h ( _Γ_h_L_2(K)
+ _L_2(K)).
We now apply the above definitions of Piola transforms to mappings between planes (surface triangles), which is critical to our construction of vertex degrees of freedom for vector fields on Γ_h.
For each vertex a∈_h in the triangulation,
we arbitrarily choose a single (fixed) face K_a∈_a.
For K∈_a, we define
_a^K:ℝ^3→ℝ^3 by
_a^K v = (ν_K_a·ν_K) [ I - ν_K_a⊗ν_K/(ν_K_a·ν_K)] v ,
where we recall ν_K_a and ν_K are the outward unit normals of K_a and K, respectively.
In particular, _a^K is the Piola transform of v with respect
to the inverse of the closest point projection onto the plane containing K_a (cf. (<ref>)).
By properties of the Piola transform, _a^K v
is tangential to K, i.e., (_a^K v)·ν_K=0 for all v∈ℝ^3.
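To make the map concrete, here is a minimal NumPy sketch of _a^K acting on a vector (the function name and the sample normals are ours, not part of the method's implementation):

import numpy as np

def M_aK(nu_Ka, nu_K, v):
    # (nu_Ka . nu_K) v - nu_Ka (nu_K . v): the Piola-type map of the definition above
    return np.dot(nu_Ka, nu_K) * v - nu_Ka * np.dot(nu_K, v)

nu_Ka = np.array([0.0, 0.0, 1.0])                  # normal of the master face K_a
nu_K  = np.array([0.0, np.sin(0.1), np.cos(0.1)])  # normal of a tilted neighbouring face K
v     = np.array([1.0, 2.0, 0.0])                  # a vector tangent to K_a
w     = M_aK(nu_Ka, nu_K, v)
print(np.dot(w, nu_K))                             # 0 up to round-off: the image is tangent to K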
We next show that the “ideal” and “practical” interpretations of vectors at vertices discussed in the introduction (cf. Figures <ref> and <ref>) do not differ by too much.
Fix a∈_h and let lie in the tangent plane of γ at
(a). For K∈_a, let _K = _^-1|_K be the Piola transform of to K via the inverse of the closest point projection (cf. (<ref>)).
Then
|_K-ℳ_a^K _K_a|
≲ h^2 |_K_a| ≤ h^2 ||.
Using H = 1/2 ||^2 = 0,
we have H = H.
Note in addition that [ I - ⊗_K/·_K ] = [ I - ⊗_K/·_K ].
Therefore by (<ref>), (<ref>), and (<ref>) we have
_K
= μ_K [ I - ⊗_K/·_K ] [ I - d H ] ^-1
= μ_K [ I - ⊗_K/·_K ] [ I - d H ] ^-11/μ_K_a [ - d H ] _K_a
= μ_K [ I - ⊗_K/·_K ] [ I - d H ] ^-11/μ_K_a [ I- d H ] _K_a
= ·_K/·_K_a [ I - ⊗_K/·_K ] Π_K_a
= ·_K/·_K_a [ I - ⊗_K/·_K ] _K_a.
Now |-_K| + |_K_a-_K| ≲ h, |1-·_K| = 1/2 |-_K|^2 ≲ h^2, and |1-_K_a·_K|≲ h^2.
Thus employing (<ref>) and the identity _K_a·_K_a=0 we have
|_K-_a^K _K_a|
= | ( ·_K/·_K_a [ I - ⊗_K/·_K ] - _K_a·_K [ I - _K_a⊗_K/_K_a·_K ] ) _K_a |
≲ | [ I - ⊗_K ] - [ I - _K_a⊗_K ] _K_a |+h^2 |_K_a|
= |( -_K_a) ⊗_K _K_a| + h^2 |_K_a|
= |(-_K_a) ⊗ (_K-_K_a) _K_a| + h^2 |_K_a|
≲ h^2 |_K_a|.
Finally noting that |_K_a| = | _|_K_a^-1| ≲|| (cf. (<ref>)) completes the proof.
In SFEMs it is common to use a higher-order surface approximation
Γ to γ of polynomial degree k (here we consider k=1). In that case ν_K is no
longer constant on K, and we have |ν(a)-ν_K(a)|≲ h^k. The results of Lemma <ref>
easily generalize to this situation with h^2k replacing h^2 on the right hand side of (<ref>).
§ FINITE ELEMENT SPACES AND INF-SUP STABILITY
By utilizing Lemma <ref>, we can construct tangential
finite element spaces on the surface approximation Γ_h using nodal (Lagrange) basis functions.
The essential idea is to enforce continuity at nodal degrees of freedom in a weak sense through
the mapping _a^K given in Definition <ref>. Although this procedure
does not yield a globally continuous finite element space, it preserves in-plane normal continuity and exhibits
weak continuity properties. These properties are generally sufficient for achieving convergence in second-order elliptic problems.
In the following discussion, we focus on the construction of the lowest-order MINI Stokes pair for simplicity <cit.>.
However, we expect that Definition <ref> and Lemma <ref> provide a general framework
for constructing convergent finite element schemes based on classical and conforming finite element pairs such as Taylor-Hood and Scott-Vogelius <cit.>.
§.§ Surface MINI space and approximation properties
Let K̂ be the reference triangle with vertices (0,0),(1,0),(0,1),
and for K∈_h, let F_K:K̂→ K be
an affine diffeomorphism.
The constant Jacobian matrix of F_K is denoted
by DF_K∈^3× 2. Note that the columns
of DF_K span the tangential space of K.
For a vector-valued function :K̂→^2,
its Piola transform with respect to F_K is given by
(x) = (_F_K)(x):= 1/JDF_K (x̂), x = F_K(x̂),
and J = √((DF_K^⊺ DF_K)).
Let b_K be the standard cubic bubble function on K,
i.e., the product of the three barycentric coordinates of K.
The local MINI space defined on the reference triangle
is given by :=_1(K̂) ⊕ b_K̂_0(K̂),
where _k(D) is the space of polynomials of degree ≤ k
with domain D, and _k(D) = [_k(D)]^2.
We then define the surface finite element spaces
on Γ_h as
_h = {∈_2(Γ_h):∀ K∈_h ∃∈, _K = _F_K ;_K(a) = _a^K(_K_a(a)) ∀ K∈_a, ∀ a∈_h},
Q_h = {q∈ H^1(Γ_h)∩L_2(Γ_h): q_K∈_1(K) ∀ K∈_h},
where _K = |_K is the restriction of to K∈_h,
_a^K is defined in Definition <ref>, and we recall _h
is the set of vertices in _h.
For ∈_h, we let _L denote the linear portion of , i.e.,
_L is the unique tangential and piecewise linear vector
in _h satisfying (_L)_K_a(a) = _K_a(a) for all a∈_h.
We then have the following identity on each K∈_h:
= _L + 60 b_K _K ( - _L) ∀∈_h,
where
we have used the fact ∫_K b_K = |K|/60 for all K∈_h.
Because the columns of
DF_K span the tangential space of K,
we see that functions in the discrete velocity space _h are tangential, i.e., ·_h = 0
for all ∈_h. In addition, due to the normal-preserving properties
of the transform _a^K, the space is H( div)-conforming
as the next result shows.
There holds _h ⊂( div_Γ_h;Γ_h).
Due to the properties of the cubic bubble, it is sufficient
to show that the linear component of ∈_h
satisfies the in-plane normal continuity condition (<ref>)
across all edges in _h.
Let a∈_h be a vertex of _h,
and let K_1,K_2∈_a be two elements that have a
as a vertex and share a common edge e = K_1∩ K_2.
Denote by ^e_j the in-plane outward unit normal
vector with respect to K_j restricted to e.
Using the definitions of the finite element space and
the operator _a^K, along with the Binet-Cauchy identity,
there holds for any ∈_h,
_j(a) ·_j^e
= _a^K_j(_K_a(a)) ·^e_j
= (_K_a·_K_j)(_K_a(a) ·^e_j) - (_K_a·^e_j)(_K_j·_K_a(a)) =
(_K_a×_K_a(a))· (_K_j×^e_j),
where _j = _K_j = |_K_j.
Therefore,
_1(a) ·^e_1+_2(a)·^e_2
= (_K_a×_K_a(a))·((_K_1×^e_1) +(_K_2×^e_2))=0.
Because _j is a linear polynomial on e,
we conclude that (<ref>) is satisfied on all edges.
This implies the desired result _h ⊂( div_Γ_h;Γ_h).
For each ∈(γ)∩^1_T(γ)∩_h^2(γ),
there exists _h ∈_h such that
h^m -_h_H^m(K)≲ h^2 _H^2_h(ω_K^γ) ∀ K∈_h, m=0,1,
with = _^-1.
Given ∈(γ)∩^1_T(γ)∩_h^2(γ),
we uniquely define :=_h ∈_h such that each
component of is a piecewise linear polynomial
and satisfies
_K_a(a) = _K_a(a) ∀ a∈_h.
Let _I
be the elementwise (discontinuous), linear interpolant of with respect to the vertices in _h, i.e.,
(_I)_K∈_1(K) with (_I)_K(a) = _K(a) for all K∈_h and a∈_K.
By Lemma <ref> (with = (a)), (<ref>),
and the definition of _h we have for each vertex a∈_h,
|(_I - )_K(a)| = |(_K - _a^K _K_a)(a)| = |(_K - _a^K _K_a)(a)| ≲
h^2 |_K_a(a)|
∀ K∈_a.
Consequently, by standard inverse estimates,
h^m _I- _H^m(K) ≲_I - _L_2(K)
≲ h_I-_L_∞(K) =
hmax_a∈_K |(_I - )_K(a)|≲ h^3 max_a∈_K |_K_a(a)| m=0,1.
Using inverse estimates once again,
and applying standard interpolation results yields
max_a∈_K |_K_a(a)|
≤_I_L_∞(ω_K)
≲ h^-1_I_L_2(ω_K)
≲ h^-1 (_L_2(ω_K)+ - _I_L_2(ω_K))
≲ h^-1 (_L_2(ω_K)+ h^2 ||_H^2_h(ω_K)).
Therefore,
h^m _I - _H^m(K)≲ h^2 _H^2_h(ω_K) m=0,1,
and so by (<ref>),
h^m -_H^m(K) ≲ h^m -_I_H^m(K)+h^m _I-_H^m(K)≲ h^2 _H^2_h(ω_K^γ).
There exists a constant α>0 independent
of h such that
sup_∈_h\{0}∫_Γ_h ( div_Γ_h)q/_H^1_h(Γ_h)≥αq_L_2(Γ_h) ∀ q∈ Q_h.
Fix q∈ Q_h, and
let ∈^1_T(γ) satisfy <cit.>
div_γ = (μ_h^-1q)^ℓ∈L_2(γ), and _H^1(γ)≲(μ_h^-1q)^ℓ_L_2(γ)≲q_L_2(Γ_h),
where we used (<ref>) and (<ref>)
in the last step. Let ^SZ_h ^e
be the Scott-Zhang interpolant of the extension ^e∈^1(Γ_h)
onto the space of continuous piecewise linear polynomials with respect to _h<cit.>, and set _h = (^SZ_h ^e)^ℓ∈^1_T(γ).
From (<ref>), -_h = Π (-^SZ_h ^e), and approximation properties of the Scott-Zhang interpolant,
there holds on each K∈_h,
_h_H^1(K^γ)+h^-1 - _h_L_2(K^γ) ≲(^SZ_h ^e)^ℓ_H^1(K^γ)+h^-1 - (^SZ_h ^e)^ℓ_L_2(K^γ)
≲^SZ_h ^e_H^1(K)+h^-1^e - ^SZ_h ^e_L_2(K)
≲^e_H^1(ω_K)≲_H^1(ω_K^γ).
Noting _h∈(γ)∩^1_T(γ)∩_h^2(γ),
we define ∈_h such that
_K = (_h _h)_K + 60 b_K _K (-_h _h) ∀ K∈_h,
where _h _h is given in Lemma <ref>.
We then have ∫_K = ∫_K, and by (<ref>) and Lemma <ref>,
_H^1(K) ≤_h _h_H^1(K)
+-_h _h_H^1(K)
≲_H^1(K)+-_h _h_H^1(K)+ h^-1-_h _h_L_2(K)
≤_H^1(K)+-_h_H^1(K)+_h-_h _h_H^1(K)
+h^-1 (-_h_L_2(K)+_h-_h _h_L_2(K))
≲_H^1(ω_K^γ) + h_h_H^2_h(ω_K^γ).
By (<ref>), a standard inverse estimate, and the H^1-stability
properties of the Scott-Zhang interpolant,
h_h_H^2_h(ω_K^γ)≲ h (^SZ_h ^e)^ℓ_H^2_h(ω_K^γ)≲ h^SZ_h ^e_H^2_h(ω_K)≲^SZ_h^e_H^1(ω_K)≲_H^1(ω'_K^γ),
and so by (<ref>),
_H^1(K)≲_H^1(ω'_K_γ) ∀ K∈_h ⟹ _H^1(Γ_h)≲_H^1(γ)≲q_L_2(Γ_h).
Next, integration by parts, the identity ∫_K = ∫_K,
and applying
(<ref>) yields
∫_Γ_h ( div_Γ_h) q
= -∫_Γ_h·_Γ_h q
= -∫_Γ_h·_Γ_h q
= ∫_Γ_h ( div_Γ_h) q
= ∫_γ ( div_γ) q^ℓ.
We then use (<ref>), (<ref>), and (<ref>)
to obtain
∫_Γ_h ( div_Γ_h) q
≳q_L_2(Γ_h)^2.
This identity combined with (<ref>) completes the proof.
§.§ H^ 1_T-conforming approximations to discrete functions
While the finite element space
_h is merely ( div_Γ_h;Γ_h)-conforming (cf. Proposition <ref>),
the following lemma shows that functions
in this space are “close” to an ^1-conforming relative.
Given ∈_h, denote by = _ its Piola transform via the closest point projection to γ. Then there exists _c ∈_T^1(γ) such that
- _c _L_2(K^γ) + h | - _c |_H_h^1(K^γ)≲ h^2 _L_2(K^γ) ∀ K^γ∈_h^γ.
On an element K∈_h, we first write =_L + b_K, where _L is componentwise
affine on K, and ∈^3 is tangent to K (cf. (<ref>)). Likewise, = _L + b_K^ℓ is
the Piola transform of to γ. We next let be the unique
continuous piecewise linear polynomial with respect to _h satisfying (a)=_K_a(a) for each vertex a∈_h.
We then set
_c=-d/(1-d κ_1)(1-dκ_2)^ℓ + b_K^ℓ = μ_h^-1·_h (-d) _ℓ + b_K^ℓ.
Note that _c∈_T^1(γ), and
- _c = _L - μ_h^-1·_h (-d) ^ℓ.
Fixing K ∈_h, by norm equivalence (cf. (<ref>)) we prove (<ref>) by establishing that
_L - μ_h^-1·_h (-d) ^ℓ_L_2(K) + h |_L - μ_h^-1·_h (-d) ^ℓ |_H_h^1(Γ_h)≲ h^2 _L_2(K),
where μ_h^-1·_h (-d) ^ℓ = _|_K^-1μ_h^-1·_h (-d) ^ℓ.
We show (<ref>) in three steps.
(i) Employing (<ref>), (<ref>), and _h (·_h - ⊗_h) = ·_h - ⊗_h, we have that
μ_h^-1·_h (-d) ^ℓ = _K ((·_K) I-⊗_K) .
Here we interchangeably write _h=_K and _h =_K in order to better distinguish dependence on the element K. Using _L ·_K=0 and _K _L=_L, we then have
_L- μ_h^-1·_h (-d) ^ℓ =_K [ _L-((·_K) I-⊗_K) ]
= [(1-·_K) _L ]+ [·_K _K (_L-) ]+ [_K (-_K) _K · (- _L)]
=: I + II + III.
(ii) We next bound the terms I, II, and III in L_2. Using |1-·_K| = 1/2 |-_K|^2 ≲ h^2 yields
I_L_2(K)≲ h^2 _L_L_2(K).
Next we use (<ref>) and recall that _K_a(a) ·_K_a=0 to compute that for each vertex a ∈ K
|_K (_L-)(a)| = |_K (_a^K- I) _K_a(a) |
= |_K [((_K_a·_K) I-_K_a⊗_K)- I]_K_a(a)|
= |(_K_a·_K-1) _K _K_a(a)- _K (_K_a-_K) (_K-_K_a) ·_K_a(a)|
≲ h^2 |_K_a(a)| ≲h^2 |_K(a)|.
In the last step we have employed (<ref>) and (<ref>) to obtain |_K_a(a)| = |1/_K ·_K_a_K_a_K(a)| ≲ |_K(a)|. We then use the fact that _K(_L-) and _L are affine, along with inverse inequalities, to obtain
II_L_2(K) ≲_K(_L-)_L_2(K)≲ h max_a∈_K |_K (_L-)(a)|
≲ h^3 _L_L_∞(K)≲ h^2_L_L_2(K).
In order to bound III, we first proceed similarly to (<ref>) to obtain
|(_L-)(a)|= |(_K_a·_K-1) _K_a(a) -_K_a ( _K-_K_a) ·_K_a(a)| ≲ h|_K(a)|.
Using |_K (-_K)| ≲ h, we thus have similar to above that
III_L_2(K)≲ h _L-_L_2(K)≲ h^2 _L_L_2(K).
Recalling that =_L+ b_K and b_K(a)=0 at vertices a, we again use inverse inequalities to obtain
_L_L_2(K)≲ h _L_L_∞(K)= h max_a ∈_K |_L(a)| ≤ h _L_∞(K)≲_L_2(K).
Collecting the inequalities (<ref>), (<ref>)–(<ref>)
yields - ^c_L_2(K^γ)≲ h^2 _L_2(K^γ).
(iii) In order to bound ∇_Γ_h( _L- μ_h^-1·_h (-d) ^ℓ)_L_2(K), we recall (<ref>) and consider first ∇_Γ_h I. First note that |∇ (1-·_K)| = |_K| = | (_K-)| ≲ h. Thus using an inverse inequality and |1-·_K| ≲ h^2, we obtain
∇_Γ_h I_L_2(K)≲1-·_K_L_∞(K)∇_Γ_h_L _L_2(K)
+ ∇(1-·_K)_L_∞(K)_L_L_2(K)≲ h _L_L_2(K).
Employing inverse inequalities, |∇ (·_K)| ≲ h, and (<ref>) also yields
∇_Γ_h II _L_2(K) ≤∇ ( ·_K)_L_∞(K)_K (_L-)_L_2(K) + ·_K_L_∞(K)∇_Γ_h [_K(_L-)]_L_2(K)
≲ h_K (_L-)_L_2(K) + h^-1_K (_L-)_L_2(K)
≲ h _L_L_2(K).
We finally compute using inverse inequalities and (<ref>) that
∇_Γ_h III_L_2(K) ≲∇ [_K (-_K)]_L_∞(K)_L-_L_2(K)
+ _K(-_K)_L_∞(K)∇_Γ_h(_L-)_L_2(K)
≲ (1+h h^-1) _L-_L_2(K)≲ h _L_L_2(K).
Collecting the above inequalities
and employing (<ref>) yields
∇_Γ_h (I+II+III)_L_2(K)≲ h _L_L_2(K)≲ h _L_2(K),
which completes the proof.
§.§ Discrete Korn-type inequalities
From Lemma <ref>, we immediately obtain
a discrete Korn-type inequality on the exact surface γ.
Given ∈_h, there holds
_H_h^1(γ)≲_L_2(γ) + Def_γ,h_L_2(γ),
where = _.
Given ∈_h, let _c ∈_T^1(γ) satisfy (<ref>).
A continuous Korn inequality holds for _c, so using (<ref>) we have
_H_h^1(γ) ≲_c _H^1(γ) + _c - _H_h^1(γ)
≲_c _L_2(γ) + Def_γ_c _L_2(γ) + _c - _H_h^1(γ)
≲_L_2(γ) + Def_γ, h_L_2(γ) + _c - _H_h^1(γ)
≲ (1+h) _L_2(γ) + Def_γ, h_L_2(γ).
From this result, we obtain a discrete Korn inequality
for _h on Γ_h.
There holds
_H^1_h(Γ_h)≲_L_2(Γ_h)+ Def_Γ_h_L_2(Γ_h) ∀∈_h.
We apply Lemma <ref>, (<ref>), and an inverse estimate:
_H^1_h(Γ_h) ≲_H^1_h(γ)
≲_L_2(Γ_h)+ Def_γ,h_L_2(γ)
≲_L_2(Γ_h)+ Def_Γ_h_L_2(Γ_h)+ h _H^1_h(Γ_h)
≲_L_2(Γ_h)+ Def_Γ_h_L_2(Γ_h).
§ FINITE ELEMENT METHOD AND CONVERGENCE ANALYSIS
For ,∈^1_T(γ) and q∈ L_2(γ),
we define the bilinear forms
a_γ(,)
= ∫_γ Def_γ,h: Def_γ,h +∫_γ·,
b_γ(,q)
= -∫_γ ( div_γ)q.
The variational formulation for the Stokes problem (<ref>)
seeks (,p) ∈_T^1(γ)×L_2(γ) satisfying
a_γ(,) + b_γ(,p) = ∫_γ f· ∀∈_T^1(γ),
b_γ(,q) = 0 ∀ q∈L_2(γ).
In order to ensure the well-posedness of (<ref>) and avoid technical complications associated with Killing fields,
we include the zeroth-order mass term in the momentum equations, as mentioned earlier in the introduction.
A method for incorporating Killing fields
into surface finite element methods for the Stokes problem is presented in <cit.>,
and the main ideas presented there are applicable to the proposed discretization below.
We define the analogous bilinear forms
with respect to the discrete surface Γ_h:
a_Γ_h(,)
= ∫_Γ_h Def_Γ_h: Def_Γ_h +∫_Γ_h·,
b_Γ_h(,q)
= -∫_Γ_h ( div_Γ_h)q,
where the differential operator Def_Γ_h is understood to act piecewise
with respect to _h. Then
the finite element method seeks (_h,p_h)∈_h× Q_h such that
a_Γ_h(_h,) + b_Γ_h(,p_h) = ∫_Γ_h f_h· ∀∈_h,
b_Γ_h(_h,q) = 0 ∀ q∈ Q_h,
where f_h is some approximation of f that is defined on Γ_h.
By the inf-sup condition (<ref>), the discrete Korn-like inequality (<ref>),
and standard theory of saddle-point problems,
there exists a unique solution (<ref>).
To derive error estimates, we restrict
(<ref>) to the discretely divergence–free subspace
_h := {∈_h: ∫_Γ_h ( div_Γ_h)q = 0 ∀ q∈ Q_h}.
Then _h∈_h is uniquely determined by the problem
a_Γ_h(_h,) =
∫_Γ_h f_h · ∀∈_h.
Now set _h = __h,
= _, and note that
∫_Γ_h f_h· = ∫_Γ_h f_h ·_^-1 = ∫_γ F_h ·,
where F_h =( M^⊺ f_h)^ℓ, and
M = [ I-⊗_h/·_h][ I-d H]^-1
is the matrix arising in the definition of _^-1.
Therefore (<ref>) is equivalent to the statement
a_γ(_h,) =
∫_γ F_h· +G_h(_h,) ∀∈_h,
where the bilinear form G_h: ^1_h(Γ_h)×^1_h(Γ_h)→ℝ
given by
G_h(,) =a_γ(,)-a_Γ_h(,)
encodes geometric error.
There holds
|G_h(,)|≲ h _H^1_h(γ)_H^1_h(γ)
for all tangential ,∈^1_h(Γ_h).
We write
∫_γ Def_γ,h: Def_γ,h - ∫_Γ_h Def_Γ_h: Def_Γ_h
= ∫_γ Def_γ,h: Def_γ,h - ∫_γ (μ_h^-1 Def_Γ_h)∘^-1 :( Def_Γ_h)∘^-1
= ∫_γ( Def_γ,h-(μ_h^-1 Def_Γ_h)∘^-1): Def_γ,h
- ∫_γ (μ_h^-1 Def_Γ_h)∘^-1 :(( Def_Γ_h)∘^-1- Def_γ,h).
Applying (<ref>), (<ref>), and Lemma <ref>, we obtain
|∫_γ Def_γ,h: Def_γ,h - ∫_Γ_h Def_Γ_h: Def_Γ_h|
≲ h _H^1_h(γ)_H^1_h(γ).
Next, we use the formula of the Piola transform
involving M to obtain
∫_γ· - ∫_Γ_h· = ∫_γ· -∫_γ (μ_h∘^-1) ^⊺ M^⊺ M
= ∫_γ^⊺[-(μ_h ∘^-1) M^⊺ M] .
A short
computation using (<ref>) yields |-(μ_h ∘^-1) M^⊺ M| ≲ |(-_h/·_h) ⊗ (-_h/·_h)| + h^2 ≲ h^2. Thus
|∫_γ· - ∫_Γ_h·|≲ h^2 _L_2(γ)_L_2(γ).
The result (<ref>) follows from (<ref>)–(<ref>).
The next lemma states the approximation properties
of the discretely divergence–free subspace _h.
The result essentially follows from the inf-sup condition (<ref>)
and the arguments
in <cit.>. For completeness
we provide the proof.
Let ∈^1(γ) satisfy _γ=0.
Then there holds
inf_∈_h-_H^1_h(γ)≲inf_∈_h-_H^1_h(γ).
Fix ∈_h. The inf-sup condition (<ref>) implies there exists ∈_h such that
b_Γ_h(,q) = b_γ(-,q^ℓ) for all q∈ Q_h, and _H^1_h(Γ_h)≲-_H^1_h(γ).
Then +∈_h and -(+)_H^1_h(γ)≤-_H^1_h(γ)+_H^1_h(γ)≲-_H^1_h(γ).
This implies the desired result.
Let (_h,p_h)∈_h× Q_h
satisfy the finite element method (<ref>).
Let _h = __h denote the Piola
transform of _h with respect to the closest point projection
, and let p^ℓ_h = p_h∘^-1. Then there holds
-_h_H^1_h(γ) ≲inf_(,q)∈_h× Q_h(-_H^1_h(γ)+ p-q^ℓ_L_2(γ))+ h^2 f_L_2(γ) + f- F_h_L_2(γ)
+h(p_L_2(γ)+ _H^1(γ)+ f_h_L_2(Γ_h)),
p-p_h^ℓ_L_2(γ) ≲inf_q∈ Q_hp-q^ℓ_L_2(γ)
+-_h_H^1_h(γ)+
h^2 f_L_2(γ) + f- F_h_L_2(γ)
+h(p_L_2(γ)+ _H^1(γ)+ f_h_L_2(Γ_h)).
Therefore, by Lemma <ref>, if (,p)∈^2(γ)× H^1(γ), there holds
-_h_H^1_h(γ)+p-p_h^ℓ_L_2(γ) ≲ h(_H^2(γ)+p_H^1(γ)+ f_h_L_2(Γ_h)) + f- F_h_L_2(γ).
For ∈_h, we denote by _c∈_T^1(γ) the conforming
relative of = _ satisfying (<ref>).
Using (<ref>), (<ref>) and (<ref>), we write
a_γ(-_h,)
= a_γ(,_c) + a_γ(,-_c) - ∫_γ F_h · - G_h(_h,)
= ∫_γ f·_c - ∫_γ F_h ·-b_γ(_c,p)+ a_γ(,-_c) - G_h(_h,)
= ∫_γ f· (_c-) -∫_γ ( F_h- f) ·
-b_γ(,p-q^ℓ) -b_γ(_c-,p)
+ a_γ(,-_c) - G_h(_h,) ∀ q∈ Q_h.
Applying continuity estimates of the bilinear forms,
(<ref>), and (<ref>) yield
a_γ(-_h,)
≲(h^2 f_L_2(γ) + f- F_h_L_2(γ) +p-q^ℓ_L_2(γ)
+hp_L_2(γ)+h _H^1(γ)+ h_h_H^1_h(γ))
_H^1_h(γ).
The estimate _h_H^1_h(γ)≲_h_H^1_h(Γ_h)≲ f_h_L_2(Γ_h), and standard arguments then yield
-_h_H^1_h(γ) ≲inf_(,q)∈_h× Q_h(-_H^1_h(γ)+ p-q^ℓ_L_2(γ))
+ h^2 f_L_2(γ) + f- F_h_L_2(γ)
+h(p_L_2(γ)+ _H^1(γ) + f_h_L_2(Γ_h)).
The estimate (<ref>) then
follows by applying Lemma <ref>.
For the pressure error,
we similarly apply (<ref>), (<ref>), (<ref>), and (<ref>)–(<ref>)
to obtain for all ∈_h and q∈ Q_h,
b_Γ_h(,p_h-q)
= ∫_Γ_h f_h · - a_Γ_h(_h,)-b_γ(,q^ℓ)
= ∫_γ F_h · - a_γ(_h,) -b_γ(,q^ℓ) +G_h(_h,)
= ∫_γ f· (-_c)+∫_γ ( F_h - f)· +a_γ( - _h,) +b_γ(,p-q^ℓ) +G_h(_h,)
-a_γ(,-_c)-b_γ(-_c,p)
≲(h^2 f_L_2(γ) + f- F_h_L_2(γ)
+-_h_H^1_h(γ)+ p-q^ℓ_L_2(γ)
+ h(_h_H^1_h(γ)+_H^1(γ)+p_L_2(γ))) _H^1_h(γ).
We conclude from the inf-sup condition (<ref>)
and the estimates _h_H^1_h(γ)≲ f_h_L_2(Γ_h),
_H^1_h(γ)≲_H^1_h(Γ_h)
that
p-p_h^ℓ_L_2(γ) ≤p-q^ℓ_L_2(γ)+p_h^ℓ-q^ℓ_L_2(γ)
≲p-q^ℓ_L_2(γ)+p_h-q_L_2(Γ_h)
≲ h^2 f_L_2(γ) + f- F_h_L_2(γ)
+-_h_H^1_h(γ)+ p-q^ℓ_L_2(γ)
+ h(_H^1(γ)+p_L_2(γ)+ f_h_L_2(γ)).
By taking the infimum over q∈ Q_h we obtain (<ref>).
In order to obtain a final O(h) energy error bound from (<ref>) we must choose f_h so that f- F_h_L_2(γ)≲ h. A short calculation shows that f_h=_^-1 f yields f- F_h_L_2(γ)≲ h^2; a variety of other choices also yield optimal convergence.
Analysis of L_2 errors in the velocity is the subject of ongoing work.
Numerical experiments presented below indicate that - _h_L_2(Γ_h)≲ h^2, as expected.
However, the conforming approximation error estimate given in Lemma <ref> seems insufficient
to obtain an O(h^2) convergence rate in L_2. In addition, the O(h) geometric error estimate in Lemma <ref> is sufficient to establish optimal O(h) convergence in the energy norm, but not an optimal O(h^2) L_2 convergence rate. Obtaining O(h^2) geometric error estimates sufficient to achieve optimal L_2 convergence is likely possible using techniques introduced in <cit.> but is significantly more technical than the energy case analyzed here.
§ NUMERICAL EXPERIMENTS
In this section we briefly comment on
the implementation of the finite element method (<ref>)
and then present numerical experiments demonstrating optimal convergence rates in the energy and L_2 norms for both MINI and lowest-order Taylor-Hood elements.
§.§ Implementation notes
Above we conceptually consider global degrees of freedom as pairs specified by tangent vectors. It is however necessary to map
and combine individual reference basis functions and degrees of freedom elementwise from the reference element K̂ (cf. Figure <ref>) in order to correctly assemble the system matrix.
We translate vertex degrees of freedom from the reference element to physical elements as follows:
* Given a vertex ẑ_j∈K̂, let ϕ̂_1,j and ϕ̂_2,j be the basis functions dual to the degrees
of freedom in Figure <ref>.
* For each vertex a, specify a master element K_a ∋ a.
* Choose arbitrary unit orthogonal vectors _1,K_a, _2,K_a lying in the plane containing K_a. The global degrees of freedom at a are then _h(a)|_K_a·_i,K_a, i=1,2.
* For each a∈_K, let 1 ≤ j_K ≤ 3 be the local numbering of a in K.
* For each triangle K∈_a, compute _i,j_K,K=_a^K _i,K_a, i=1,2.
* For each K∈_a, find α_i,ℓ,j_K, i,ℓ=1,2, with α_i,1,j_K_F_Kϕ̂_1,j_K(ẑ_j_K) + α_i,2,j_K_F_Kϕ̂_2,j_K(ẑ_j_K)=_i,j_K,K, with _F_K as in (<ref>).
With degrees of freedom as pictured in Figure <ref>, this expression reduces to the linear system
_F_K_i,j_K = _i,j_K,K with _i,j_K =
[ α_i,1,j_K; α_i,2,j_K ].
This system is solved by application of the Moore-Penrose pseudoinverse (_F_K^⊺_F_K)^-1_F_K^⊺; a small sketch of this step is given after these notes.
The coefficients α_i,ℓ, j_K then serve as a “Rosetta stone” (or DOF handler) to translate the individual
reference basis functions ϕ̂_i,j elementwise to global basis functions.
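As a rough illustration of this final least-squares step (our own NumPy sketch with hypothetical data; the actual implementation is part of the MATLAB/iFEM code used for the experiments):

import numpy as np

def vertex_dof_coefficients(DF_K, t_target):
    # Coefficients alpha such that the Piola-mapped combination (1/J) DF_K @ alpha
    # matches the prescribed tangent vector t_target, via the Moore-Penrose pseudoinverse.
    J = np.sqrt(np.linalg.det(DF_K.T @ DF_K))   # area scaling of the affine map F_K
    P = DF_K / J                                # 3x2 matrix of the Piola transform
    return np.linalg.solve(P.T @ P, P.T @ t_target)

DF_K = np.array([[1.0, 0.0],                    # hypothetical 3x2 Jacobian of F_K
                 [0.0, 1.0],
                 [0.1, 0.2]])
t = DF_K @ np.array([0.3, -0.7])                # a vector lying in the plane of K
print(vertex_dof_coefficients(DF_K, t))         # recovers J * [0.3, -0.7]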
§.§ Numerical results
We take γ to be the ellipsoid given by Ψ(x,y,z):=x^2/1.1^2 + y^2/1.2^2 + z^2/1.3^2=1. The test solution is
u = Π(-z^2, x, y)^⊺; cf. Figure <ref>. Note that Π = I-ν⊗ν with ν=∇Ψ/|∇Ψ| on γ, so u is componentwise a rational function and not a polynomial. The pressure is p=xy^3+z. The incompressibility condition div_γ u=0 does not hold, so the Stokes system must be solved with a nonzero divergence constraint. We employed a MATLAB code built on top of the iFEM library <cit.>.
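For reference, a short Python sketch of this manufactured velocity (the experiments themselves were run in MATLAB/iFEM; the snippet below only reproduces the analytic formula):

import numpy as np

def exact_velocity(x, a=1.1, b=1.2, c=1.3):
    # u = Pi (-z^2, x, y)^T with Pi = I - nu nu^T and nu = grad(Psi)/|grad(Psi)|
    grad_psi = 2.0 * x / np.array([a**2, b**2, c**2])
    nu = grad_psi / np.linalg.norm(grad_psi)
    Pi = np.eye(3) - np.outer(nu, nu)
    return Pi @ np.array([-x[2]**2, x[0], x[1]])

p_exact = lambda x: x[0] * x[1]**3 + x[2]          # pressure x*y^3 + z

print(exact_velocity(np.array([1.1, 0.0, 0.0])))   # evaluated at a point on the ellipsoid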
The left plot in Figure <ref> depicts the convergence history for the MINI element on a sequence of uniformly refined meshes. Optimal convergence is clearly observed in both the energy and L_2 norms, in particular O(h) for the energy norm -_h_H_h^1(Γ_h) + p^e -p_h_L_2(Γ_h) along with O(h^2) for the error -_h_L_2(Γ_h). Recall also that the pressure is approximated by affine functions, which can in theory approximate to order h^2 in L_2. Convergence is generally restricted instead to order h because the pressure is coupled to the velocity H^1 norm in the error analysis, but superconvergence of order h^3/2 may occur on sufficiently structured meshes <cit.>. We observe an initial superconvergent decrease of order h^3/2 or higher, but the expected asymptotic rate of order h is eventually seen; cf. <cit.> for discussion of similar phenomena in the Euclidean context.
We also approximated (, p) using a ℙ^2-ℙ^1 Taylor-Hood method. The discrete surface Γ_h was taken to be a quadratic rather than affine approximation to γ in order to obtain a geometric error commensurate with the expected order of convergence for this element. Vertex degrees of freedom were defined as above, additionally taking into account the fact that the surface normal on a piecewise quadratic surface is, in contrast to the case of an affine surface, not elementwise constant. Quadratic Taylor-Hood vector fields have degrees of freedom at edge midpoints in addition to at vertices, and these were defined in a manner completely analogous to the vertex degrees of freedom. The right plot in Figure <ref> exhibits the expected O(h^2) convergence in the energy norm and O(h^3) convergence for the L_2 error in the velocity. This confirms that our methodology has applicability beyond the MINI element; error analysis and extension to other stable Stokes element pairs employing nodal degrees of freedom will be the subject of future work.
§ PROOF OF LEMMA <REF>
We divide the proof of Lemma <ref> into three steps.
Step 1:
For a scalar function q defined on Γ_h, we have the identity <cit.>
_γ (q∘^-1) = ([ I-d H]^-1 [ I - _h⊗/_h·] _Γ_h q)∘^-1 on γ.
Consequently, for = (v_1,v_2,v_3)^⊺∈_T^1(K),
( (∘^-1))_i,: = (_γ (v_i∘^-1))^⊺
= (([ I-d H]^-1 [ I - _h⊗/_h·] _Γ_h v_i)∘^-1)^⊺
= ((_Γ_h v_i)^⊺ [ I - _h ⊗/_h·]^⊺
[ I-d H]^-⊺)∘^-1
= ((_Γ_h v_i)^⊺ [ I - ⊗_h/·_h]
[ I-d H]^-1)∘^-1
= ((_h)_i,: [ I - ⊗_h/·_h]
[ I-d H]^-1)∘^-1.
Thus, we have the identity
(∘^-1) = (_h [ I - ⊗_h/·_h]
[ I-d H]^-1)∘^-1.
Since is tangential, there holds = (_h ) = _h,
because _h is constant on K. Thus,
(∘^-1) = (_Γ_h [ I - ⊗_h/·_h]
[ I-d H]^-1)∘^-1.
Step 2: Write = _ = ( L)∘^-1
with L = μ_h^-1 [ - d H]. We then have by (<ref>),
_γ = = ( L∘^-1)
= L (∘^-1) + L∘^-1
= L(_Γ_h[ I - ⊗_h/·_h]
[ I-d H]^-1)∘^-1+ L∘^-1
= L(_Γ_h[ I - ⊗_h/·_h]
[ I-d H]^-1)∘^-1+ ( L∘^-1) ∘^-1,
where
( L)_i,j = ∑_k=1^3 L_i,k/ x_j v_k i,j=1,2,3.
We conclude, by adding and subtracting terms, that
_γ = (_Γ_h) ∘^-1 + [ L-_h] (_Γ_h)∘^-1[ I - ⊗_h/·_h] [ I - d H]^-1
+ (_Γ_h)∘^-1([ I - ⊗_h/·_h] [ I - d H]^-1-_h)+ ( L∘^-1) ∘^-1.
Using |-_h|≲ h,
|d|≲ h^2, and (<ref>), we have
| L-_h|≲ h and
|[ I- _h⊗/·_h][ I - d H]^-1 - _h|≲ h.
Therefore there holds
| Def_γ-( Def_Γ_h)∘^-1|≲ h |(_Γ_h)∘^-1|
+ | ( L∘^-1) ∘^-1|.
Step 3: In the final step of the proof, we bound
the last term in (<ref>).
Let ^(r) = L_:,r denote the rth column of L.
Then (<ref>) and a short calculation yields
( L∘^-1) ∘^-1 = ∑_r=1^3 ( (^(r)∘^-1) ) v_r∘^-1,
and so, by (<ref>),
( L∘^-1) ∘^-1 = ∑_r=1^3 (^(r)_h [ I-⊗_h/·_h][ I- d H]^-1v_r)∘^-1
= ( L_h [ I-⊗_h/·_h][ I- d H]^-1)∘^-1 .
Taking the derivative of L_i,k = μ^-1_h [_i,k - d H_i,k] yields
L_i,k/ x_j = μ_h^-1(- L_i,kμ_h / x_j + _i,k/ x_j- d/ x_j H_i,k- d H_i,k/ x_j)
= -μ_h^-1( L_i,kμ_h/ x_j +ν_i H_k,j +ν_k H_i,j+ν_j H_i,k+d H_i,k/ x_j ).
Thus by (<ref>) and (<ref>), there holds
L = - μ^-1_h[ ( L)⊗μ_h + ⊗ ( H)+( H)⊗ + (·) H + d H]
= -[ ( L)⊗μ_h + ⊗ ( H)+( H)⊗]+O(h||).
Write μ_h = ·_h (1-d κ_1)(1-d κ_2) = ·_h ( I - d H).
Because _h is constant on K and H=0, there holds (·_h)/ x_k = _h ·/ x_k = ( H_h)_k = ( H(_h-))_k = O(h).
Also by Jacobi's formula and |d| ≲ h^2,
/ x_k( I- d H)
= ( I-d H) tr(( I - d H)^-1/ x_k( I - d H))
= -ν_k tr( H) +O(h^2).
We then conclude using |1-·_h| ≲ h^2 that
μ_h= - (·_h) tr( H) + O(h) = - tr( H) + O(h).
Combining (<ref>)–(<ref>)
yields
L = [ tr( H) ( L)⊗ -⊗ ( H)-( H)⊗]+O(h||).
We apply (<ref>) to (<ref>)
along with the identity
= ^⊺ = 0
and |-_h|≲ h to obtain
| ( L∘^-1) ∘^-1|≲ h|∘^-1|.
Combining this with (<ref>) yields the desired estimate
| Def_γ-( Def_Γ_h)∘^-1|≲ h (|(_Γ_h)∘^-1|+ |∘^-1|).
§ ACKNOWLEDGEMENTS
The authors thank Orsan Kilicer for assistance with numerical computations.
|
http://arxiv.org/abs/2307.02430v1
|
20230705165206
|
Base Layer Efficiency in Scalable Human-Machine Coding
|
[
"Yalda Foroutan",
"Alon Harell",
"Anderson de Andrade",
"Ivan V. Bajić"
] |
eess.IV
|
[
"eess.IV",
"cs.CV"
] |
Base Layer Efficiency in Scalable Human-Machine Coding
=======================================================
A basic premise in scalable human-machine coding is that the base layer is intended for automated machine analysis and is therefore more compressible than the same content would be for human viewing. Use cases for such coding include video surveillance and traffic monitoring, where the majority of the content will never be seen by humans. Therefore, base layer efficiency is of paramount importance because the system would most frequently operate at the base-layer rate. In this paper, we analyze the coding efficiency of the base layer in a state-of-the-art scalable human-machine image codec, and show that it can be improved. In particular, we demonstrate that gains of 20-40% in BD-Rate compared to the currently best results on object detection and instance segmentation are possible.
Human-machine coding, scalable coding, learning-based compression
§ INTRODUCTION
Traditionally, image and video codecs have been developed to minimize the bitrate required to support human viewing of the visual content. Codecs such as JPEG, JPEG2000, and H.26X have become enabling technologies for many multimedia services. Increasingly, however, visual content is also “viewed” by machines for the purpose of automated analysis. Examples include traffic monitoring, visual surveillance, autonomous driving, and others. This has spurred interest in the development of codecs optimized for machine-based analysis <cit.>, which have demonstrated significant bit savings compared to coding for human viewing. Reasons for these bit savings have also been supported by rate-distortion theory <cit.>.
In applications such as traffic monitoring, analysis tasks – such as object detection, tracking, speed estimation – are supposed to run continuously, while human viewing may be necessary on occasion to assess a certain situation of interest, for example an accident. For these applications, codecs would need to support both machine analysis and human viewing. This has inspired human-machine scalable codecs <cit.>, where the base layer supports machine analysis tasks, while the enhancement layer supports human viewing.
Our focus in this paper is on a scalable human-machine image codec <cit.>, which presents state-of-the-art (SOTA) results for the cases of object detection + human viewing and object detection + instance segmentation + human viewing.
In applications that require such scalable coding, the efficiency of the base layer is crucial, because the machine analysis runs continuously while human viewing is only needed occasionally. Our main goal here is to improve the efficiency of the base layer. Our contributions are as follows:
* We show that, despite its good performance, the base layer in <cit.> is inefficient. We explain the source of this inefficiency, and propose a way to optimize the base-layer efficiency in human-machine scalable coding.
* We demonstrate coding gains of up to 20-40% in BD-Rate over <cit.> for the base tasks of object detection and instance segmentation, setting, to our knowledge, new SOTA rate-accuracy results on these tasks.
Section <ref> briefly reviews related work. The proposed method is described in Section <ref>. Experimental results are presented in Section <ref>, followed by conclusions in Section <ref>.
§ RELATED WORK
Although coding for machines can be performed using conventional handcrafted codecs <cit.>, learning-based codecs offer greater flexibility to exploit redundancies encountered in this kind of compression. Learning-based image codecs have come a long way recently <cit.>, rivaling the performance of the best handcrafted codecs, especially on perceptual metrics like Structural Similarity Index Metric (SSIM). The general processing chain of recent learned image codecs can be described as X g_a→Y→Yg_s→X, where X is the input image, g_a is a learnable analysis transform, Y is the latent-space representation of the input image, which is quantized to Y, g_s is a learnable synthesis transform, and X is the approximation to the input image. Upon deployment, and during our evaluation, the quantized latent representation Y is entropy-coded and decoded. Such codecs are trained end-to-end using a Lagrangian loss function:
ℒ = R + λ· D,
where R is the total rate estimate for Ŷ and any side information needed for its encoding and decoding,
and D is the distortion between X and X. The scalar λ
can be adjusted to achieve the desired balance between compression efficiency and reconstructed image quality.
Recently, the concept of latent-space scalability <cit.> was proposed for scalable human-machine coding. Here, the latent representation is partitioned into base and enhancement portions, Y = {Y_base, Y_enh}, such that Y_base feeds the machine analysis task T, while both Y_base and Y_enh are used for input reconstruction, as shown in Fig. <ref>. Such a system can be trained end-to-end using a generic loss function:
ℒ = R_base + R_enh + λ_base· D_base + λ_enh· D_enh,
where R_i are the rate estimates for layer i∈{base,enh}, including any side information, and D_i are distortions related to the specific layer. In <cit.>, two systems were demonstrated based on this concept: a 2-layer system whose base layer supports object detection using YOLOv3 <cit.>, and a 3-layer system whose base layer supports object detection using Faster R-CNN <cit.>, together with the second base (or first enhancement) layer that supports instance segmentation using Mask R-CNN <cit.>. In each case, state-of-the-art rate-accuracy results were demonstrated on the machine task, while remaining comparable to the state-of-the-art image codecs for human viewing as the enhancement task. However, despite its good performance, the base layer of <cit.> turns out to be inefficient. In the next section, we explain the source of this inefficiency and present a way to improve it.
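As a rough sketch of the idea (not the actual codec of <cit.>; the channel counts and names below are placeholders), latent-space scalability amounts to a channel-wise split of the quantized latent together with the combined Lagrangian above:

import numpy as np

def split_latent(y_hat, c_base):
    # Split a quantized latent of shape (N, C, H, W) into base (first c_base channels)
    # and enhancement (remaining channels) portions.
    return y_hat[:, :c_base], y_hat[:, c_base:]

def scalable_loss(r_base, r_enh, d_base, d_enh, lam_base, lam_enh):
    # Two-layer rate-distortion Lagrangian from the loss function above
    return r_base + r_enh + lam_base * d_base + lam_enh * d_enh

y_hat = np.zeros((1, 192, 16, 16))             # placeholder latent with 192 channels
y_base, y_enh = split_latent(y_hat, c_base=128)
print(y_base.shape, y_enh.shape)               # (1, 128, 16, 16) (1, 64, 16, 16)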
§ PROPOSED METHODS
The idea behind using the loss function (<ref>) to optimize the codec in <cit.> was that the base task-relevant information will be steered towards Y_base because distortion D_base depends only on Y_base and not Y_enh. While this is true, it is only part of the story. To appreciate what happens during training, consider the following simplified[This is a simplified scenario because, in reality, features are being created and placed into Y_base or Y_enh simultaneously, whereas here, for simplicity, we assume that features are created first, then their placement is decided.] scenario: imagine that a feature f has already been created and the goal of training is to decide whether to place it in Y_base or Y_enh based on the loss function (<ref>). Consider two extreme cases:
Case 1: feature f carries base task-relevant information; note that such a feature also carries information relevant to the enhancement task of input reconstruction. If such a feature is placed into Y_base, then R_base will increase and both D_base and D_enh will decrease, as the feature will be used for both tasks. On the other hand, if such a feature is placed into Y_enh, then R_enh will increase, but only D_enh will decrease, because the feature will not be used for the base task. Hence, base task-relevant features are encouraged to end up in Y_base.
Case 2: feature f carries no base task-relevant information, only enhancement-relevant information. If such a feature is placed into Y_base, then R_base will increase and D_enh will decrease, because both Y_base and Y_enh are used for the enhancement task. Similarly, if such a feature is placed into Y_enh, then R_enh will increase and D_enh will decrease, because both Y_base and Y_enh are used for the enhancement task. Therefore, based on (<ref>), there is no preference for placing f in either Y_base or Y_enh, and such a feature may end up anywhere in the latent space.
Based on the analysis above, we conclude that features that are not relevant to the base task (case 2) may still end up in Y_base. In other words, the base layer of the codecs presented in <cit.> is likely inefficient, containing more information than strictly necessary for the base task. In the next section, we describe how base layer efficiency can be improved.
§.§ Base layer
As our framework is based on <cit.>, we consider three machine analysis tasks as our base layer tasks: object detection using YOLOv3, object detection using Faster R-CNN, and instance segmentation using Mask R-CNN. Our results demonstrate that we can improve the rate-accuracy efficiency on these tasks by 20-40% in BD-Rate.
As noted in the previous section, the source of inefficiency in the base layer is the task-irrelevant information, which can end up in Y_base when base and enhancement layers are trained in parallel, as in <cit.>. For this reason, we employ sequential training, where the base layer is trained first, then frozen, followed by training of the enhancement layer. Conceptually, the base layer processing chain can be described as X g_b→Y_base→Y_baseg_t→ T, where g_b is the base analysis transform, and g_t accomplishes the task T from Y_base. The key to making an efficient base layer is to realize an Information Bottleneck (IB) <cit.> for task T:
min_p(
y_base | x) I(X; Y_base) - β· I(Y_base ; T),
where I(·;·) is the mutual information <cit.>, p(
y_base | x) is the mapping from the input image to the base-layer latent representation, and β>0 is the IB Lagrange multiplier <cit.>.
In our case,
p(
y_base | x)
is g_b followed by quantization. Hence, when input X is given, Y_base is fully determined, so we have I(X; Y_base) = H(Y_base) - H(Y_base | X) = H(Y_base), because H(Y_base | X) = 0. Here, H(·) and H(· | ·) are the entropy and conditional entropy <cit.>, respectively. Moreover, since decreasing -β· I(Y_base ; T) (i.e., increasing I(Y_base ; T)) is supposed to improve the task accuracy, we take λ_base· D_base as the proxy for -β· I(Y_base ; T). Therefore, in our case, the IB (<ref>) for task T becomes:
min_g_b,g_t H( Y_base) + λ_base· D_base,
showing that it can be solved using a loss function analogous to (<ref>). We employ <cit.> to realize g_b and perform entropy estimation, and the Latent Space Transform (LST) from <cit.> to map Y_base to the task network features, where we use MSE as our distortion target during training. The number of channels in Y_base is set depending on the task, as described in the experiments.
§.§ Enhancement layer
After training the base layer, we freeze it, and construct a “preview transform” that is meant to recover an approximation X_pre
of the input image X from base features Y_base, as shown in Fig. <ref>. This transform consists of a 1 × 1 convolutional layer to adjust the number of channels, followed by a synthesis transform g_s from <cit.> to approximate the input image X. Next, we subtract the preview image X_pre from the input X, resulting in a residual image X_res. This residual image is then encoded by <cit.>, which was fine-tuned for our setting using an MSE loss. Finally, we obtain the reconstructed input X by adding X_pre to the reconstructed residual X_res. The dashed line in Fig. <ref> indicates that during
training of the enhancement layer, the base layer is frozen.
§ EXPERIMENTS
§.§ Base layer
As explained above, we begin by training and evaluating the base layer without the enhancement part. We do this using several tasks and models following <cit.>: object detection using YOLOv3 <cit.> and Faster R-CNN <cit.>, and instance segmentation using Mask R-CNN <cit.>. All base-layer experiments share the following two-stage training approach. Input images are random patches of size 256 × 256 from the JPEG-AI <cit.> and CLIC <cit.> datasets in stage one and VIMEO-90K <cit.> in stage two, with a mini-batch size of 16. The first stage uses the Adam optimizer with a fixed learning rate of 10^-4 for 500 epochs, followed by another 400 epochs with a polynomial decay of the learning rate every 10 epochs. The Lagrange multipliers as well as the number of base channels L_base for each task are shown in Table <ref>.
Task performance is then evaluated following the same procedure as in the 3-layer network described in <cit.>, using the
COCO2017/COCO2014 validation set <cit.>, and Detectron2 <cit.>. We maintain the same formulation for the mean average precision (mAP) as in <cit.> and compare the performance of our proposed base layer with <cit.> (which we refer to as Choi2022) alongside other benchmarks in Fig. <ref>-<ref>.
Observing the three figures, it is evident that our base layer outperforms the previous SOTA by a significant margin. For example, Fig. <ref> shows that, using Faster R-CNN, <cit.> experiences a 1.3% drop in mAP at 0.1 bits per pixel (BPP), while our base layer only drops by 0.56%. Similarly, when using YOLOv3
in Fig. <ref>, <cit.> suffers a degradation of 1.15% even at a relatively high bit-rate of over 0.7 BPP, while our approach remains at less than 1% reduction in mAP until approximately 0.5 BPP. Lastly, we see that such improvement persist even in the more complex task of instance segmentation, seen in Fig. <ref>.
To summarize the difference in performance between our proposed method and previous SOTA, we employ the Bjøntegaard Delta (BD) metric <cit.> for rate difference at equivalent
accuracy (BD-Rate). We see significant savings in all three experiments, which are summarized in Table <ref>.
§.§ Enhancement layer
Once the base-layer training is complete, we proceed to
train the enhancement layer. Because of the decoupling between the training of our base and enhancement layers, we can theoretically match various levels of image reconstruction to each of our base-layer models. However, in order to create a fair comparison with previous models, where base and enhancement performance are inherently linked, we match the relative qualities of our two layers.
In training the enhancement layer, we freeze our
base layer and initialize the
residual using a pre-trained model <cit.> (Cheng2020 in figures) of varying quality index, provided by CompressAI <cit.>. For the first three base qualities, we choose quality index 1, while for higher base qualities, we assign quality 3 from <cit.>. It is worth noting that both quality levels contain 128 channels.
The enhancement layer is trained for 300 epochs using random patches of 256 × 256 from the JPEG-AI <cit.> and CLIC <cit.> datasets, with a mini-batch size of 16, the Adam optimization algorithm with a constant learning rate of 10^-4 is used. The values of the λ_enh are similar to the λ_enh used in <cit.>. The performance of the enhancement layer is evaluated on Kodak <cit.> dataset, which consists of 24 uncompressed images. The results of the input reconstruction are shown in terms of PSNR vs. BPP in Fig. <ref>. Table <ref> summarizes the BD-Rate results using Cheng2020 as the benchmark, including Choi2022
and our proposed approach.
As could perhaps have been expected, the improvement of our base layer came at the cost of some degradation to the enhancement-layer performance. We do see a significant increase in bitrate when compared with
Cheng2020, as well as the
scalable approach of Choi2022 <cit.>. Fortunately, however, when comparing our enhancement layer alone, we still see rate savings compared to re-transmitting the full image, with BD-Rate improvement (compared to <cit.>) ranging from 7-17%. In a practical setting, the choice between two approaches will depend on the relative frequency of the use of human vision compared to machine vision, denoted f_T. Using this, the relative rate of our approach and a human-vision encoder is given by:
(1-f_T) ·R_base/R + f_T·(R_base+R_enh)/R,
where R is the bitrate of the single-layer encoder. Whenever the relative rate is smaller than 1, our approach will be preferable. Using BD-Rate estimates to approximate the ratios in Eq. <ref>, we see that our model achieves overall rate savings as long as human viewing is used less than 59%, 34%, and 17% of the time, for YOLOv3, Faster R-CNN, and Mask R-CNN, respectively.
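The break-even point follows directly from Eq. <ref>; the short helper below is illustrative only, with the rates taken as the BD-Rate-based estimates discussed above, and solves the relative rate for the largest admissible f_T.

def relative_rate(r_base, r_enh, r_single, f_t):
    # Eq. (1): overall rate relative to a single-layer human-vision codec
    return (1.0 - f_t) * r_base / r_single + f_t * (r_base + r_enh) / r_single

def break_even_fraction(r_base, r_enh, r_single):
    # largest f_T with relative rate below 1: (1-f)*Rb/R + f*(Rb+Re)/R = 1
    return (r_single - r_base) / r_enh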
§ CONCLUSIONS
We have explained and demonstrated an inefficiency in the base layer in the previous state-of-the-art human-machine scalable image codec. To mitigate this problem, we proposed an improved base-layer training procedure achieving significant rate savings on multiple tasks. We then added a residual-based enhancement layer for input reconstruction. As expected, we saw some degradation in rate-distortion for the enhancement layer, which we believe can be reduced through further optimization of the enhancement layer. Nevertheless, in scenarios where machine analysis is needed more frequently than human viewing, our proposed method outperforms relevant single-layer learned image codecs.
IEEEbib-abbrev
|
http://arxiv.org/abs/2307.02054v2
|
20230705063852
|
Emoji Prediction using Transformer Models
|
[
"Muhammad Osama Nusrat",
"Zeeshan Habib",
"Mehreen Alam",
"Saad Ahmed Jamal"
] |
cs.CL
|
[
"cs.CL",
"cs.AI"
] |
Emoji Prediction using Transformer Models
1st Muhammad Osama Nusrat
Dept of Computing
Fast Nuces
Islamabad, Pakistan
[email protected]
2nd Zeeshan Habib
Dept of Computing
Fast Nuces
Islamabad, Pakistan
[email protected]
3rd Mehreen Alam
Dept of Computing
Fast Nuces
Islamabad, Pakistan
[email protected]
4th Saad Ahmed Jamal
Department of GeoInformatics ZGIS
University of Salzburg
Salzburg, Austria
[email protected]
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In recent years, the use of emojis in social media has increased dramatically, making them an important element in understanding online communication. However, predicting the meaning of emojis in a given text is a challenging task due to their ambiguous nature. In this study, we propose a transformer-based approach for emoji prediction using BERT, a widely used pre-trained language model. We fine-tuned BERT on a large corpus of tweets containing both text and emojis to predict the most appropriate emoji for a given text. Our experimental results demonstrate that our approach outperforms several state-of-the-art models in predicting emojis, with an accuracy of over 75 percent. This work has potential applications in natural language processing, sentiment analysis, and social media marketing.
§ INTRODUCTION
In the past few years, social media has emerged as a prolific source of data for numerous research fields, with natural language processing (NLP) being one of them. As a result of the widespread use of mobile devices and the internet, social media platforms such as Twitter have become a popular means for people to express their emotions, opinions, and sentiments on various topics. In this context, emojis have become a popular way of conveying emotions and sentiments in text-based communication. Emojis are small pictograms that represent emotions, objects, or concepts and are widely used on social media platforms.
Emoji prediction is a task that involves predicting the most appropriate emoji to use in a given textual conversation based on the context of the conversation. This task is essential in improving the effectiveness of communication on social media platforms, especially in situations where the text is ambiguous, and the use of emojis can add clarity to the message.
To address the challenge of emoji prediction, recent studies have explored the use of transformer models, particularly the Bidirectional Encoder Representations from Transformers (BERT) model. BERT is a powerful pre-trained transformer model that has shown state-of-the-art performance in a wide range of natural language processing tasks.
The use of BERT in emoji prediction involves fine-tuning the model on a large dataset of tweets or other social media posts to learn the contextual relationships between the text and the appropriate emojis. The fine-tuned model can then be used to predict the most appropriate emoji to use in a given context.
Despite the promising results reported by recent studies on emoji prediction using transformer models, there are still some challenges that need to be addressed. One of the challenges is the lack of large, diverse datasets for training and evaluating the models. Another challenge is the diversity of emojis used in different languages and cultures, which requires the development of language-specific and culture-specific models.
In this context, this study explores the use of BERT for emoji prediction in a dataset of tweets. We fine-tune the BERT model on a large dataset of tweets and evaluate its performance on a test set of tweets. We also examine the impact of different factors, such as the size of the training data and the number of emojis, on the performance of the model. The findings of this study can provide insights into the effectiveness of transformer models for emoji prediction and can contribute to the development of more accurate and efficient emoji prediction models for social media platforms.
§ LITERATURE REVIEW
In [1], the authors present a groundbreaking approach to pre-training language models that has since become one of the most influential contributions to NLP in recent years. Their proposed approach, BERT (Bidirectional Encoder Representations from Transformers), is a deep learning architecture that uses a bidirectional transformer network to pre-train a language model on a large amount of unlabelled data. The model is then fine-tuned on downstream NLP tasks such as text classification or question answering. They describe their approach to pretraining BERT, including the use of a novel masked language modeling objective that randomly masks tokens in the input sequence and has the model predict the masked tokens based on the surrounding context. This objective allows BERT to capture both local and global context in the input sequence, resulting in a highly contextualized representation of language. The authors also describe their use of a next-sentence prediction objective, which helps BERT capture the relationship between two sentences in a document.
In [2], the authors argue that language models, which are traditionally trained to predict the next word in a sentence or the likelihood of a sentence given a context, can be viewed as multitask learners that can perform a variety of tasks without explicit supervision. They propose a method for training language models on a diverse set of tasks, including sentiment analysis, question answering, and language translation, without any labelled data. The model is trained on a new dataset of millions of webpages called WebText. The approach [2], called Unsupervised Multi-task Learning (UMT), exploits the vast amounts of unannotated text available on the internet to train a single neural network on multiple tasks simultaneously. By sharing parameters across tasks, the model is able to learn from the common underlying structure of language and perform well on a range of tasks. The authors [2] also introduce a new benchmark, called the General Language Understanding Evaluation (GLUE), which measures the performance of language models on a suite of diverse NLP tasks. Using UMT, they achieve state-of-the-art results on the GLUE benchmark, outperforming previous approaches that relied on supervised learning.
In [3] authors proposed a new approach to enhance the zero-shot learning ability of language models by combining the pre-training and fine-tuning paradigm with prompting. Their method involves fine-tuning a pre-trained model with 137 billion parameters on a range of datasets described through instructions. By evaluating the model's performance on previously unseen tasks, the authors demonstrated that their instruction-tuned model, FLAN (Finetuned Language Net), outperformed its untuned counterpart by a significant margin in a zero-shot setting. Additionally, FLAN surpassed GPT-3 in zero-shot performance on 20 out of 25 datasets evaluated, indicating its superior performance.
Felbo et al. (2017) [4] proposed a novel sentiment, emotion, and sarcasm detection approach using millions of emoji occurrences as a weakly supervised learning signal. The authors introduce the DeepMoji model, a deep learning architecture based on long short-term memory (LSTM) networks. The model is pre-trained on a large dataset containing 1.2 billion tweets with emoji, allowing it to learn semantic representations of text from these noisy labels. This pre-training approach helps learn effective representations for downstream tasks such as sentiment analysis, emotion recognition, and sarcasm detection. The DeepMoji model demonstrates state-of-the-art performance on several benchmarks, outperforming existing methods. This work highlights the potential of using emojis as a weak supervision signal to learn domain-agnostic representations that can be effectively used for various natural language processing tasks.
Ma et al. (2020) [5] build upon the work of Felbo et al. (2017) by exploring the problem of emoji prediction more comprehensively. The authors introduce several extensions to the DeepMoji model, such as incorporating attention mechanisms, leveraging tweet metadata, and utilizing pre-trained language models like BERT. The authors also present a new benchmark dataset called "EmoBank," which is collected from Twitter and contains 4.7 million tweets with emoji. EmoBank is designed to evaluate models on various emoji prediction tasks, such as predicting the presence, absence, and type of emojis in a given text. The extended model shows improved performance compared to the original DeepMoji model and other baselines, demonstrating the effectiveness of the proposed extensions.
Vaswani et al. (2017) [6] propose a novel neural network architecture called the Transformer, which relies solely on self-attention mechanisms, discarding the need for traditional recurrent or convolutional layers. The authors argue that attention mechanisms can model long-range dependencies and parallelize computation more effectively than LSTMs or CNNs, thus addressing some of the limitations of these traditional architectures. The Transformer model achieves state-of-the-art results on various natural language processing tasks, including machine translation and language modeling. This work has significantly impacted the field, inspiring a range of follow-up research and developing powerful pre-trained language models such as BERT and GPT-2.
Tom Brown et al. [7] introduced a new language model called GPT-3, an advance over GPT-2 that addressed several problems present in the previous language model. GPT-2 required a lot of fine-tuning to perform a specific task. GPT-3 alleviates this problem, as it does not require much fine-tuning for a particular task: in language translation, for example, GPT-3 can translate a sentence from one language to another with just a few samples, whereas GPT-2 needed relatively more samples to perform well. Similarly, GPT-3 outperforms GPT-2 in question answering, filling in missing words in a sentence, using new words that are not present in the vocabulary, doing calculations, and many other tasks. We can confidently say that GPT-3 is a better few-shot learner than GPT-2, where few-shot refers to the ability to learn from a few examples. The reason for GPT-3's success is that it has more parameters than GPT-2: GPT-3 contains a massive 175 billion parameters, compared to only 1.5 billion for GPT-2. GPT-3 has brought ease to many NLP domains where labeled data is scarce and where it was previously nearly impossible to get good results from so few labeled examples. For example, we can now build a chatbot for a travel agency with very few examples using GPT-3, whereas previously huge amounts of labeled data were required for the same task. The authors also highlighted some limitations of the GPT-3 model, including that it does not properly understand the context of a document: if asked to write a summary of a scientific paper, it may fail to capture all the important points, and if asked to generate a response to a complaint, it may output a random response that does not address the user's problem. GPT-3 also has the limitation that it can generate biased outputs, because it is trained on data that is male-biased; for example, it may write negatively about women, such as claiming that women are not suitable for leadership positions or cannot drive safely, which is not acceptable.
Thomas Wolf et al. [9] discussed how transformers have revolutionized natural language processing tasks and have enabled machines to generate human-like content. The transformer architecture was introduced in 2017 in the famous paper "Attention Is All You Need". It solved the issues previously encountered with RNNs, such as bottleneck problems and long-range dependency problems: RNNs cannot capture information when sentences are long, due to vanishing gradients. The transformer solved this problem because it is based on a self-attention mechanism that focuses on the essential parts of the sentence. Moreover, the transformer uses multi-head attention, meaning it consists of multiple attention mechanisms that focus on different parts of the input sentence in parallel. Each head attends to a different aspect of the input; one head can focus on the sentence's subject, another on the object, and so on. In multi-head attention, instead of a single context vector, multiple context vectors containing the input sentence's information are generated, which results in better performance than a single attention mechanism. Transformers are also faster than recurrent neural networks, as they can handle parallel processing. Because transformers have been trained on large datasets, we can use them for many tasks by fine-tuning them on small datasets. The attention mechanism also makes transformers well suited to summarizing articles and research papers, because it focuses on the essential parts of the document and then gathers those key points to generate a summary. The authors then introduced the open-source Transformers library, which students and scientists can use to carry out NLP tasks more efficiently; the primary purpose of building this library was to save people the time and effort of writing code from scratch. The library can be used for multiple NLP tasks, such as sentiment analysis, text classification, question answering, and language generation. It contains many pre-trained models, such as BERT, GPT-2, RoBERTa, DistilBERT, and T5. These models have been trained on massive amounts of text data, such as Wikipedia, and the pre-trained models can then be fine-tuned to a specific task, requiring less training time and less data than training from scratch.
§ METHODOLOGY
§.§ Dataset
The dataset used was small, containing 132 rows for training and 56 rows in the test CSV file. There are 5 emoji classes in the dataset.
§.§ Pipeline
* Dataset Collection
* Preprocessing
* Tokenization
* Finetuning
* Evaluation
* Inference
§.§ Approach
§.§.§ BERT
BERT is a highly advanced pre-trained language model developed by Google that uses a bidirectional approach and deep neural network to better understand natural language by analyzing the entire input sequence in both directions during training, leading to more accurate language processing and understanding. It has been widely used in various natural language processing tasks, improving the accuracy and effectiveness of NLP applications and inspiring the development of other advanced pre-trained language models.
BERT has several advantages, making it a popular choice for natural language processing tasks. Firstly, it uses a bidirectional approach during training, which allows it to better understand the context of words in a sentence. This can lead to more accurate language processing and understanding than other language models that only process text in one direction.
Secondly, BERT has been pre-trained on a large corpus of text data, allowing it to capture many language patterns and nuances. This makes it highly effective for various NLP tasks, such as question answering, sentiment analysis, and language translation.
Finally, BERT has inspired the development of other advanced pre-trained language models, such as GPT-3 and RoBERTa. These models build on BERT's success and improve its architecture, resulting in even better performance in natural language processing tasks.
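A minimal sketch of the fine-tuning pipeline described above (tokenization, fine-tuning, evaluation) is given below, assuming the Hugging Face transformers/datasets stack together with scikit-learn for the metrics. The tiny in-memory dataset, the column names ("text", "label"), and the output directory are illustrative placeholders, not the actual 132/56-row CSV files used in our experiments.

from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

NUM_EMOJI_CLASSES = 5
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_EMOJI_CLASSES)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0)
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision, "recall": recall, "f1": f1}

# placeholder data; in practice the train/test CSV files are loaded here
train_ds = Dataset.from_dict({"text": ["so happy today", "that was heartbreaking"],
                              "label": [0, 1]}).map(tokenize, batched=True)
test_ds = Dataset.from_dict({"text": ["congratulations on the win"],
                             "label": [0]}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="emoji-bert", num_train_epochs=10,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=test_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())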
§.§ Evaluation Metric
We have used precision, accuracy, recall, and F1 score as our evaluation metric.
§ RESULTS & DISCUSSION
Fig 2 shows that the model correctly predicts the emojis corresponding to the tweets in the dataset.
Similarly, we can see that the precision, recall, F1 score, and accuracy are 0.7599, 0.75, 0.7498, and 0.75, respectively.
The testing accuracy was 0.9722, as shown in Fig 3.
The model was trained for 10 epochs on the dataset.
The training and validation accuracy increased with the number of epochs, as shown in Fig 4.
Moreover, the training and validation loss decreases as the number of epochs increases, as illustrated in Fig 5.
§ CONCLUSION
In conclusion, our study demonstrates the effectiveness of using BERT for emoji prediction on a dataset of tweets. We found that fine-tuning a pre-trained BERT model on a dataset of labeled tweets can achieve state-of-the-art results on this task. Our experiments show that a BERT-based approach outperforms traditional machine learning models and other deep learning models. Additionally, our study highlights the importance of pre-processing techniques such as tokenization and stemming for improving model performance. Furthermore, we found that using tweet-specific features such as hashtags and user mentions as input features can further improve model performance. Our results suggest that BERT can be a valuable tool for predicting emojis in tweets, which can be useful for a variety of applications such as sentiment analysis and social media monitoring.
00
b1 Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
b2 Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI blog, 1(8), 9.
b3 Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., ... & Le, Q. V. (2021). Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.
b4 Felbo, B., Mislove, A., Søgaard, A., Rahwan, I., & Lehmann, S. (2017). Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524.
b5 Ma, W., Liu, R., Wang, L., & Vosoughi, S. (2020). Emoji prediction: Extensions and benchmarking. arXiv preprint arXiv:2007.07389.
b6 Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
b7 Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901.
b8 https://www.kaggle.com/datasets/hariharasudhanas/twitter-emoji-prediction
b9 Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., ... & Rush, A. M. (2019). Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
§ DATASET AND CODE AVAILABILITY
The dataset was used from the following kaggle link
https://www.kaggle.com/datasets/alvinrindra/emojify
The developed code is made available at Github: https://github.com/mnusrat786/emoji-prediction-with-transformer
|
http://arxiv.org/abs/2307.03148v2
|
20230706172352
|
On the Computation of Accessibility Provided by Shared Mobility
|
[
"Severin Diepolder",
"Andrea Araldo",
"Tarek Chouaki",
"Santa Maiti",
"Sebastian Hörl",
"Constantinos Antoniou"
] |
cs.CY
|
[
"cs.CY",
"cs.NA",
"math.NA",
"J.2"
] |
§ SHORT SUMMARY
Shared Mobility Services (SMS), e.g., Demand-Responsive Transit (DRT) or ride-sharing, can improve mobility in low-density areas, which are often poorly served by conventional Public Transport (PT). Such improvement is mostly quantified via basic performance indicators, like wait or travel time. However, accessibility indicators, measuring the ease of reaching surrounding opportunities (e.g., jobs, schools, shops, ...), would be more comprehensive. To date, no method exists to quantify the accessibility of SMS based on empirical measurements. Indeed, accessibility is generally computed on graph representations of PT networks, but SMS are dynamic and do not follow a predefined network. We propose a spatial-temporal statistical method that takes as input observed trips of an SMS acting as a feeder for PT and summarizes such trips in a graph. On such a graph, we compute classic accessibility indicators. We apply our method to a MATSim simulation study concerning DRT in Paris-Saclay.
Keywords: Accessibility; Public Transport; Shared Mobility;
§ INTRODUCTION
Location-based accessibility measures the ease of reaching surrounding opportunities via transport (<cit.>). Accessibility provided by conventional PT is generally poor in low-demand areas, e.g., suburbs (<cit.>), because a high frequency and high coverage service in such areas would imply an unaffordable cost per passenger. Poor PT accessibility in the suburbs makes them car-dependent, which prevents urban regions from being sustainable (<cit.>).
SMS, e.g., Demand-Responsive Transit (DRT), ride-sharing, carpooling, car-sharing, are potentially more efficient than conventional PT in the suburbs (<cit.>). However, their current deployment is commonly led by private companies targeting profit maximization. This may turn SMS into additional source of congestion and pollution (<cit.>).
We believe that SMS deployment should be overseen by transport authorities under the logic of accessibility improvement. To this aim, a method is needed that is able to compute the impact of SMS on accessibility, based on empirically observed trips. To the best of our knowledge, this paper is the first to propose such a method. <cit.> study how DRT improves connection to conventional PT stops, without considering the impact on accessing opportunities.
<cit.> and <cit.> calculate accessibility from Autonomous Mobility on Demand, based on utilities perceived by agents within simulation. By contrast, our method computes accessibility solely based on observed SMS trip times, either from the real world or simulation. A first attempt of integrating SMS into the graph-based description of PT is done by <cit.>. However, they use analytic models to model SMS performance and thus fail to give real insights adapted to the areas under study. Our effort consists instead of estimating accessibility from empirical observations via spatial-temporal statistics.
General Transit Feed System (GTFS) is the standard data format for PT schedules. Recently, the GTFS-Flex extension allows also describing
SMS (<cit.>). Although our estimates could thus be fed into GTFS-Flex data, for the sake of simplicity, we use plain GTFS instead.
Our contribution consists in developing a spatial-temporal statistical pipeline to transform SMS trip observations into a graph representation, on top of which well-established accessibility computations can be performed.
The observations that can be taken as input might come from real measurements or from simulation. This paper's
observations come from a MATSim simulation study of DRT deployment in Paris Saclay, from <cit.>.
By providing a first method to compute the accessibility of SMS on empirical observations, this work can contribute to a better understanding of the potential of SMS and guide their future deployment.
§ METHODOLOGY
§.§ Accessibility
As in (<cit.>), the study area is tessellated in hexagons with a grid step of 1km, whose centers 𝐮∈ℝ^2 are called centroids and denoted with set C⊆ℝ^2. Each hexagon contains a certain quantity of opportunities, e.g., jobs, places at school, people. With O_𝐮 we denote the opportunities in the hexagon around 𝐮 and with T(𝐮,𝐮', t) the time it takes to arrive in 𝐮', when departing from 𝐮 at time t.
As in <cit.>, accessibility is the amount of opportunities that one can reach departing from 𝐮 at time of day t within time τ:
acc(𝐮,t) ≡∑_𝐮'∈C(𝐮,t) O_𝐮'.
C(𝐮,t)={𝐮'∈C | T(𝐮,𝐮', t)≤τ} is the set of centroids reachable within τ. By improving PT, such a set can be enlarged so as to allow more opportunities to be reached. In this work, the opportunities are the number of people (residents) that can be reached.
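As a concrete illustration (not part of the pipeline described later), the cumulative-opportunity measure above can be computed as follows; the travel-time function and the per-hexagon opportunity counts are placeholders for the quantities defined in the text.

def accessibility(u, t, centroids, opportunities, travel_time, tau):
    """Opportunities reachable from centroid u, departing at time t, within budget tau."""
    return sum(opportunities[v] for v in centroids if travel_time(u, v, t) <= tau)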
T(𝐮,𝐮', t) is always computed on a graph representation of the transport network. However, SMS are not based on any network. Our effort is thus to build a graph representation of SMS, despite the absence of a network model.
§.§ Time-Expanded Graph Model of conventional PT
Inspired by <cit.> and <cit.>, we model PT as a time-expanded graph G, compatible with the GTFS format. The nodes of G are stoptimes. Stoptime (𝐬,t) indicates the arrival of a PT vehicle at stop 𝐬∈ℝ^2 (modeled as a point in the plane) at time t∈ℝ. Different trips on a certain line are represented as sequences of different stoptimes, as in Figure <ref>; potential line changes are represented as well, within a 15-minute walk (assuming a 5 km/h walking speed), whenever it is possible to arrive at the new line on time. When a user departs at time t_0 from location 𝐱 for location 𝐱', they can simply walk (but no more than the maximum walk time). Or they can walk to 𝐬, board a PT vehicle at t (corresponding to a stoptime (𝐬,t)), use PT up to a stoptime (𝐬',t') and from there walk to 𝐱'. The arrival time
at 𝐱' will be t' plus the time for walking.
Users are assumed to always choose the path with the earliest arrival time. Path computation is performed within CityChrone (<cit.>). No capacity constraints are considered.
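The earliest-arrival computation on the time-expanded graph can be sketched as a simple connection scan. The snippet below is a simplified illustration only (it ignores walking transfers between distinct intermediate stops and all CityChrone-specific details); connections are assumed to be (departure stop, departure time, arrival stop, arrival time) tuples extracted from consecutive stoptimes.

import math

WALK_SPEED = 5.0 * 1000.0 / 3600.0   # 5 km/h in m/s
MAX_WALK = 15 * 60                   # maximum walk time: 15 minutes

def walk_time(a, b):
    return math.dist(a, b) / WALK_SPEED

def earliest_arrival(x, x_prime, t0, connections, stop_coords):
    """Earliest arrival at x_prime leaving x at t0; connections sorted by departure time."""
    w_direct = walk_time(x, x_prime)
    best = t0 + w_direct if w_direct <= MAX_WALK else math.inf
    # earliest time at which each stop can be reached (initially: by walking from x)
    arr = {s: t0 + walk_time(x, c) for s, c in stop_coords.items()
           if walk_time(x, c) <= MAX_WALK}
    for dep_stop, dep_time, arr_stop, arr_time in connections:
        if arr.get(dep_stop, math.inf) <= dep_time and arr_time < arr.get(arr_stop, math.inf):
            arr[arr_stop] = arr_time
            w = walk_time(stop_coords[arr_stop], x_prime)
            if w <= MAX_WALK:
                best = min(best, arr_time + w)
    return best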
§.§ Integration of shared mobility into the time-expanded graph
SMS is assumed to provide a feeder service to traditional PT. In a feeder area F(𝐬)⊆ℝ^2 around some selected stops 𝐬 (which we also call hubs), SMS provide connection to and from 𝐬. The set of centroids in such an area is C(𝐬)=C∩F(𝐬).
In this section, we will focus on access trips (from a location to a PT stop) performed via SMS. The same reasoning applies to
egress trips, mutatis mutandis.
We assume to have a set O of observations. Each observation i∈O corresponds to an access trip and contains:
* Time of day t_i∈ℝ when the user requested a trip to the flexible service
* Location 𝐱_i∈ℝ^2 where the user is at time t_i
* Station s_i where the user wants to arrive via the SMS feeder service
* Duration w_i indicating the wait time before the user is served: it can be the time that passes between the time of the request and the time of pickup by a vehicle, in the case of ride-sharing, DRT or carpooling; or it can be the time to wait until a vehicle is available at the docks in a car-sharing or bike-sharing system.
* Travel time y_i: time spent in the SMS vehicle to arrive at 𝐬.
We interpret y_i and w_i as realizations of spatial-temporal random fields (<cit.>): for any time of day t∈ℝ and physical location 𝐱∈F(𝐬), random variables W^𝐬(𝐱,t), Y^𝐬(𝐱,t) represent the times experienced by a user appearing in t and 𝐱, for any stop 𝐬. In the following subsection we will compute estimations ŵ^𝐬(𝐮,t), ŷ^𝐬(𝐮,t) of expected values 𝔼[W^𝐬(𝐮,t)],𝔼[Y^𝐬(𝐮,t)] at centroids 𝐮∈C(𝐬).
To integrate SMS into the PT graph G, SMS are represented as a set of “virtual” trips, running between centroid 𝐮∈C(𝐬) and hub 𝐬 (Figure <ref>).
Each trip has travel time ŷ^𝐬(𝐮,t). The access connection between centroid 𝐮 and hub 𝐬 is modeled as a sequence of trips, corresponding to stoptimes (𝐮, t_j), for different values of the departure time. We thus have to compute the list of such departure times. To do so, we interpret the inter-departure time between such trips as a random field H^𝐬(𝐱,t), which represents a “virtual” headway. The value of such an interval in 𝐱 and t is also a spatial-temporal random field. We use the common approximation ((2.4.28) from <cit.> and related assumptions): H^𝐬(𝐱,t)=2· W^𝐬(𝐱,t).
Therefore, we separate stoptimes by 2·ŵ^𝐬(𝐮,t). More precisely, the stoptimes corresponding to access trips departing from centroid 𝐮 to hub 𝐬 are:
(𝐮,t_0),
(𝐮,t_j)
where t_j= t_j-1+ 2·ŵ^𝐬(𝐮,t_j-1)
for j=1,2, until 11:59 pm,
(𝐮,t_j)
where t_j = t_j+1-2·ŵ^𝐬(𝐮,t_j+1)
for j=-1,-2, until 00:00 am.
Correspondent stoptimes are added to represent the arrival of access trips (𝐬,t_j+ŷ^𝐬(𝐮,t_j)) and an edge between each departure stoptime and the respective arrival stoptime is added. A similar process is applied for egress trips. At the end of the described process, time-expanded graph G is enriched with stoptimes and edges representing SMS trips. Having done so, it is possible to reuse accessibility calculation methods for time-expanded graphs, such as CityChrone <cit.>, with no modifications required.
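The construction of the virtual access trips above can be sketched as follows; ŵ and ŷ stand for the Kriging estimates introduced in the next subsection, while the reference time t_0, the day boundaries and the small floor on the step (a guard against a zero estimate) are illustrative choices, not values from the paper.

def virtual_access_stoptimes(u, w_hat, y_hat, t0=8 * 3600, day_end=24 * 3600):
    """Departure/arrival pairs of virtual access trips from centroid u to a hub,
    spaced by twice the estimated wait time w_hat(u, t); y_hat(u, t) is the travel time."""
    departures = [t0]
    t = t0
    while True:                                   # forward until the end of the day
        t = t + max(2.0 * w_hat(u, t), 1.0)
        if t >= day_end:
            break
        departures.append(t)
    t = t0
    while True:                                   # backward until midnight
        t = t - max(2.0 * w_hat(u, t), 1.0)
        if t < 0:
            break
        departures.append(t)
    return [(td, td + y_hat(u, td)) for td in sorted(departures)]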
§.§ Estimation of Waiting and Travel Times
We now explain how we construct estimation ŵ^𝐬(𝐮,t) used in the previous subsection, for access SMS trips only. Similar reasoning can be applied to ŷ^𝐬(𝐮,t) and egress trips. We assume random field W^𝐬(𝐱,t) is approximately temporally stationary within each timeslot:
W^𝐬(𝐱,t)=W^𝐬(𝐱,t_k),
∀𝐱∈ℝ^2, ∀t∈[t_k, t_k+1[, ∀ station 𝐬
For any timeslot, we thus just need to find estimation ŵ_t_k^𝐬(𝐮) of the expected values of random field W_t_k^𝐬(𝐱) ≡ W^𝐬(𝐱,t_k).
First the observations O are projected onto time-slot [t_k, t_k+1]:
O_t_k^𝐬 ≡{observation i=(𝐱_i,w_i,y_i) | i∈O, t_i∈[t_k,t_k+1[,
i is related to an access trip to 𝐬}
Estimation ŵ_t_k^𝐬(𝐮) is computed by Ordinary Kriging ( <cit.>) on the observations O_t_k^𝐬 as a convex combination of observations w_i:
ŵ_t_k^𝐬(𝐱) = ∑_i∈O_t_k^𝐬λ_i· w_i
In short (details can be found in Section 19.4 of <cit.>), the coefficients λ_i are computed based on a semivariogram function γ_t_k^𝐬(d), which is obtained as a linear regression model with predictors d_i,j (the distances between all pairs of observations) and labels γ_i,j, which are called experimental semivariances:
γ_i,j≡1/2· (w_i-w_j)^2
The underlying assumption here is that the correlation between wait times at different locations vanishes with the distance between such locations. The semivariogram gives the “shape” of this vanishing slope. In estimation (<ref>), closer observations will have a higher weight. Under hypotheses of spatial stationarity and uniformity in all directions (<cit.>), Theorem 2.3 of <cit.> proves that Kriging gives an asymptotically unbiased estimator: as the number of observations tends to infinity, ŵ_t_k^𝐬(𝐱) tends to the “true” 𝔼[W_t_k^𝐬(𝐱)].
Note that, by means of interpolation on a limited set of observed trips, the method described here is meant to infer the potential to access opportunities, also via trips that may not have been observed yet.
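A per-timeslot estimation step of this kind can be sketched as below; the paper performs Kriging via an existing Python library (cited in the implementation section), and pykrige is assumed here purely for illustration, with the spherical variogram model as an arbitrary choice.

import numpy as np
from pykrige.ok import OrdinaryKriging

def krige_wait_times(obs_xy, obs_w, centroid_xy, variogram_model="spherical"):
    """obs_xy: (n, 2) observation locations in one timeslot, obs_w: observed wait times,
    centroid_xy: (m, 2) hexagon centroids; returns the estimated wait time per centroid."""
    ok = OrdinaryKriging(obs_xy[:, 0], obs_xy[:, 1], obs_w,
                         variogram_model=variogram_model)
    w_hat, _variance = ok.execute("points", centroid_xy[:, 0], centroid_xy[:, 1])
    return np.asarray(w_hat)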
§ IMPLEMENTATION
The methodology of Section <ref> is implemented in a Python pipeline, which we release as open source (<cit.>) and is depicted in Figure <ref>.
* We first get centroids and cells performing the tessellation via CityChrone.
* We read the file containing the observations (SMS trips). Such a file can be a simulation output or measurements of a real SMS. Each observation includes the same information as listed in Section <ref>. Observations are stored in a dataframe.
* We assume SMS is deployed as a feeder (as is the case for the MATSim simulation on which we perform our analysis). Therefore, we can classify every SMS trip as either access or egress, depending on whether the origin or the destination is a PT stop.
* To establish the feeder area F(𝐬), we find among the observations O the furthest cell from 𝐬 in which a trip to/from 𝐬 has occurred. All cells within such a distance are assumed to be in F(𝐬). Observe that feeder areas of different hubs may overlap.
* We group observations in timeslots (Figure <ref>).
* In each time slot [t_k,t_k+1[ and each centroid 𝐮 around each stop 𝐬, we perform Kriging via the library in (<cit.>) to obtain estimations ŵ_t_k^𝐬(𝐮) and ŷ_t_k^𝐬(𝐮).
* We obtain stoptimes and edges using the estimations above, as specified in (<ref>). We add stoptimes and edges to the GTFS data of conventional PT, following the specifications in <cit.>.
* We give the obtained graph to CityChrone, which will give us accessibility scores in all the centroids.
§ RESULTS AND DISCUSSION
§.§ Data Source of the observations
The observation dataset in this study comes from a MATSim simulation, from <cit.>, of door-to-door Demand-Responsive Transit (DRT), deployed as a feeder to and from conventional PT, in Paris-Saclay.
The area in which DRT is deployed is depicted in Figure <ref>, but the entire Paris Region is simulated. Scenario parameters are in Table <ref>.
§.§ Analysis of Temporal and Spatial Patterns of DRT trips
Figure <ref> clearly shows the morning peak [7:00, 10:00[, the evening peak [16:00, 19:00[, and the off-peak periods (all the other intervals).
The following figures concern DRT trips toward/from all hubs, without distinguishing between hubs.
Figure <ref> is a negative result: travel times (figures on the right) do not appear to be spatially stationary (the distribution of values measured close to the related PT stops differs from that measured further away). Therefore, our estimations are not guaranteed to be asymptotically unbiased (see Section <ref>). In our future work, we will explore indirect estimation of travel times through other indicators, e.g., the detour factor of DRT, which respect the requirements for the unbiasedness of Kriging. The correlation between wait times and distance is instead weaker (Figure <ref>).
Figure <ref> shows that wait time follows the expected peak/off-peak patterns. Values are generally low, since the simulation is configured so that a DRT trip is accepted only if the dispatcher predicts it can be served within 10 minutes. Wait times exceeding this limit might be due to the dispatcher not taking traffic correctly into account.
§.§ Estimation of Waiting and Travel Times
Figure <ref> shows that timeslots of 1h preserve the temporal pattern of trips, so 1h timeslots are preferred over smaller ones, allowing Kriging to be performed with as many observations as possible.
Within each timeslot, the estimation of wait and travel times is based on Kriging, which exploits spatial correlation. First, we note in Figure <ref> that travel times close to hubs are shorter than further away. Then, we note that the experimental semivariance in Figure <ref>, i.e., the γ_i,j between pairs of observations i,j (Equation (<ref>)), increases with the spatial distance between the observations: the closer the observations, the more similar are the respective travel times measured therein.
Such trends are not as evident for wait times (Figure <ref>), although the similarity between observations still decays with distance (Figure <ref>).
§.§ Improvement of Accessibility Brought by DRT
Figure <ref> shows headway and travel times of the virtual DRT trips added to the PT graph. We can then compute accessibility on this graph. Note that accessibility varies with the time of day (<ref>). However, in the following figures we show averages over the time periods mentioned.
First, we study a system with DRT access services only (no egress).
Figure <ref> shows that the catchment area is expanded, especially in the south: hexagons with no access to PT within a 15-minute walk can now use PT. Figure <ref> shows more clearly the improvement in accessibility brought by improved access to PT thanks to DRT. As only the access SMS feeder is added in Paris-Saclay, the areas outside Saclay do not show any changes, except slight improvements in some locations, for instance south of Versailles, possibly because travellers starting from there can make transfers in Saclay, which are enhanced by DRT.
Accessibility improvements are even greater in peak hours (Figures <ref> and <ref>), as DRT compensates for the low frequency of conventional PT.
Figure <ref> shows the improvement in accessibility when both access and egress trips are added, averaged over the entire day. The improvement is much greater than in the access-only DRT case. Moreover, improvement is also visible outside Saclay: users from everywhere can now reach opportunities in Saclay faster, thanks to DRT egress connections.
§ CONCLUSIONS
We proposed a method to compute the impact of SMS on accessibility, based on empirical observations of SMS trips. Our method can support transport agencies and authorities in the future deployment of SMS. In our future work, we will empirically validate the results by running simulations in which we replace the simulated SMS with our estimated virtual trips. Finally, we will apply our method to car- or bike-sharing feeder services and, possibly, to observations from real deployments.
§ ACKNOWLEDGEMENTS
This work has been supported by The French ANR research project MuTAS (ANR-21-CE22-0025-01) and by BayFrance. It has also been supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement no. 899987
Data were provided by the Anthropolis research project at the SystemX Technological Research
Institute, supported by the French government under the “France 2030” program.
|
http://arxiv.org/abs/2307.00499v1
|
20230702071928
|
Almost sure bounds for a weighted Steinhaus random multiplicative function
|
[
"Seth Hardy"
] |
math.NT
|
[
"math.NT",
"11K65 (Primary)"
] |
We obtain almost sure bounds for the weighted sum ∑_n ≤ tf(n)/√(n), where f(n) is a Steinhaus random multiplicative function. Specifically, we obtain the bounds predicted by exponentiating the law of the iterated logarithm, giving sharp upper and lower bounds.
§ INTRODUCTION
The Steinhaus random variable is a complex random variable that is uniformly distributed on the unit circle { z : |z| = 1 } in the complex plane. Letting (f(p))_p prime be independent Steinhaus random variables, we define the Steinhaus random multiplicative function to be the (completely) multiplicative extension of f to the natural numbers. That is
f(n) = ∏_p | n f(p)^v_p(n),
where v_p (n) is the p-adic valuation of n. Weighted sums of Steinhaus f(n) were studied in recent work of <cit.> as a model for the Riemann zeta function on the critical line. Noting that
ζ(1/2 + it) = ∑_n ≤ |t|1/n^1/2 + it + o(1),
they modelled the zeta function at height t on the critical line by the function
M_f (t) ≔∑_n ≤ tf(n)/√(n),
for f a Steinhaus random multiplicative function. The motivation for this model is that the function n^-it is multiplicative, it takes values on the complex unit circle, and (p^-it)_p prime are asymptotically independent for any finite collection of primes.
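For readers who wish to experiment numerically, one realisation of M_f(x) can be sampled directly from the definition; the following sketch (not used anywhere in the proofs, and slow for large x) draws independent uniform points on the unit circle at the primes and extends them completely multiplicatively.

import numpy as np
from sympy import factorint, primerange

rng = np.random.default_rng(0)

def sample_M_f(x):
    # independent Steinhaus values f(p), one for each prime p <= x
    f_p = {p: np.exp(2j * np.pi * rng.random()) for p in primerange(2, x + 1)}
    total = 0j
    for n in range(1, x + 1):
        f_n = 1 + 0j
        for p, e in factorint(n).items():   # complete multiplicativity: f(n) = prod f(p)^e
            f_n *= f_p[p] ** e
        total += f_n / np.sqrt(n)
    return total

print(abs(sample_M_f(10_000)))   # one realisation of |M_f(x)|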
In their work studying M_f (t), Aymone, Heap, and Zhao proved an upper bound analogous to a conjecture of <cit.> on the size of the zeta function on the critical line, which states that
max_t ∈ [T,2T] |ζ(1/2 + it)| = exp( (1 + o(1)) √(1/2log T loglog T)) .
Due to the oscillations of the zeta function, the events that model this maximum size involve sampling T log T independent copies of M_f (t).
Despite being the “wrong” object to study with regards to the maximum of the zeta function, one may also wish to find the correct size for the almost sure large fluctuations of M_f(x), since this is an interesting problem in the theory of random multiplicative functions. In this direction, Aymone, Heap, and Zhao obtained an upper bound of
M_f (x) ≪ (log x)^1/2 + ε,
almost surely, for any ε > 0. This is on the level of squareroot cancellation, since M_f(x) has variance of approximately log x. Furthermore, they obtained the lower bound that for any L>0,
lim sup_x →∞|M_f (x)|/exp((L+o(1))√(loglog x))≥ 1,
almost surely. If close to optimal, this lower bound demonstrates a far greater degree of cancellation than the upper bound, and suggests that M_f is being dictated by its Euler product. One may expect that
|M_f (x)| ≈|∏_p ≤ x( 1 - f(p)/√(p))^-1| ≈exp(∑_p ≤ xℜ f(p) /√(p)),
and the law of the iterated logarithm (see, for example, <cit.>, chapter 8) suggests that
lim sup_x →∞∑_p ≤ xℜ( f(p) ) / √(p)/√(log_2 x log_4 x) = 1 ,
where log_k denotes the k-fold iterated logarithm. In this paper we prove the following results, which confirm the strong relation between M_f (x) and the Euler product of f.
For any ε > 0, we have
M_f (x) ≪exp((1+ε)√(log_2 x log_4 x)) ,
almost surely.
For any ε > 0, we have
lim sup_x→∞|M_f(x)|/exp((1-ε)√(log_2 x log_4 x))≥ 1 ,
almost surely.
These are the best possible results one could hope for, with upper and lower bounds of the same shape, matching the law of the iterated logarithm.
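For orientation, the normalisation √(log_2 x log_4 x) in these theorems is exactly the size predicted by the law of the iterated logarithm applied to the Euler product heuristic; a brief sketch of the standard variance computation (not needed for the proofs) is as follows. Since the random variables ℜ f(p) are independent with mean 0 and variance 1/2, Mertens' theorem gives
Var( ∑_p ≤ xℜ f(p)/√(p)) = ∑_p ≤ x1/2p = 1/2log_2 x + O(1) ,
and a sum with variance s^2 is expected to fluctuate on the scale √(2 s^2 loglog s^2), which for s^2 = 1/2log_2 x is (1+o(1))√(log_2 x log_4 x).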
One of the most celebrated upper bound results in the literature is that of <cit.>, who found an upper bound for unweighted partial sums of the Rademacher multiplicative function. Originally introduced by <cit.> as a model for the Möbius function, the Rademacher random multiplicative function is the multiplicative function supported on square-free integers, with (f(p))_p prime independent and taking values { -1, 1 } with probability 1/2 each. In that paper, Wintner showed that for Rademacher f we have roughly squareroot cancellation, in that
∑_n ≤ x f(n) ≪ x^1/2 + ε,
almost surely, for any ε>0. Lau, Tenenbaum, and Wu obtained a far more precise result, proving that for Rademacher f,
∑_n ≤ x f(n) ≪√(x)(loglog x)^2 + ε,
almost surely, for any ε>0, and recent work of <cit.> has improved this result. Indeed, we find that similar techniques to those of Lau, Tenenbaum, and Wu, as well as more recent work on connecting random multiplicative functions to their Euler products (see <cit.>) lead to improvements over the bounds from <cit.>. Note that the weights 1/√(n) in the sum M_f (x) give a far stronger relation to the underlying Euler product of f than in the unweighted case, so finding the “true size” of large fluctuations is relatively more straightforward.
§.§ Outline of the proof of Theorem <ref>
For the proof of the upper bound we first partition the natural numbers into intervals, say [x_i-1,x_i), so that M_f (x) doesn't vary too much over these intervals. If the fluctuations of M_f (x) between test points (x_i) are small enough, then it suffices to obtain an upper bound only at these (x_i). This is the approach taken by both <cit.> and <cit.>. The latter took this a step further and considered each test point x_i as lying inside some larger interval, say [X_l-1, X_l). These larger intervals determine the initial splitting of our sum, which takes the shape
M_f (x_i) = ∑_n ≤ x_i
P(n) ≤ y_0f(n)/√(n) + ∑_y_j-1 < m ≤ x_i
p | m ⇒ p ∈ (y_j-1, y_j]f(m)/√(m)∑_n ≤ x_i / m
P(n) ≤ y_j-1f(n)/√(n),
with the parameters (y_j)_j=0^J depending on l. One finds that the first term and the innermost sum of the second term behave roughly like F_y_j (1/2), for y_j the smoothness parameter, where F_y (s) ∏_p ≤ y ( 1 - f(p)/p^s)^-1. Obtaining this relation is a critical step in our proof. The first sum can be seen to behave like the Euler product F_y_0 (1/2) by simply completing the range n ≤ x_i to all n ∈. The inner sum of the second term is trickier, and we first have to condition on f(p) for y_j-1 < p ≤ y_j in the outer range so that we can focus entirely on understanding these inner sums over smooth numbers. Having conditioned, it is possible for us to replace our outer sums with integrals, allowing application of the following key result, which has seen abundant use in the study of random multiplicative functions (see for example <cit.>, <cit.>, <cit.>, or <cit.>).
[(5.26) of <cit.>]
Let (a_n)_n=1^∞ be a sequence of complex numbers, and let A(s) = ∑_n=1^∞a_n/n^s denote the corresponding Dirichlet series, and σ_c the abscissa of convergence. Then for any σ > max{ 0, σ_c }, we have
∫_0^∞|∑_n ≤ x a_n|^2/x^1 + 2 σ dx = 1/2π∫_-∞^∞| A(σ + i t)/σ + it|^2 dt .
It is then a case of extracting the Euler product from the integral. To do this, we employ techniques from <cit.>, noting that some factors of the Euler product remain approximately constant over small ranges of integration. We then show that these Euler products don't exceed the anticipated size coming from the law of the iterated logarithm. To do this, we consider a sparser third set of points, (X̃_k), chosen so that the variance of ∑_p ≤X̃_k f(p)/√(p) grows geometrically in k. These intervals mimic those used in classical proofs of the law of the iterated logarithm (for example, in chapter 8 of <cit.>), and are necessary to obtain a sharp upper bound by an application of Borel–Cantelli.
§.§ Outline of the proof of Theorem <ref>
The proof of the lower bound is easier, instead relying on an application of the second Borel–Cantelli lemma. The aim is to show that, for some appropriately chosen points (T_k), the function |M_f (t)| takes a large value between T_k-1 and T_k infinitely often with probability 1. We begin by noting that
max_t ∈ [T_k-1, T_k] | M_f (t) |^2 ≥1/log T_k∫_T_k-1^T_k|M_f (t)|^2/t^1 + σ dt ,
for some small convenient σ > 0. Over this interval we have M_f (t) = ∑_n ≤ t : P(n) ≤ T_k f(n)/√(n), and so we may work with this instead. We now just need to complete the integral to the range [1,∞) so that we can apply Harmonic Analysis Result <ref>, and again obtain the Euler product. This can be done by utilising the upper bound from Theorem <ref> to complete the lower range of the integral, and an application of Markov's inequality shows that the contribution from the upper range is almost surely small when σ is chosen appropriately. After some standard manipulations to remove the integral on the Euler product side, one can find that, roughly speaking,
max_t ∈ [T_k-1, T_k] |M_f (t)|^2 ≥exp( 2 ∑_p ≤ T_kℜ f(p)/√(p)) + O(E(k)),
occurs infinitely often almost surely, for some relatively small error term E(k).
The proof is then completed using the Berry-Esseen Theorem and the second Borel–Cantelli lemma, following closely a standard proof of the law of the iterated logarithm (this time we follow Varadhan, <cit.>, section 3.9).
§ UPPER BOUND
§.§ Bounding variation between test points
We first introduce a useful lemma that will be used for expectation calculations throughout the paper.
Let {a(n)}_n ∈ℕ be a sequence of complex numbers, with only finitely many a(n) nonzero. For any l ∈ℕ, we have
𝔼| ∑_n ≥ 1a(n) f(n)/√(n)|^2l≤( ∑_n ≥ 1|a(n)|^2 τ_l (n)/n)^l,
where τ_l denotes the l-divisor function, τ_l (n) = #{(a_1,...,a_l): a_1 a_2 ... a_l = n }, and we write τ(n) for τ_2 (n).
This is Lemma 9 of <cit.>. It is proved by conjugating, taking the expectation, and applying Cauchy–Schwarz.
There exists a small constant c ∈ (0,1), such that, with
x_i = ⌊ e^i^c⌋,
we have the bound
max_x_i-1 < x ≤ x_i | M_f (x) - M_f (x_i-1) | ≪ 1 a.s.
This result closely resembles Lemma 2.3 of <cit.>, who proved a similar result for (unweighted) Rademacher f. We note that their lemma purely relies on the fourth moment of partial sums of f(n) being small. For f Steinhaus, an application of Lemma <ref> implies that for u ≤ v,
𝔼| ∑_u < n ≤ v f(n) |^4 ≤(∑_u < n ≤ vτ(n) )^2.
Now, if additionally u ≍ v, then by Theorem 12.4 of Titchmarsh <cit.>, we have
∑_u < n ≤ vτ(n) = v log v - u log u + (2 γ - 1)(v-u) + O(v^1/3)
= (v-u) log u + v log (v/u) + (2 γ - 1)(v-u) + O(v^1/3)
≪ (v-u) log u + O(v^1/3) .
So certainly
𝔼| ∑_u < n ≤ v f(n) |^4 ≪ v^2/3 (v-u)^4/3 (log v)^52/3,
which is the fourth moment bound in the work of <cit.> (equation (2.5)). Note that it suffices to consider u ≍ v, since for c ∈ (0,1), we have x_i-1≍ x_i. The rest of their proof then goes through for Steinhaus f, so that for some c ∈ (0,1), we have
max_x_i-1 < x ≤ x_i| ∑_x_i-1 < n ≤ x f(n) | ≪√(x_i)/log x_i.
It then follows from Abel summation that
max_x_i-1 < x ≤ x_i| ∑_x_i-1 < n ≤ xf(n)/√(n)| ≪√(x_i/x_i-1)1/log x_i≪ 1,
as required. We fix the value of c ∈ (0,1) for the remainder of this section, and remark that this bound is stronger than we need.
§.§ Bounding on test points
To complete the proof of Theorem <ref>, it suffices to prove the following proposition. For any ε>0, we have
M_f (x_i) ≪exp((1+ε) √(log_2 x_i log_4 x_i)), ∀ i ,
almost surely.
By the triangle inequality, we have
|M_f(x)| ≤ |M_f(x_i-1)| + max_x_i-1 < x ≤ x_i | M_f (x) - M_f (x_i-1) |.
Theorem <ref> then follows from Proposition <ref> (which bounds the first term) and Lemma <ref> (which bounds the second term).
The rest of this section is devoted to proving Proposition <ref>. We begin by fixing ε>0. Throughout we will assume this is sufficiently small, and implied constants (from ≪ or “Big Oh” notation) will depend only on ε, unless stated otherwise. Beginning similarly to <cit.>, we define the points X_l = e^e^l, and for some α∈ (0,1/2) chosen at the end of subsection <ref>, we define
y_0 = exp(ce^l/6l), y_j = y_j-1^e^α, for 1 ≤ j ≤ J, 2.01
where J is minimal so that y_J ≥ X_l. One can calculate that
J ≪log l/α. 2.02
The points X_l partition the positive numbers so that each x_i lies inside some interval [X_l-1, X_l). As mentioned, we also consider X_l-1 as being inside some very large intervals [X̃_k-1, X̃_k), where X̃_k = exp(exp(ρ^k)) for some ρ>1 depending only on ε, specified at the end of subsection <ref>. Throughout we will assume that k, and subsequently i and l, are sufficiently large. To prove Proposition <ref>, it suffices to show that the probability of
𝒜_k = {sup_X̃_k-1≤ X_l-1 < X̃_ksup_X_l-1≤ x_i < X_l|M_f (x_i)|/exp((1 + ε)√(log_2 x_i log_4 x_i)) > 4 },
is summable in k, since this will allow for application of the first Borel–Cantelli lemma. As mentioned, we first split the sum according to the prime factorisation of each n,
M_f (x_i) = S_i,0 + ∑_1 ≤ j ≤ J S_i,j,
where
S_i,0 = ∑_n ≤ x_i
P(n) ≤ y_0f(n)/√(n), 2.03
S_i,j = ∑_y_j-1 < m ≤ x_i
p | m ⇒ p ∈ (y_j-1, y_j]f(m)/√(m)∑_n ≤ x_i / m
P(n) ≤ y_j-1f(n)/√(n). 2.04
It is fairly straightforward to write S_i,0 in terms of an Euler product by completing the sum over n. The S_i,j terms are a bit more complicated, and we will have to do some conditioning to obtain the Euler products which we expect dictate the inner sums. Similar ideas play a key role in the work of <cit.>. With this in mind, we have
ℙ(𝒜_k) ≤ℙ(ℬ_0,k) + ℙ(ℬ_1,k), 2.05
where
ℬ_0,k = {sup_X̃_k-1≤ X_l-1 < X̃_ksup_X_l-1≤ x_i < X_l|S_i,0|/exp((1 + ε)√(log_2 x_i log_4 x_i)) > 2 }, 2.06
ℬ_1,k = {sup_X̃_k-1≤ X_l-1 < X̃_ksup_X_l-1≤ x_i < X_l∑_1 ≤ j ≤ J |S_i,j|/exp((1 + ε)√(log_2 x_i log_4 x_i)) > 2 } .
It suffices to prove that both ℙ(ℬ_0,k) and ℙ(ℬ_1,k) are summable.
§.§ Conditioning on likely events
To proceed, we will utilise the following events, recalling that F_y (s) = ∏_p ≤ y ( 1 - f(p)/p^s)^-1.
G_j,l = {sup_p ≤ y_j-1 |F_p (1/2)|/exp((1 + ε)√(log_2 X_l-1log_4 X_l-1))≤1/l^5}, 2.07
I_j,l^(1) = {∫_-1/log y_j-1^1/log y_j-1|F_y_j-1(1/2 + 1/log X_l + it)/F_y_j-1 (1/2)|^2 dt ≤l^4/log y_j-1} ,
I_j,l^(2) = {∑_1/log y_j-1≤ |T| ≤ 1/2
T dyadic1/T^2∫_T^2T| F_y_j-1(1/2 + 1/log X_l + it)/F_e^1/T (1/2)|^2 dt ≤ l^4 log y_j-1} ,
I_j,l^(3) = {∫_1/2^∞|F_y_j-1(1/2 + 1/log X_l + it)|^2 + |F_y_j-1(1/2 + 1/log X_l - it)|^2/t^2 dt ≤ l^4 log y_j-1}.
The summand in the events I_j,l^(2) should be adjusted for negative T, in which case one should flip the range of integration, and instead take F_e^1/|T|(1/2) in the denominator of the integrand. For the sake of tidiness, we have left out these conditions.
These events will be very useful to condition on when it comes to estimating the probabilities in (<ref>). Ideally, all of these events will occur eventually, and we will show that this is the case with probability one. Therefore, we define the following intersections of these events, giving “nice behaviour” for S_i,j for all i,j where x_i runs over the range [X_l-1,X_l) for X_l-1∈ [X̃_k-1,X̃_k). We stress that J (defined in (<ref>)) depends on l .
G_k = ⋂_l : X̃_k-1≤ X_l-1 < X̃_k⋂_j=1^J G_j,l , I_j,l = ⋂_r=1^3 I_j,l^(r) , I_k = ⋂_l : X̃_k-1≤ X_l-1 < X̃_k⋂_j=1^J I_j,l . 2.08
Proposition <ref> follows if ℙ(G_k^c) and ℙ(I_k^c) are summable.
We will later show that ℙ(G_k^c) and ℙ(I_k^c) are indeed summable in subsections <ref> and <ref> respectively. We proceed with proving this proposition, which is quite difficult and constitutes a large part of the paper.
First we will show that ℙ(ℬ_0,k) is summable. It follows from definition (<ref>) that
S_i,0 = F_y_0 (1/2) - ∑_n > x_i
P(n) ≤ y_0f(n)/√(n).
By the triangle inequality (recalling (<ref>)), we have
ℙ( ℬ_0,k) ≤ℙ( sup_X̃_k-1≤ X_l-1 < X̃_k|F_y_0(1/2 )|/exp((1 + ε)√(log_2 X_l-1log_4 X_l-1)) > 1 )
+ ℙ( sup_X̃_k-1≤ X_l-1 < X̃_ksup_X_l-1≤ x_i < X_l| ∑_n > x_i
P(n) ≤ y_0f(n)/√(n)|/exp((1 + ε)√(log_2 X_l-1log_4 X_l-1)) > 1 ).
We note that ℙ(G_k^c) (where G_k is as defined in (<ref>)) is larger than this first term.
Since we are assuming that ℙ(G_k^c) is summable, we need only show that the second term is summable. By the union bound and Markov's inequality with second moments (using Lemma <ref> to evaluate the expectation, which is applicable by the dominated convergence theorem), we have
ℙ( sup_X̃_k-1≤ X_l-1 < X̃_ksup_X_l-1≤ x_i < X_l| ∑_n > x_i
P(n) ≤ y_0f(n)/√(n)|/exp((1 + ε)√(log_2 X_l-1log_4 X_l-1)) > 1 ) 2.09
≤∑_X̃_k-1≤ X_l-1 < X̃_k∑_X_l-1≤ x_i < X_l∑_n > x_i
P(n) ≤ y_01/n/exp(2(1 + ε)√(log_2 X̃_k-1log_4 X̃_k-1)).
Here we apply Rankin's trick to note that
∑_n > x_i
P(n) ≤ y_01/n ≤ x_i^-1/log y_0∏_p ≤ y_0( 1 - 1/p^1 - 1/log y_0)^-1≪log y_0/x_i^1/log y_0.
Recalling that y_0 = exp(c e^l/6 l), we can bound the probability (<ref>) by
≪ 1/exp(2√(loglogX̃_k-1))∑_X̃_k-1≤ X_l-1 < X̃_k∑_X_l-1≤ x_i < X_llog y_0/x_i^1 / log y_0
≪ 1/exp(2ρ^(k-1)/2)∑_X̃_k-1≤ X_l-1 < X̃_k1/l e^l(6/ce - 1/c - 1)≪1/exp(2ρ^(k-1)/2),
which is summable (with c as in subsection <ref>). Hence if ℙ(G_k^c) is summable, then ℙ(ℬ_0,k) is summable, as required.
We now proceed to show that ℙ(ℬ_1,k) is summable, which will conclude the proof of Proposition <ref>. Here we introduce the events in (<ref>), giving
ℙ(ℬ_1,k) ≤ℙ(ℬ_1,k∩ G_k ∩ I_k) + ℙ(G_k^c) + ℙ(I_k^c).
Therefore, assuming the summability of the trailing terms, it suffices to show that ℙ(ℬ_1,k∩ G_k ∩ I_k) is summable. As in <cit.> (equation (3.16)), by the union bound, then taking 2q'th moments and using Hölder's inequality, we have
ℙ(ℬ_1,k∩ G_k ∩ I_k) ≤∑_X̃_k-1≤ X_l-1 < X̃_k∑_X_l-1≤ x_i < X_l∑_1 ≤ j ≤ J𝔼(|S_i,j|^2q1_G_j,l∩ I_j,l) J^2q-1/exp(2q(1 + ε)√(log_2 x_i log_4 x_i)) . 2.10
We will choose q ∈ℕ depending on k at the very end of this subsection. We let ℱ_y_j-1 = σ({f(p): p ≤ y_j-1}) be the σ-algebra generated by f(p) for all p ≤ y_j-1, forming a filtration. Note that G_j,l and I_j,l are ℱ_y_j-1-measurable. We introduce a function V of x_i that slowly goes to infinity with i, specified at the end of subsection <ref>.
Recalling the definition of S_i,j from (<ref>), by our expectation result (Lemma <ref>), we have
[ |S_i,j|^2q1_G_j,l∩ I_j,l] = [ (|S_i,j|^2q1_G_j,l∩ I_j,l | ℱ_y_j-1) ]
≤[ 1_G_j,l∩ I_j,l( ∑_y_j-1 < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m| ∑_n ≤ x_i/m
P(n) ≤ y_j-1f(n)/√(n)|^2 )^q ]
= [ 1_G_j,l∩ I_j,l( ∑_y_j-1 < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]V τ_q (m)/m^2∫_m^m(1+1/V)| ∑_n ≤ x_i/m
P(n) ≤ y_j-1f(n)/√(n)|^2 dt )^q ]
≤ 2^3q( 𝔼( 𝒞_i,j^q ) + 𝔼( 𝒟_i,j^q ) ) , 2.11
where
𝒞_i,j = 1_G_j,l∩ I_j,l∑_y_j-1 < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]V τ_q (m)/m^2∫_m^m(1+1/V)| ∑_n ≤ x_i/t
P(n) ≤ y_j-1f(n)/√(n)|^2 dt , 2.12
𝒟_i,j =∑_y_j-1 < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]V τ_q (m)/m^2∫_m^m(1+1/V)| ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-1f(n)/√(n)|^2 dt ,
and we have used the fact that |A+B|^r ≤ 2^r (|A|^r + |B|^r).
§.§ Bounding the main term 𝒞_i,j
We will see that our choices of G_j,l and I_j,l completely determine an upper bound for 𝒞_i,j. We first swap the order of summation and integration to obtain
𝒞_i,j = 1_G_j,l∩ I_j,l∫_y_j-1^x_i| ∑_n ≤ x_i/t
P(n) ≤ y_j-1f(n)/√(n)|^2 ∑_t / (1 + 1/V) ≤ m ≤ t
p|m ⇒ p ∈ (y_j-1, y_j]V τ_q (m)/m^2 dt .
2.13
To estimate the sum over the divisor function we employ the following result of Harper <cit.> (section 2.1, where it is also referred to as Number Theory Result 1).
Let 0 < δ < 1, let r ≥ 1 and suppose max{ 3, 2r }≤ y ≤ z ≤ y^2 and that 1 < u ≤ v(1-y^-δ). Let Ω(m) equal the number of prime factors of m counting multiplicity. Then
∑_u ≤ m ≤ v
p | m ⇒ y ≤ p ≤ z r^Ω(m)≪_δ(v-u)r/log y∏_y ≤ p ≤ z( 1 - r/p)^-1 .
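As a numerical illustration of this result (again, not needed for the argument), the sketch below evaluates both sides for toy values of u, v, y, z and r chosen to satisfy the hypotheses; since the statement only asserts a bound up to an unspecified implied constant, the comparison is qualitative.
    import math

    def omega_if_factors_in_range(m, y, z):
        # returns Omega(m) if every prime factor of m lies in [y, z], else None
        count, p, n = 0, 2, m
        while p * p <= n:
            while n % p == 0:
                if p < y or p > z:
                    return None
                n //= p
                count += 1
            p += 1
        if n > 1:
            if n < y or n > z:
                return None
            count += 1
        return count

    u, v, y, z, r = 10 ** 4, 2 * 10 ** 4, 11, 100, 2   # toy values satisfying the hypotheses
    lhs = 0
    for m in range(u, v + 1):
        o = omega_if_factors_in_range(m, y, z)
        if o is not None:
            lhs += r ** o
    rhs = (v - u) * r / math.log(y)
    for p in range(y, z + 1):
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):   # p prime
            rhs *= 1.0 / (1.0 - r / p)
    print(lhs, rhs)   # the two agree up to the unspecified implied constant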
We note that τ_q (m) ≤ q^Ω(m) by submultiplicativity of τ_q. The above result is applicable assuming that V is, say, smaller than √(y_0), and q is an integer with 2q ≤ y_0 (indeed, q will be approximately l ≤ y_0 / 2 and V will be roughly (log X_l)^2l^2≤√(y_0)), in which case we have
∑_t / (1 + 1/V) ≤ m ≤ t
p|m ⇒ p ∈ (y_j-1, y_j]V τ_q (m)/m^2≪ V/t^2∑_t / (1 + 1/V) ≤ m ≤ t
p|m ⇒ p ∈ (y_j-1, y_j]τ_q (m) ≪V/t^2∑_t / (1 + 1/V) ≤ m ≤ t
p|m ⇒ p ∈ (y_j-1, y_j] q^Ω(m)2.14
≪ q/t log y_j-1∏_y_j-1 < p ≤ y_j( 1 - q/p)^-1.
Since q will be very small compared to y_0 (in particular, q = o(log y_0)), we have
∏_y_j-1 < p ≤ y_j( 1 - q/p)^-1≪( log y_j/log y_j-1)^q.
Using the above and (<ref>), we have
𝒞_i,j≪q 1_G_j,l∩ I_j,l/log y_j-1( log y_j/log y_j-1)^q∫_y_j-1^x_i| ∑_n ≤ x_i/t
P(n) ≤ y_j-1f(n)/√(n)|^2 dt/t.
Proceeding similarly to Harper <cit.>, we perform the change of variables z = x_i / t, giving
𝒞_i,j≪q 1_G_j,l∩ I_j,l/log y_j-1( log y_j/log y_j-1)^q∫_1^x_i/y_j-1| ∑_n ≤ z
P(n) ≤ y_j-1f(n)/√(n)|^2 dz/z.
To apply Harmonic Analysis Result <ref>, we need the power of z in the denominator of the integrand to be greater than 1, and so we introduce a factor of (1 / z)^2/log x_i. By the definitions of y_j-1 and y_j from (<ref>), we have
𝒞_i,j ≪q e^α q1_G_j,l∩ I_j,l/log y_j-1∫_1^x_i/y_j-1| ∑_n ≤ z
P(n) ≤ y_j-1f(n)/√(n)|^2 dz/z^1 + 2/log x_i2.15
≪q e^α q1_G_j,l∩ I_j,l/log y_j-1∫_1^∞| ∑_n ≤ z
P(n) ≤ y_j-1f(n)/√(n)|^2 dz/z^1 + 2/log X_l ,
where we have completed the range of the integral to [1, ∞), and used the fact that x_i < X_l, allowing us to remove dependence on x_i without much loss, since log x_i varies by a constant factor for x_i ∈ [X_l-1,X_l). This is a key point: we have related M_f (x_i) to an Euler product which depends only on the large interval [X_l-1, X_l) in which x_i lies. We now apply Harmonic Analysis Result <ref>, giving
𝒞_i,j≪q e^α q1_G_j,l∩ I_j,l/log y_j-1∫_-∞^∞| F_y_j-1 ( 1/2 + 1/log X_l + it )/1 / log X_l + it|^2 dt .
2.16
This integral is not completely straightforward to handle, as the variable of integration is tied up with the random Euler-product F_y_j-1. To proceed, we follow the ideas of <cit.> in performing a dyadic decomposition of the integral, and introducing constant factors (with respect to t, but random) that allow us to extract the approximate size of the integral over certain ranges. The size of these terms is then handled using the conditioning on I_j,l (recalling the definitions from (<ref>) and (<ref>)).
First of all, note that over the interval [T,2T], the factor p^it = e^it log p varies a bounded amount for any p ≤ e^1/T. Therefore, the Euler factors (1 - f(p)/p^1/2 + 1/ log X_l + it)^-1 are approximately constant on [T,2T] for p ≤ e^1/T. Subsequently, when appropriate, we will approximate the numerator by |F_e^1/T (1/2)|^2. We write
∫_-∞^∞| F_y_j-1 ( 1/2 + 1/log X_l + it )/1 / log X_l + it|^2 dt ≤∫_-1/log y_j-1^1/log y_j-1 + ∑_1/log y_j-1≤ |T| ≤ 1/2
T dyadic∫_T^2T + ∫_1/2^∞ + ∫_-∞^-1/2 , 2.17
where each integrand is the same as that on the left hand side. Here, “T dyadic” means that we will consider T = 2^n/log y_j-1 so that T lies in the given range. Negative T are considered similarly, and one should make the appropriate adjustments in accordance with Remark <ref>. For the first integral on the right hand side of (<ref>), we have
∫_-1/log y_j-1^1/log y_j-1| F_y_j-1(1/2 + 1/log X_l + it)/1/log X_l + it|^2 dt
≤ (log X_l )^2 ∫_-1/log y_j-1^1/log y_j-1| F_y_j-1(1/2 + 1/log X_l + it)/F_y_j-1(1/2)|^2 dt |F_y_j-1 (1/2)|^2
≤l^4 (log X_l)^2/log y_j-1|F_y_j-1 (1/2)|^2 ,
due to conditioning on I_j,l^(1) in (<ref>). We proceed similarly for the second term on the right hand side of (<ref>), as we have
∑_1/log y_j-1≤ |T| ≤ 1/2
T dyadic ∫_T^2T| F_y_j-1(1/2 + 1/log X_l + it)/1/log X_l + it|^2 dt
≤∑_1/log y_j-1≤ |T| ≤ 1/2
T dyadic1/T^2∫_T^2T| F_y_j-1(1/2 + 1/log X_l + it)/F_e^1/T (1/2)|^2 dt |F_e^1/T (1/2)|^2
≤ l^4 log y_j-1sup_1/log y_j-1≤ T ≤ 1/2|F_e^1/T(1/2)|^2,
by the conditioning on I^(2)_j,l. Finally, the last two integrals can be bounded directly from the conditioning on I_j,l^(3). Therefore, we find that the integral on the left hand side of (<ref>) is
≪l^4 (log X_l)^2 /log y_j-1sup_p ≤ y_j-1|F_p (1/2)|^2,
and so by (<ref>), we have
𝒞_i,j≪ q e^α q l^4 (log X_l)^2 1_G_j,l∩ I_j,l/(log y_j-1)^2sup_p ≤ y_j-1|F_p (1/2)|^2 .
We bound the Euler product term using our conditioning on G_j,l from (<ref>),
𝒞_i,j≪ q e^α q (log X_l)^2/l^6( log y_j-1)^2exp(2 (1 + ε) √(log_2 X_l-1log_4 X_l-1)) .
2.18
§.§ Bounding the error term 𝒟_i,j
We now proceed with bounding 𝔼( 𝒟_i,j^q ), where 𝒟_i,j is defined in (<ref>). Similarly to Harper <cit.> (in `Proof of Propositions 4.1 and 4.2') we first consider ( 𝔼( 𝒟_i,j^q ) )^1/q, giving us access to Minkowski's inequality. By definition, we have
( 𝔼( 𝒟_i,j^q ) )^1/q = [ ( ∑_y_j-1 < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]V τ_q (m)/m^2∫_m^m(1+1/V)| ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-1f(n)/√(n)|^2 dt )^q ]^1/q,
and by Minkowski's inequality,
( 𝔼( 𝒟_i,j^q ) )^1/q≤∑_y_j-1 < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m[ ( V/m∫_m^m(1+1/V)| ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-1f(n)/√(n)|^2 dt )^q ]^1/q.
Now applying Hölder's inequality (noting that the integral is normalised) and splitting the outer sum over m at x_i/V, we have
( 𝔼( 𝒟_i,j^q ) )^1/q ≤∑_y_j-1 < m ≤ x_i/V
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m[ V/m∫_m^m(1+1/V)| ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-1f(n)/√(n)|^2q dt ]^1/q2.19
+ ∑_x_i / V < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m[ V/m∫_m^m(1+1/V)| ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-1f(n)/√(n)|^2q dt ]^1/q.
We will show that these terms on the right hand side are small. Beginning with the second term, we note that the length of the innermost sum over n is at most (x_i/m)(1 - 1/(1+1/V)), and since m > x_i /V, this is ≤ 1/(1+1/V) < 1. Therefore, the innermost sum contains at most one term, giving the upper bound
∑_x_i / V < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m[ V/m∫_m^m(1+1/V)t^q/x_i^q dt ]^1/q≤2/x_i∑_x_i / V < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m),
where we have taken the maximum value of t in the integral and assumed that 1+1/V < 2, since V will go to infinity with i. Similarly to (<ref>), we use sub-multiplicativity of τ_q (m) and apply Number Theory Result <ref> (whose conditions are certainly satisfied on the same assumptions as for (<ref>)), giving a bound
≤2/x_i∑_x_i / V < m ≤ x_i
p|m ⇒ p ∈ (y_j-1,y_j] q^Ω(m)≪q/log y_j-1∏_y_j-1 < p ≤ y_j( 1 - q/p)^-1≪q e^α q/log y_j-1, 2.20
which will turn out to be a sufficient bound for our purpose. We now bound the first term of (<ref>), which requires a little more work. We first use Lemma <ref> to evaluate the expectation in the integrand. This gives the upper bound
∑_y_j-1 < m ≤ x_i/V
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m[ V/m∫_m^m(1+1/V)( ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-1τ_q (n)/n)^q dt ]^1/q.
Applying Cauchy–Schwarz, we get an upper bound of
∑_y_j-1 < m ≤ x_i/V
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m[ V/m∫_m^m(1+1/V)( ( ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-11/n^2) ( ∑_x_i /t < n ≤ x_i/m
P(n) ≤ y_j-1τ_q^2 (n) ) )^q/2 dt ]^1/q
≤ ∑_y_j-1 < m ≤ x_i/V
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m( ∑_x_i /m(1+1/V) < n ≤ x_i/m1/n^2)^1/2( ∑_n ≤ x_i/m
τ_q^2 (n) )^1/2,
where we have taken t maximal and used the fact that τ_q (n)^2 ≤τ_q^2 (n). By bounding the sum by its length times its largest term, one finds that ∑_x_i /m(1+1/V) < n ≤ x_i/m1/n^2≪ m/(x_i V). Furthermore, using the fact that ∑_n ≤ xτ_k (n) ≤ x(2log x)^k-1 for x≥ 3, k ≥ 1 (see Lemma 3.1 of <cit.>), we obtain the bound
≪1/V^1/2∑_y_j-1 < m ≤ x_i/V
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m( 2 log x_i )^q^2/2 ,
Completing the sum over m, we have the upper bound
≪1/V^1/2∑_m ≥ 1
p|m ⇒ p ∈ (y_j-1,y_j]τ_q (m)/m( 2 log x_i )^q^2/2 ≪1/V^1/2( 2 log x_i )^q^2/2∏_y_j-1 < p ≤ y_j( 1 - 1/p)^-q
≪2^q^2/2 e^α q (log x_i)^q^2/2/V^1/2.
Combining this bound with the bound for the second term (<ref>), we get a bound for the right hand side of (<ref>), from which it follows that
𝔼( 𝒟_i,j^q ) ≤ K^q ( q e^α q/log y_j-1 + 2^q^2/2 e^α q (log x_i)^q^2/2/V^1/2)^q ,
for some absolute constant K>0. Taking V = (log x_i)^2q^2, and α = 1/q, this bound will certainly be negligible compared to the main term coming from (<ref>). We remark that this value of V is appropriate for use in Number Theory Result <ref> in (<ref>) and (<ref>).
§.§ Completing the proof of Proposition <ref>
Since the main term from (<ref>) dominates the error term above, from (<ref>) we obtain that
𝔼(|S_i,j|^2q1_G_j,l∩ I_j,l ) ≤( R_ε q (log X_l)^2 exp(2 (1 + ε) √(log_2 X_l-1log_4 X_l-1))/l^6 (log y_j-1)^2)^q ,
for some positive constant R_ε arising from the “Big Oh” implied constant in (<ref>). Now (<ref>) gives a bound on the probability
(ℬ_1,k∩ G_k ∩ I_k) ≤∑_X̃_k-1≤ X_l-1 < X̃_k∑_X_l-1≤ x_i < X_l∑_1 ≤ j ≤ J J^2q-1( R_ε q (log X_l)^2 /l^6 (log y_j-1)^2)^q
≤∑_X̃_k-1≤ X_l-1 < X̃_k∑_X_l-1≤ x_i < X_l( 16 R_ε J^2 q/c^2 l^4)^q .
We take q = ⌊ρ^k ⌋ = ⌊loglogX̃_k ⌋, which satisfies the assumptions for Number Theory Result <ref> in (<ref>) and (<ref>). Using the fact J ≪ρ^k log l ≪ k ρ^k from (<ref>), and noting that there are no more than e^l/c terms in the innermost sum, and no more than ρ^k terms in the outermost sum, and that ρ^k-1≤ l ≤ρ^k+1 for large k, we find that taking trivial bounds gives
(ℬ_1,k∩ G_k ∩ I_k) ≪( R'_ε k^2/ρ^k)^⌊ρ^k⌋,
when k is sufficiently large, for some constant R'_ε>0 depending only on ε (since ρ>1 depends only on ε). Therefore, (ℬ_1,k∩ G_k ∩ I_k) is summable. Recalling (<ref>), this completes the proof of Proposition <ref>.
§.§ Law of the iterated logarithm-type bound for the Euler product
In this subsection, we prove that (G_k^c) (as defined in (<ref>)) is summable. Recall X̃_k = e^e^ρ^k for some ρ > 1 depending on ε, chosen shortly. It suffices to prove that
( sup_X̃_k-1≤ X_l-1 < X̃_ksup_p ≤ X_l |F_p (1/2)|/exp((1 + ε/2)√(log_2 X̃_k-1log_4 X̃_k-1)) > 1 ), 2.21
is summable in k, noting that l^5 = (log_2 X_l)^5 = o(exp(√(log_2 X_l-1))), and so we removed the l^5 factor in (<ref>) by altering ε in the denominator.
To prove (<ref>), we will utilise two standard results from probability.
[Lévy inequality, Theorem 3.7.1 of <cit.>]
Let X_1, X_2,... be independent, symmetric random variables and S_n = X_1 + X_2 + ... + X_n. Then for any x,
ℙ(max_1 ≤ m ≤ n S_m > x) ≤ 2 ℙ(S_n > x) .
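Before continuing, here is a small Monte Carlo illustration of this inequality for a symmetric ±1 random walk (an arbitrary toy choice, not the prime sums used below; for such a walk the two sides are in fact nearly equal, by the reflection principle).
    import random

    n, trials, x = 400, 5000, 25.0
    hit_max = hit_end = 0
    for _ in range(trials):
        s = 0.0
        running_max = float("-inf")
        for _ in range(n):
            s += random.choice((-1.0, 1.0))
            running_max = max(running_max, s)
        hit_max += running_max > x
        hit_end += s > x
    print(hit_max / trials, 2 * hit_end / trials)
    # the first value is at most the second, up to Monte Carlo noise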
Our S_m will more or less be the random walk ∑_p ≤ m f(p)/√(p). This result tells us that the distribution of the maximum of a random walk is controlled by the distribution of the endpoint, allowing us to remove the supremum in (<ref>). The next result will allow us to handle the resulting term.
[Upper exponential bound, Lemma 8.2.1 of <cit.>]
Let X_1, X_2,... be mean zero independent random variables. Let σ_k^2 = Var(X_k), and s_n^2 = ∑_k=1^n σ_k^2. Furthermore, suppose that, for c_n > 0,
|X_k| ≤ c_n s_n a.s. for k = 1,2,...,n .
Then, for 0 < x < 1/c_n,
ℙ( ∑_k=1^n X_k > x s_n ) ≤exp( - x^2/2( 1 - x c_n/2) ) .
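The following sketch checks this exponential bound empirically for sums of bounded, mean-zero variables; the Uniform[-1,1] summands and the particular n and x are arbitrary illustrative choices and have nothing to do with the quantities in the proof.
    import math
    import random

    n, trials = 200, 20000
    s_n = math.sqrt(n / 3.0)        # sum of n Uniform[-1,1] variables, each of variance 1/3
    c_n = 1.0 / s_n                 # |X_k| <= 1 = c_n * s_n almost surely
    x = 2.0                         # any 0 < x < 1/c_n = s_n is allowed
    hits = 0
    for _ in range(trials):
        total = sum(random.uniform(-1.0, 1.0) for _ in range(n))
        hits += total > x * s_n
    bound = math.exp(-0.5 * x * x * (1.0 - 0.5 * x * c_n))
    print(hits / trials, bound)     # the empirical tail sits below the exponential bound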
We proceed by writing the probability in (<ref>) as
( sup_x ≤ Z|∏_p ≤ x(1 - f(p)/√(p))^-1|/exp((1 + ε/2)√(log_2 X̃_k-1log_4 X̃_k-1)) > 1 ),
where Z = exp(exp(⌈ρ^k ⌉)) is the largest possible value that X_l can take in this range; it is chosen minimally so that Z = X_l ≥X̃_k. Taking the exponential of the logarithm of the numerator, the above probability is equal to
( sup_x ≤ Z -∑_p ≤ xlog(1 - f(p)/√(p)) > (1 + ε/2) √(log_2 X̃_k-1log_4 X̃_k-1))
= ( sup_x ≤ Z∑_p ≤ x∑_k ≥ 1 f(p)^k/k p ^k/2 > (1 + ε/2) √(log_2 X̃_k-1log_4 X̃_k-1))
≤ ( sup_x ≤ Z∑_p ≤ x( f(p)/√(p) + f(p)^2/2p) > (1 + ε/3) √(log_2 X̃_k-1log_4 X̃_k-1))
≤ ( sup_x ≤ Z∑_p ≤ x f(p)/√(p) > (1 + ε/4) √(log_2 X̃_k-1log_4 X̃_k-1))
+ ( sup_x ≤ Z∑_p ≤ x f(p)^2/2p > ε/12√(log_2 X̃_k-1log_4 X̃_k-1))
These probabilities can be bounded by the Lévy inequality, Probability Result <ref>. The second probability is then summable by Markov's inequality with second moments. It remains to show that
( ∑_p ≤ Z f(p)/√(p) > (1 + ε/4)√(log_2 X̃_k-1log_4 X̃_k-1)), 2.22
is summable, which we prove using the upper exponential bound (Probability Result <ref>). By a straightforward calculation using the fact that 2Re(z) = z + z̅, we have Var( Re(f(p))/√(p) ) = 1/2p. Therefore we have s_Z^2 = ∑_p ≤ Z 1/2p. Let c_Z = 2 / s_Z. Certainly such a choice satisfies | f(p)/√(p)| ≤ c_Z s_Z for all primes p, so Probability Result <ref> implies that for any x ≤ 1 / c_Z = s_Z / 2,
( ∑_p ≤ Z f(p)/√(p) > x ( ∑_p ≤ Z1/2p)^1/2) ≤exp( - x^2/2( 1 - x/( ∑_p ≤ Z 1/2p)^1/2) ).
We take
x = (1 + ε/4) (log_2 X̃_k-1log_4 X̃_k-1/∑_p ≤ Z 1/2p)^1/2.
Recall that Z = exp(exp(⌈ρ^k ⌉ )). Using the fact that Z > X̃_k-1, it is not hard to show that, for large k, this value of x is admissible, since x ≪√(log_4 Z) and s_Z ≫√(log_2 Z), hence x < s_Z/2. This value of x gives an upper bound for the probability in (<ref>) of
≤ 2 exp( - (1 + ε/4)^2 log_2 X̃_k-1log_4 X̃_k-1/∑_p ≤ Z 1/p( 1 - (1 + ε/4)√(log_2 X̃_k-1log_4 X̃_k-1)/∑_p ≤ Z 1/2p ) ),
Since for large k we have ∑_p ≤ Z 1/2p ≫log_2 Z ≫log_2 X̃_k-1, we find that the term in the innermost parenthesis is of size 1 + o(1). Furthermore, since ∑_p ≤ Z 1/p = log_2 Z + O(1), the previous equation is bounded above by
≪exp( - (1 + o(1)) (1 + ε/4)^2 log_2 X̃_k-1log_4 X̃_k-1/log_2 Z + O(1)).
Inserting the definitions X̃_k-1 = exp(exp(ρ^k-1)) and Z = exp(exp(⌈ρ^k⌉)), this is
≪exp( - (1 + o(1)) (1 + ε/4)^2 ρ^k-1log ((k-1) logρ)/⌈ρ^k⌉ + O(1)).
Note that for ρ>1 fixed, for sufficiently large k we have ⌈ρ^k ⌉≤ρ^k+1. Therefore, the last term can be bounded above by
≪1/((k-1)logρ)^(1 + ε/4)^2(1 + o(1))/ρ^2.
Taking ρ sufficiently close to 1 (in terms of ε), this is summable in k. Subsequently, the probability (<ref>) is summable, as required.
§.§ Probability of complements of integral events are summable
Here we prove that ℙ(I_k^c) is summable. Recalling (<ref>) and (<ref>), we note that by the union bound, it suffices to show that the following are summable.
I_1,k^c ⋃_l : X̃_k-1≤ X_l-1 < X̃_k⋃_j=1^J {∫_-1/log y_j-1^1/log y_j-1|F_y_j-1(1/2 + 1/log X_l + it)/F_y_j-1 (1/2)|^2 dt > l^4/log y_j-1} , 2.23
I_2,k^c ⋃_l : X̃_k-1≤ X_l-1 < X̃_k⋃_j=1^J {∑_1/log y_j-1≤ |T| ≤ 1/2
T dyadic1/T^2∫_T^2T| F_y_j-1(1/2 + 1/log X_l + it)/F_e^1/T (1/2)|^2 dt > l^4 log y_j-1} ,
I_3,k^c ⋃_l : X̃_k-1≤ X_l-1 < X̃_k⋃_j=1^J {∫_1/2^∞|F_y_j-1(1/2 + 1/log X_l + it)|^2 + |F_y_j-1(1/2 + 1/log X_l - it)|^2/t^2 dt > l^4 log y_j-1}.
To prove that these events have summable probabilities, we wish to apply Markov's inequality, and so we need to be able to evaluate the expectation of the integrands. We employ the following result, which is similar to Lemma 3.1 of <cit.>.
For any σ >0, t ∈, and any x,y ≥ 2 such that x ≤ y and σlog y ≤ 1, we have
| F_y (1/2 + σ + it)/F_x (1/2)|^2 ≪exp(C t^2 (log x)^2)( log y/log x),
for some absolute constant C>0, and where the implied constant is also absolute.
Our choices for the range of the integrals and the denominators in our integrands, made in subsection <ref>, ensure that |t|(log x) is bounded when we apply the above result.
The proof follows from standard techniques used in Euler Product Result 1 of <cit.>, the key difference being that we do not have σ in the argument of the denominator. We therefore find that
| F_y (1/2 + σ + it)/F_x (1/2)|^2 = ∏_p ≤ x( 1 + |p^- σ - it - 1|^2/p + O ( 1/p^3/2) ) ∏_x < p ≤ y( 1 + 1/p^1 + 2 σ + O( 1/p^3/2) )
≪exp( ∑_p ≤ x|p^-σ - it-1|^2/p) ( log y/log x). 2.24
To bound the first term, we use the fact that cos x ≥ 1-x^2 for all x ∈ℝ, giving
|p^-σ - it -1 |^2 = p^-2σ - 2p^-σcos (t log p) + 1
≤ p^-2σ - 2p^-σ + 1 + 2p^-σ t^2 (log p)^2
≤ (p^-σ-1)^2 + 2p^-σ t^2 (log p)^2
≤σ^2 (log p)^2 + 2t^2 (log p)^2,
where on the last line we have used the fact that |e^-x - 1| ≤ x for x > 0. Inserting this into (<ref>) gives
| F_y (1/2 + σ + it)/F_x (1/2)|^2 ≪exp( ∑_p ≤ xσ^2 (log p)^2 + 2t^2 (log p)^2/p) ( log y/log x)
≪exp( C (σ^2 + 2t^2)(log x)^2 ) ( log y/log x),
using the fact that ∑_p ≤ x (log p)^2/p ≤ C (log x)^2 for some C>0 to obtain the last line. The desired result (upon exchanging 2C for C) follows by noting that σlog x ≤σlog y ≤ 1.
Equipped with this result, we apply the union bound and Markov's inequality with first moments to show that each of the events in (<ref>) have probabilities that are summable. For the first event, this gives
(I_1,k^c) ≤∑_X̃_k-1≤ X_l-1 < X̃_k∑_j=1^J log y_j-1/l^4∫_-1/log y_j-1^1/ log y_j-1|F_y_j-1(1/2 + 1/log X_l + it)/F_y_j-1 (1/2)|^2 dt ,
Now, by Euler Product Result <ref>, for some absolute constant C>0, we have
(I_1,k^c) ≤∑_X̃_k-1≤ X_l-1 < X̃_k∑_j=1^J log y_j-1/l^4∫_-1/log y_j-1^1/ log y_j-1exp(C t^2 (log y_j-1)^2) dt
≪∑_X̃_k-1≤ X_l-1 < X̃_k∑_j=1^J 1/l^4≪∑_X̃_k-1≤ X_l-1 < X̃_kρ^k logρ^k/l^4≪k/ρ^2k,
where in the second inequality we have used the fact that the integrand is bounded. Therefore ℙ(I_1,k^c) is summable. The probability of the second event, ℙ(I_2,k^c), can be handled almost identically. To show that ℙ(I_3,k^c) is summable, we note that 𝔼|F_y_j-1 (1/2 + 1 / log X_l + it)|^2 ≪log y_j-1 (this is a fairly straightforward calculation and follows from Euler Product Result 1 of <cit.>), and one can then apply an identical strategy to the above. Note that we can apply Fubini's Theorem in this case, since the integral is absolutely convergent.
Therefore we have verified the assumptions of Proposition <ref>, completing the proof of the upper bound, Theorem <ref>.
§ LOWER BOUND
In this section, we give a proof of Theorem <ref>. We shall prove that for any ε > 0,
ℙ( max_t ∈ [T_k-1,T_k] |M_f (t)|^2 ≥exp(2(1 - ε)√(log_2 T_k log_4 T_k)) i.o.) = 1, 3.01
for some intervals (T_k), from which Theorem <ref> follows.
Fix ε>0 and assume that it is sufficiently small throughout the argument, and that k is sufficiently large. Implied constants from ≪ or “Big Oh” notation will depend on ε, unless stated otherwise. We take T_k = exp(exp(λ^k)), for some fixed λ > 1 (depending only on ε) chosen later. These intervals are of a similar shape to the intervals X̃_k in the upper bound; however, here we will take λ to be very large. Doing this allows for the use of the second Borel–Cantelli lemma, since the terms we obtain, ∑_p ≤ T_k f(p)/√(p), will be controlled by the independent sums ∑_T_k-1 < p ≤ T_k f(p)/√(p). This is an approach taken in many standard proofs of the lower bound in the law of the iterated logarithm (see, for example, section 3.9 of Varadhan <cit.>).
Since ∫_T_k-1^T_k 1/t dt ≤log T_k, we have
max_t ∈ [T_k-1,T_k] |M_f (t)|^2 ≥1/log T_k∫_T_k-1^T_k|M_f(t)|^2/t^1 + 2 loglog T_k / log T_k dt, 3.02
where the 2 loglog T_k / log T_k term has been introduced to allow use of Harmonic Analysis Result <ref> at little cost, similarly to (<ref>), whilst being sufficiently large so that we can complete the upper range of the integral without compromising our lower bound.
We now complete the range of the integral so that it runs from 1 to infinity. For the lower range, by Theorem <ref>, we almost surely have, say,
1/log T_k∫_1^T_k-1|M_f(t)|^2/t^1 + 2 loglog T_k / log T_k dt ≪exp(3 √(log_2 T_k-1log_4 T_k-1)). 3.03
Whereas for the upper integral, we almost surely have
∫_T_k^∞|∑_n ≤ t
n T_k-smoothf(n)/√(n)|^2/t^1 + 2 loglog T_k / log T_k dt ≤ 1 ,
3.04
for sufficiently large k. This follows from the first Borel–Cantelli lemma, since Markov's inequality followed by Fubini's Theorem gives
( ∫_T_k^∞|∑_n ≤ t
n T_k-smoothf(n)/√(n)|^2/t^1 + 2 loglog T_k / log T_k dt > 1 ) ≤∫_T_k^∞|∑_n ≤ t
n T_k-smoothf(n)/√(n)|^2/t^1 + 2 loglog T_k / log T_k dt
≪∫_T_k^∞log T_k/t^1 + 2 loglog T_k / log T_k dt≪1/loglog T_k = 1/λ^k,
which is summable. Now combining (<ref>), (<ref>) and (<ref>) we have that almost surely, for large k,
max_t ∈ [T_k-1,T_k] |M_f (t)|^2 ≥1/log T_k∫_1^∞|∑_n ≤ t
n T_k - smoothf(n)/√(n)|^2/t^1 + 2 loglog T_k / log T_k dt - Cexp(3 √(log_2 T_k-1log_4 T_k-1)), 3.05
for some constant C>0. We proceed by trying to lower bound the first term on the right hand side of this equation. By Harmonic Analysis Result <ref>, we have
1/log T_k∫_1^∞|∑_n ≤ t
n T_k - smoothf(n)/√(n)|^2/t^1 + 2 loglog T_k / log T_k dt = 1/2 πlog T_k∫_-∞^∞| F_T_k(1/2 + loglog T_k / log T_k + it)/loglog T_k / log T_k + i t|^2 dt
≥(1 + o(1))log T_k/2 π (loglog T_k)^2∫_-1/2log T_k^1/2log T_k| F_T_k(1/2 + loglog T_k/log T_k + it) |^2 dt .
This last term on the right hand side is equal to
1+o(1)/2 π (loglog T_k)^2∫_-1/2log T_k^1/2log T_kexp(2 log| F_T_k(1/2 + loglog T_k/log T_k + it) |) log T_k dt.
Note that log T_k dt is a probability measure on the interval that we are integrating over. Since the exponential function is convex, we can apply Jensen's inequality as in the work of <cit.>, section 6, (see also <cit.>, section 4) to obtain the following lower bound for the first term on the right hand side of (<ref>)
1+o(1)/2 π (loglog T_k)^2exp( ∫_-1/2log T_k^1/2log T_k 2 log| F_T_k(1/2 + loglog T_k/log T_k + it) | log T_k dt )
= 1+o(1)/2 π (loglog T_k)^2exp( ∫_-1/2log T_k^1/2log T_k -2 ∑_p ≤ T_klog( 1 - f(p)/p^1/2 + loglog T_k / log T_k + it) log T_k dt )
= 1+o(1)/2 π (loglog T_k)^2exp( 2 log T_k ∑_p ≤ T_k∫_-1/2log T_k^1/2log T_k f(p)/p^1/2 + σ_k + it + f(p)^2/2p^1 + 2 σ_k + 2it + O ( 1/p^3/2) dt ),
where σ_k = loglog T_k / log T_k. Since 1/p^3/2 is summable over primes, this term can be bounded below by
c'/(loglog T_k)^2exp( 2 log T_k ∑_p ≤ T_k∫_-1/2log T_k^1/2log T_k f(p) /p^1/2 + σ_k + it + f(p)^2/2p^1 + 2 σ_k + 2it dt ),
for some constant c' > 0. The argument of the exponential is very similar to ∑_p ≤ T_k f(p)/p^1/2, which puts us in good stead for the law of the iterated logarithm.
Note that
∫_-1/2log T_k^1/2log T_k p^-it dt = 2 sin( log p/2log T_k)/log p, and ∫_-1/2log T_k^1/2log T_k p^-2it dt = 1/log T_k + O ( (log p)^2/(log T_k)^3).
Therefore, we get a lower bound for the first term on the right hand side of (<ref>) of
c'/(loglog T_k)^2exp( 2 log T_k ∑_p ≤ T_k( 2 f(p) sin(log p/2 log T_k)/p^1/2 + σ_klog p + f(p)^2 /2p^1 + 2 σ_klog T_k + O ( (log p)^2/p( log T_k)^3) ) )
≥c”/(loglog T_k)^2exp( 2 ∑_p ≤ T_k( 2 f(p) (log T_k) sin(log p/2 log T_k)/p^1/2 + σ_klog p + f(p)^2/2p^1 + 2 σ_k) ) , 3.06
for some constant c” > 0, where we have used the fact that ∑_p ≤ T_k (log p)^2/p ≪ (log T_k)^2.
To prove (<ref>), it suffices to prove that
(∑_p ≤ T_k2 f(p) (log T_k) sin(log p/2 log T_k)/p^1/2 + σ_klog p + f(p)^2 /2p^1 + 2 σ_k≥(1-ε/3)√(log_2 T_k log_4 T_k) i.o.) = 1, 3.07
since, if this were true, it would follow from (<ref>) and (<ref>) that almost surely,
max_t ∈ [T_k-1,T_k] |M_f (t)|^2/exp(2(1 - ε)√(log_2 T_k log_4 T_k))≥c”exp(4ε/3 √(log_2 T_k log_4 T_k))/2 (loglog T_k)^2+ o ( 1 )
infinitely often, and for any λ > 1, the right hand side is larger than 1 for large k.
Therefore, to complete the proof, we just need to show that (<ref>) holds. This follows from a fairly straightforward application of the Berry-Esseen Theorem and the second Borel–Cantelli lemma, as in the proof of the law of the iterated logarithm in section 3.9 of Varadhan <cit.>. We first analyse the independent sums over p in the disjoint ranges (T_k-1, T_k], which will control the sum in (<ref>) when λ is large.
[Berry-Esseen Theorem, Theorem 7.6.2 of <cit.>]
Let X_1, X_2,... be independent random variables with zero mean and let S_n = X_1 + ... + X_n. Suppose that γ_k^3 = 𝔼|X_k|^3 < ∞ for all k, and set σ_k^2 = Var[ X_k], s_n^2 = ∑_k=1^n σ_k^2, and β_n^3 = ∑_k=1^n γ_k^3. Then
sup_x ∈ℝ| ℙ(S_n > x s_n) - 1/√(2π)∫_x^∞ e^-t^2/2 dt | ≤ C β_n^3/s_n^3,
for some absolute constant C>0.
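As a quick illustration of the Berry-Esseen bound (toy choices only: i.i.d. Uniform[-1,1] summands, unrelated to the prime sums below), one can compare the empirical tail with the Gaussian tail and with the ratio β_n^3/s_n^3.
    import math
    import random

    n, trials, x = 50, 50000, 1.0
    s_n = math.sqrt(n / 3.0)                 # sd of the sum of n Uniform[-1,1] variables
    beta_n3 = n * 0.25                       # E|X|^3 = 1/4 for a Uniform[-1,1] variable
    hits = 0
    for _ in range(trials):
        total = sum(random.uniform(-1.0, 1.0) for _ in range(n))
        hits += total > x * s_n
    gauss_tail = 0.5 * math.erfc(x / math.sqrt(2.0))
    print(abs(hits / trials - gauss_tail), beta_n3 / s_n ** 3)
    # the first number is at most a constant times the second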
If we take
x = (1 - ε/2) (log_2 T_k log_4 T_k/∑_T_k-1 < p ≤ T_k1/2 p^1 + 2 σ_k( 2 log T_k/log p)^2 sin^2 ( log p/2 log T_k) + 1/8p^2 + 4 σ_k)^1/2 , 3.08
then, since the denominator in the parenthesis is the variance of our sum, for some constant C̃>0 independent of k, we have
(∑_T_k-1 < p ≤ T_k 2 f(p) (log T_k) sin(log p/2 log T_k)/p^1/2 + σ_klog p + f(p)^2/2p^1 + 2 σ_k≥ (1-ε/2)√(log_2 T_k log_4 T_k))
≥1/√(2π)∫_x^∞ e^-t^2/2 dt - C̃/(∑_T_k-1 < p ≤ T_k1/2 p^1 + 2 σ_k( 2 log T_k/log p)^2 sin^2 ( log p/2 log T_k) + 1/8p^2 + 4 σ_k)^3/2 . 3.09
Here we have used the fact that the sums over third moments of our summand are uniformly bounded regardless of k, giving a bound of size C̃ for the β_n terms in the Theorem.
To prove (<ref>), it is sufficient to show that the right hand side of (<ref>) is not summable in k. The result will then follow by the second Borel–Cantelli lemma, and a short argument used to complete the lower range of the sum. Note that the second Borel–Cantelli lemma is applicable since our events are independent for distinct values of k. To proceed, it will be helpful to lower bound the sums of the variances,
∑_T_k-1 < p ≤ T_k( 1/2 p^1 + 2 σ_k( 2 log T_k/log p)^2 sin^2 ( log p/2 log T_k) + 1/8p^2 + 4 σ_k).
By shortening the sum and noting that 1/u^2sin^2 u ≥ 1 - ε/4 for u sufficiently small, when k is large we have the lower bound
∑_T_k-1 < p ≤ (1 - ε/4)^-1 / 2 σ_k1/2 p^1 + 2 σ_k( 2 log T_k/log p )^2 sin^2 ( log p/2 log T_k) ≥( 1 - ε/4 ) ∑_T_k-1 < p ≤ (1 - ε/4)^-1 / 2 σ_k1/2 p^1 + 2 σ_k
≥( 1 - ε/2 ) ∑_T_k-1 < p ≤ (1 - ε/4)^-1 / 2 σ_k1/2p
≥1 - ε/2/2loglog T_k + O ( loglog T_k-1),
recalling that σ_k = loglog T_k / log T_k. Since loglog T_k = λ^k, this lower bound implies that the second term on the right hand side of (<ref>) is summable. Therefore, we just need to show that the first term on the right hand side is not. By standard estimates, we have 1/√(2π)∫_u^∞ e^-t^2/2 du ≫1/u e^-u^2/2 for all u ≥ 1. Since the above lower bound gives an upper bound for x from (<ref>), we find that
1/√(2π)∫_x^∞ e^-t^2/2 dt ≫1/log_4 T_kexp( -(1 - ε/2)^2 log_2 T_k log_4 T_k/2∑_T_k-1 < p ≤ T_k1/2 p^1 + 2 σ_k( 2 log T_k/log p)^2 sin^2 ( log p/2 log T_k) + 1/8p^2 + 4 σ_k)
≫1/log (k logλ)exp( -(1 - ε/2) log_2 T_k log_4 T_k/log_2 T_k + O( log_2 T_k-1))
≫1/log (k logλ)exp( - (1 - ε/2)log (k logλ)/1 + O( 1/λ)),
where all implied constants depend at most on ε. Here we have used the fact that T_k = exp(exp(λ^k)). Taking λ sufficiently large in terms of ε, we have
1/√(2π)∫_x^∞ e^-t^2/2 dt ≫1/k^1 - ε/4,
which is not summable over k. This proves that we almost surely have
∑_T_k-1 < p ≤ T_k2 f(p) (log T_k) sin(log p/2 log T_k)/p^1/2 + σ_klog p + f(p)^2/2p^1 + 2 σ_k≥(1-ε/2)√(log_2 T_k log_4 T_k),
infinitely often. The statement (<ref>) then follows by noting that we can complete the above sum to the whole range p ≤ T_k, since one can apply Probability Result <ref> very similarly to subsection <ref> to show that almost surely, for large k,
∑_p ≤ T_k-12 f(p) (log T_k) sin(log p/2 log T_k)/p^1/2 + σ_klog p + f(p)^2/2p^1 + 2 σ_k≤ε/6 √(log_2 T_k log_4 T_k),
when λ is sufficiently large in terms of ε. This allows us to deduce that almost surely,
∑_p ≤ T_k2 f(p) (log T_k) sin(log p/2 log T_k)/p^1/2 + σ_klog p + f(p)^2 /2p^1 + 2 σ_k≥ ( 1 - ε/3) √(log_2 T_k log_4 T_k),
infinitely often, if λ is taken to be sufficiently large in terms of ε. Therefore, (<ref>) holds, completing the proof of Theorem <ref>.
§.§.§ Acknowledgements
The author would like to thank his supervisor, Adam Harper, for the suggestion of this problem, for many useful discussions, and for carefully reading an earlier version of this paper.
|
http://arxiv.org/abs/2307.03202v1
|
20230705180003
|
Do old globular clusters in low mass galaxies disprove modified gravity?
|
[
"Michal Bílek",
"Hongsheng Zhao",
"Benoit Famaey",
"Srikanth T. Nagesh",
"Françoise Combes",
"Oliver Müller",
"Michael Hilker",
"Pavel Kroupa",
"Rodrigo Ibata"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Proceedings of IAU Symposium 379
P. Bonifacio, M.-R. Cioni, F. Hammer, M. Pawlowski, and S. Taibi, eds.
^1 LERMA, Observatoire de Paris, CNRS, PSL Univ., Sorbonne Univ., 75014 Paris, France
^2 Collège de France, 11 place Marcelin Berthelot, 75005 Paris, France
^3 Université de Strasbourg, CNRS, Observatoire astronomique de Strasbourg, UMR 7550, F-67000 Strasbourg, France
^4 Scottish Universities Physics Alliance, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS, UK
^5 Institute of Physics, Laboratory of Astrophysics, Ecole Polytechnique Fèdèrale de Lausanne (EPFL), 1290 Sauverny, Switzerland
^6 European Southern Observatory, Karl-Schwarzschild-Strasse 2, 85748 Garching bei München, Germany
^7Helmholtz-Institut für Strahlen- und Kernphysik (HISKP), Universität Bonn, Nußallee 14-16, D-53115 Bonn, Germany
^8Astronomical Institute, Faculty of Mathematics and Physics, Charles University in Prague, V Holešovičkách 2, CZ-18000 Praha, Czech Republic
The controversy “dark matter vs. modified gravity” constitutes a major topic of discussion. It has been proposed that dynamical friction could be used to discriminate between the two alternatives. Analytic calculations indicate that, with modified gravity, globular clusters (GCs) of low-mass galaxies experience much stronger dynamical friction than in the equivalent system with Newtonian gravity and dark matter. As a result, in modified gravity the old GCs of low-mass galaxies should have already settled in the centers of their galaxies. This is not observed. Here we report on our efforts to verify the analytic results with self-consistent simulations in MOND-type (modified Newtonian dynamics) gravity. The core stalling mechanism, which was not considered in the analytic calculations, prevents GCs from settling in the centers of ultra-diffuse galaxies. For isolated dwarf galaxies, which are gas-rich objects, supernova explosions prevent the GCs from settling.
Dwarf galaxies, globular clusters, dynamical friction, modified gravity
Do old globular clusters in low-mass galaxies disprove modified gravity?
Bílek M.^1,2,3, Zhao H.^4, Famaey B.^3, Nagesh S. T. ^3, Combes F.^1,2, Müller O.^5, Hilker M.^6, Kroupa P.^7,8, Ibata R.^3
§ INTRODUCTION
The nature of gravity and inertia is not yet clear. A generally accepted theory of quantum gravity is missing. There are indications that our imperfect knowledge of the laws of gravity or inertia is the reason why we encounter the missing mass problem. The strongest indication for this is the fact that the dynamical properties of most galaxies can be predicted from the distribution of their visible material <cit.> using the prescriptions of modified Newtonian dynamics (MOND, ), a paradigm of modified gravity and/or inertia. This indicates that if gravity or inertia indeed need to be updated, then the correct theory has to follow MOND in certain regimes. Nevertheless, alternative explanations of the missing mass problem exist, most notably the hypothesis of dark matter.
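To make the MOND phenomenology mentioned here concrete, the sketch below evaluates the MOND circular velocity of a toy point mass using the commonly adopted acceleration scale a_0 ≈ 1.2×10^-10 m s^-2 and the “simple” interpolation function; the mass and radii are arbitrary illustrative values, not a model of any galaxy discussed in this contribution.
    import math

    G = 6.674e-11            # m^3 kg^-1 s^-2
    A0 = 1.2e-10             # m s^-2, the usual MOND acceleration scale
    M_SUN = 1.989e30
    KPC = 3.086e19           # m

    def mond_acceleration(m_baryon_kg, r_m):
        # point-mass MOND acceleration with the 'simple' interpolation function;
        # only a toy illustration -- real models use the full baryon distribution
        g_newton = G * m_baryon_kg / r_m ** 2
        return g_newton * (0.5 + math.sqrt(0.25 + A0 / g_newton))

    m = 2e8 * M_SUN                       # a dwarf-like baryonic mass (illustrative)
    for r_kpc in (1.0, 2.0, 5.0, 10.0):
        g = mond_acceleration(m, r_kpc * KPC)
        v_kms = math.sqrt(g * r_kpc * KPC) / 1e3
        print(r_kpc, round(v_kms, 1))     # the circular velocity flattens at large r (deep-MOND regime)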
One can test these hypotheses using dynamical friction <cit.>. It is a force acting against the direction of the relative motion of two interacting N-body systems. It arises because the relative orbital energy of the systems is transferred into the internal energy of their constituents, that is, stars, gas, or dark matter particles.
We are still rather at the beginning of the investigation of dynamical friction in MOND. Its exact functioning depends to an unknown degree on the particular MOND theory. We thus consider hereafter only the MOND modified gravity theories of <cit.> and <cit.>. Simulations have shown that for interactions of two objects with similar masses (such as a major merger of galaxies), dynamical friction turns out to be weaker in MOND than in an equivalent Newtonian system with collisionless dark matter <cit.>. On the other hand, analytic calculations indicate that once the two interacting objects have a large mass ratio, dynamical friction is stronger in the MOND system than in the equivalent Newtonian system. The enhancement of dynamical friction is generally larger for objects with a lower surface brightness. According to the analytic calculations, the strong dynamical friction makes globular clusters (GCs) of low-mass galaxies spiral down to the centers of their hosts on a timescale of a few Gyr. This is at odds with the observed ages of many Gyr for the GCs in such galaxies. We decided to verify the analytic results with high-resolution simulations.
§ CASE OF ISOLATED ULTRA-DIFFUSE GALAXIES
Ultra-diffuse galaxies (UDGs) are characterized by the masses of dwarf galaxies and the sizes of giant galaxies. It was found that not only do they contain old GCs, but these GCs can be exceptionally massive, exceeding 10^6 M_⊙. This should make the settling of the GCs in the center of their host galaxy (i.e., the process known as “sinking”) even faster. We simulated the evolution of such objects in MOND in <cit.>.
We used the MOND version of the adaptive-mesh-refinement code RAMSES <cit.>. The simulated galaxy had an effective radius of 2 kpc and a stellar mass of 2×10^8 M_⊙, i.e., it resembled the UDG NGC 1052-DF2, which hosts very massive GCs. The galaxy was modeled as a spherical, isotropic N-body system, and the GCs as single massive particles. The maximum spatial resolution was 50 pc and the particles of the galaxy had 20 M_⊙ each. The galaxy was simulated in isolation.
We first let a GC of 10^6 M_⊙ fall onto the UDG from 5 kpc with zero relative velocity. The GC first experienced strong damping of its oscillation by dynamical friction. Nevertheless, the orbital decay of the GC suddenly slowed down once the GC moved within the inner 1 kpc of the galaxy. It is unlikely that the change was caused by insufficient resolution: the friction became less efficient when the size of the orbit was about ten times larger than the resolution, and the behavior did not change after increasing the resolution. The phenomenon is also known from simulations with Newtonian gravity under the name of core stalling. It occurs in galaxies with nearly harmonic gravitational potentials (ϕ(r)∝ r^2) near their centers. Dynamical friction becomes ineffective because stars have equal orbital periods in such a potential. This behavior is not captured by the analytic calculations that predicted the fast sinking of GCs of low-mass galaxies in MOND, because of the simplifying assumptions made during their derivation.
With Newtonian gravity, the Chandrasekhar formula is usually used for evaluating the strength of dynamical friction. Its analog for MOND was proposed by <cit.>. It is this formula that indicated the problem of the fast sinking of GCs of low-mass galaxies in MOND. We tested the validity of this formula. We repeated the simulation of the free fall of a GC, varying the mass of the GC. Then we explored the motion of a 10^6 M_⊙ GC for different orbital shapes. It turned out that the Sánchez-Salcedo formula works well unless the GC moves inside the inner 1 kpc of the galaxy, where the core stalling occurs. This is the same result as for the Chandrasekhar formula in Newtonian gravity. We found that the value of the Coulomb logarithm in the Sánchez-Salcedo formula varies with the orbital shape of the GC in the same qualitative way as in the Chandrasekhar formula <cit.>.
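For orientation, the following sketch evaluates the classical Newtonian Chandrasekhar dynamical-friction deceleration for a GC-like perturber moving through a Maxwellian stellar background. This is only the Newtonian analogue referred to above, not the Sánchez-Salcedo MOND formula, and the density, velocity dispersion, speed and Coulomb logarithm are assumed, purely illustrative values.
    import math

    G = 6.674e-11                    # SI units
    M_SUN = 1.989e30
    PC = 3.086e16

    def chandrasekhar_deceleration(m_gc_kg, v_ms, rho_kg_m3, sigma_ms, ln_lambda=5.0):
        # magnitude of the classical Chandrasekhar dynamical-friction deceleration for a
        # massive body in an isotropic Maxwellian background (Newtonian analogue only)
        x = v_ms / (math.sqrt(2.0) * sigma_ms)
        maxwell = math.erf(x) - 2.0 * x / math.sqrt(math.pi) * math.exp(-x * x)
        return 4.0 * math.pi * G ** 2 * m_gc_kg * rho_kg_m3 * ln_lambda * maxwell / v_ms ** 2

    m_gc = 1e6 * M_SUN                           # a massive GC, as in the UDG case above
    rho = 0.01 * M_SUN / PC ** 3                 # assumed low stellar density
    sigma = 10e3                                 # assumed 10 km/s velocity dispersion
    v = 15e3                                     # assumed 15 km/s orbital speed
    a = chandrasekhar_deceleration(m_gc, v, rho, sigma)
    t_gyr = v / a / 3.15e16                      # very crude v/|dv/dt| timescale, in Gyr
    print(a, t_gyr)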
We then simulated the evolution of a system of ten GCs. The GCs were distributed in the simulated galaxy in a similar fashion to what is observed in the DF2 galaxy. We considered two types of GC mass functions – the standard one, observed for most galaxies, and the mass function seen in the galaxy DF2, which is biased toward high GC masses. The system was evolved for a Hubble time. It turned out that the outer GCs are little affected by dynamical friction. Only the inner ones approached the galaxy center, but full sinking was prevented by core stalling. As a result, the GC system became more centrally concentrated. This evolution was stronger for the DF2-like GC mass function. We predicted the profiles of surface number density that should be observed in isolated UDGs. It should be noted, however, that the DF2-like GC mass function has so far been reported only for the galaxy DF2 itself, for which our simulations do not apply because it is not isolated – it is exposed to a strong external field effect from a neighboring massive elliptical <cit.>. For such a galaxy, the Newtonian simulations of <cit.> are more relevant.
We then utilized symmetries of the equations governing the motion of the GCs to scale our simulation to UDGs of different masses and sizes. This allowed us to verify, for nearly all observed UDGs, that the presence of GCs in them does not exclude MOND. Finally, we found that GCs can evaporate through tidal heating during their encounters.
§ CASE OF ISOLATED DWARF GALAXIES
In an ongoing project, we study the survival of GCs in isolated dwarf galaxies. These galaxies are gas-rich rotating objects. The analytic estimates indicate that the GCs of the least massive dwarfs sink the fastest. We reviewed the literature on observations of GCs of isolated dwarfs and found that the least massive isolated dwarfs known to host GC candidates have baryonic masses of about 2×10^7 M_⊙, and that GCs are common in galaxies above about 10^8 M_⊙.
We simulated the motion of GCs in a galaxy with a mass of 10^8 M_⊙. The dwarf was initialized with 90% of its mass in gas, representing the likely state 10 Gyr ago, when the old GCs formed. The scale length of both the gas and stellar disks was 2 kpc. The GCs were again simulated as point masses. The grid was refined whenever the mass in a cell exceeded 4×10^4 M_⊙, down to a minimum cell size of 0.5 pc.
We first explored the motion of a GC of a relatively typical mass of 10^5 M_⊙ in simulations without star formation and supernova feedback. If the GC initially moves inside the plane of the disk, either on a circular co-rotating orbit or a radial orbit, the GC approaches the center of the galaxy in about 1 Gyr. For a counter-rotating, polar or axial orbit, the GC does not sink within 10 Gyr. A GC of 10^4 M_⊙ does not show substantial sinking on any orbit within 10 Gyr.
We then included star formation and supernova feedback. We explored a wide range of star formation efficiencies and of parameters of the supernova models included in RAMSES. All simulations showed the same: supernovae prevent the GC from sinking regardless of the initial direction of motion of the GC. In MOND, where most of the mass of the dwarf is in gas, supernova explosions lead to large fluctuations of gravitational potential (Fig. <ref>). The fluctuations of the gravitational field give random kicks to the GC, which prevent it from settling at the galaxy center (Fig. <ref>). Supernova explosions can even change the sense of rotation of the GC around the center of the galaxy. With Newtonian gravity, this mechanism would be much reduced since dark matter, the dominant component, does not directly react to the supernovae.
iaulike
[Bekenstein & Milgrom1984]BM84 Bekenstein J., Milgrom M., 1984, ApJ, 286, 7.
[Bílek et al.2019]bil19 Bílek M., Samurović S., Renaud F., 2019, A&A, 625, A32.
[Bílek et al.2021]bil21 Bílek M., Zhao H., Famaey B., Müller O., Kroupa P., Ibata R., 2021, A&A, 653, A170.
[Chan et al.1997]chan97 Chan R., Mamon G. A., Gerbal D., 1997, ApL&C, 36, 47
[Ciotti & Binney2004]ciotti04 Ciotti L., Binney J., 2004, MNRAS, 351, 285.
[Dutta Chowdhury, van den Bosch, & van Dokkum2020]dutta20 Dutta Chowdhury D., van den Bosch F. C., van Dokkum P., 2020, ApJ, 903, 149. doi:10.3847/1538-4357/abb947
[Famaey, McGaugh, & Milgrom2018]famaey18 Famaey B., McGaugh S., Milgrom M., 2018, MNRAS, 480, 473.
[Haghi et al.2019]haghi19 Haghi H., Kroupa P., Banik I., Wu X., Zonoozi A. H., Javanmardi B., Ghari A., et al., 2019, MNRAS, 487, 2441.
[Kroupa2015]Kroupa2015 Kroupa P., 2015, CaJPh, 93, 169.
[Lelli et al.2017]lelli17 Lelli F., McGaugh S. S., Schombert J. M., Pawlowski M. S., 2017, ApJ, 836, 152.
[Lüghausen et al.2015]por Lüghausen F., Famaey B., Kroupa P., 2015, CaJPh, 93, 232.
[McGaugh et al.2016]mcgaugh16 McGaugh S. S., Lelli F., Schombert J. M., 2016, PhRvL, 117, 201101.
[Milgrom (1983)]milg83a Milgrom, M., 1983, ApJ 270, 365.
[Milgrom (2010)]qumond Milgrom, M., 2010, MNRAS, 403, 886.
[Milgrom (2019)]milg19 Milgrom, M., 2019, PRD 99, 044041.
[Sánchez-Salcedo et al.2006]sanchezsalcedo06 Sánchez-Salcedo F. J., Reyes-Iturbide J., Hernandez X., 2006, MNRAS, 370, 1829.
[Teyssier2002]ramses Teyssier R., 2002, A&A, 385, 337.
|
http://arxiv.org/abs/2307.03381v1
|
20230707043331
|
Teaching Arithmetic to Small Transformers
|
[
"Nayoung Lee",
"Kartik Sreenivasan",
"Jason D. Lee",
"Kangwook Lee",
"Dimitris Papailiopoulos"
] |
cs.LG
|
[
"cs.LG"
] |
Large language models like GPT-4 exhibit emergent capabilities across general-purpose tasks, such as basic arithmetic, when trained on extensive text data, even though these tasks are not explicitly encoded by the unsupervised, next-token prediction objective. This study investigates how small transformers, trained from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token prediction objective.
We first demonstrate that conventional training data is not the most effective for arithmetic learning, and simple formatting changes can significantly improve accuracy. This leads to sharp phase transitions as a function of training data scale, which, in some cases, can be explained through connections to low-rank matrix completion. Building on prior work, we then train on chain-of-thought style data that includes intermediate step results. Even in the complete absence of pretraining, this approach significantly and simultaneously improves accuracy, sample complexity, and convergence speed.
We also study the interplay between arithmetic and text data during training and examine the effects of few-shot prompting, pretraining, and model scale. Additionally, we discuss length generalization challenges. Our work highlights the importance of high-quality, instructive data that considers the particular characteristics of the next-word prediction objective for rapidly eliciting arithmetic capabilities.[Our code is available at <https://github.com/lee-ny/teaching_arithmetic>]
§ INTRODUCTION
Large language models like GPT-3/4, PaLM, LaMDA <cit.> have demonstrated general-purpose properties, often referred to as emergent abilities <cit.>, for a wide range of downstream tasks like language and code translation, compositional reasoning, and basic arithmetic operations <cit.>.
What is perhaps surprising is that these tasks are not explicitly encoded in the model's training objective, which is typically an auto-regressive, next-token-prediction loss.
Prior research has explored these capabilities and how they emerge as the scale of training compute, the type of data, and the model size vary <cit.>. Untangling these factors, however, remains challenging due to the complexity of the data and the variety of tasks examined.
Driven by the curiosity to understand the factors that elicit these capabilities in next-token predictors, we set out to pinpoint the key contributors that accelerate the emergence of such abilities. These contributors may include the format and scale of data, model scale, the presence of pre-training, and the manner of prompting.
To provide a more precise examination of these factors, our study is conducted in a controlled setting: we focus on teaching arithmetic to small transformer models, such as NanoGPT and GPT-2, when trained from random initialization. Starting with a model of 10.6 million parameters and scaling up to 124 million parameters, we use the standard autoregressive next-token prediction loss. Our objective is to understand how these models can efficiently learn basic arithmetic operations like addition, subtraction, multiplication, square root, and sine, thereby providing us with a clearer lens through which to view the elicitation of emergent abilities. Below, we summarize our findings.
Data format and sampling matters. We first observe that teaching a model addition (or any other operation) using standard addition samples, `𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1= 𝖢_3𝖢_2𝖢_1', is suboptimal, as it requires the model to evaluate the most significant digit 𝖢_3 of the result first, which depends globally on all the digits of the two summands. By training on samples with reversed results, `𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1= 𝖢_1𝖢_2𝖢_3', we enable the model to learn a simpler function, significantly improving sample complexity. Additionally, balanced sampling of different “variations” of addition, based on the number of carries and digits involved, further enhances learning. Even in this simple setting, we observe relatively sharp phase transitions from 0 to 100% accuracy as a function of the size of the training data. Although this may seem surprising, we observe that learning an addition map on n digits from random samples is equivalent to completing a low-rank matrix. This connection allows us to offer a reasonable explanation for such phase transitions.
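The difference between the plain and reverse formats can be made concrete with a short data-generation sketch; this is an illustrative reconstruction consistent with the examples in Figure <ref>, not the exact generation script used in our experiments.
    import random

    def format_plain(a, b):
        return f"{a}+{b}={a + b}"

    def format_reverse(a, b):
        # output digits are written least-significant first; '$' delimits each sample
        return f"${a}+{b}={str(a + b)[::-1]}$"

    random.seed(0)
    for _ in range(3):
        a, b = random.randint(0, 999), random.randint(0, 999)
        print(format_plain(a, b), "|", format_reverse(a, b))
    # for example, 128 + 367 gives: 128+367=495 | $128+367=594$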
Chain-of-thought data during training.
Building on these findings, we then explore the potential benefits of chain-of-thought (CoT) data during training. This format includes step-by-step operations and intermediate results, allowing the model to learn the individual components of complex tasks. This format is directly borrowed from related literature, e.g., <cit.>. We found that CoT-type training data significantly improved learning in terms of both sample complexity and accuracy in agreement with CoT fine-tuning literature <cit.>, though our observation holds even in the absence of language pretraining.
We conjecture that this is because breaking down the required compositional function to be learned into individual components allows the model to learn a higher-dimensional but easier-to-learn function map. In Figure <ref>, we provide examples of the four data formatting methods explored in our work.
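For concreteness, a sketch of how simplified-scratchpad samples of the kind shown in Figure <ref> can be generated is given below; the exact whitespace and token conventions of the released code may differ.
    def simplified_scratchpad(a, b):
        # emit per-digit sum (A) and carry (C) from least to most significant digit
        lines = [f"Input: {a}+{b}", "Target:"]
        da, db = str(a)[::-1], str(b)[::-1]
        carry = 0
        for i in range(max(len(da), len(db))):
            s = int(da[i] if i < len(da) else 0) + int(db[i] if i < len(db) else 0) + carry
            carry = s // 10
            lines.append(f"A->{s % 10} , C->{carry}")
        lines.append(str(a + b))
        return "\n".join(lines)

    print(simplified_scratchpad(128, 367))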
Training on text and arithmetic mixtures and the role of few-shot prompting. We also explore the interplay between arithmetic and text data during training, as LLMs are trained on massive amounts of data scraped from the internet <cit.>, where it is impractical to carefully separate different types of data. We observe how the model's perplexity and accuracy vary with the ratio of text to arithmetic data. We find that learning all arithmetic operations discussed earlier (from addition to square root) can improve the individual performance of each task, and that going from zero-shot to 1-shot prompting (showing one arithmetic example) yields a large accuracy improvement, but there is no significant improvement in accuracy by showing more examples.
The role of pre-training and model scale.
We also investigate the role of pretraining by fine-tuning models like GPT-2 and GPT-3 () and observe that while the zero-shot performance on arithmetic operations is poor, the prior “skills” acquired during pretraining facilitate reasonable performance on some basic arithmetic tasks even with a small number of finetuning samples.
However, finetuning with non-standard formatting, such as reverse formatting, can interfere with the model's performance when pretrained on standard-formatted operations, leading to decreased accuracy.
Finally, we conduct studies on how performance in arithmetic changes with scale, and although we find that scale does indeed aid in learning arithmetic operations, it is not a necessary trait.
Compositional and length generalization. One might question if our trained models truly grasp arithmetic. Our findings present a nuanced answer. We find length generalization beyond trained digit lengths difficult. For instance, if a model is trained on all n-digit lengths, excluding a specific length, it struggles to compensate and accurately calculate this missing digit length. Consequently, the models achieve high accuracy within trained digit lengths but struggle significantly beyond this range. This suggests that the models learn arithmetic not as a flexible algorithm, but more as a mapping function constrained to trained digit lengths. While this surpasses mere memorization, it falls short of comprehensive arithmetic “understanding”.
Novelty over prior work. Our approach heavily builds upon prior work that uses instructive data to enhance model performance, and we do not claim novelty in the style of training data employed. What sets our work apart is the primary focus on randomly initialized models and extensive ablation studies on various sampling/data formatting and model scale settings to isolate the factors that contribute to the fast emergence of arithmetic capabilities. Furthermore, our work offers a few simple but perhaps insightful theoretical justifications of some of the phenomena we observe.
§ RELATED WORKS
Instructional data/chain-of-thought. The idea of using detailed reasoning training data predates Transformers <cit.>. <cit.> use natural language to generate reasoning steps while <cit.> show that symbolic reasoning may suffice. <cit.> note that large number of samples with small digits is important for arithmetic tasks <cit.>. <cit.> observe a correlation between the frequency of numbers in the dataset and the performance involving them whereas we find that transformers can learn to add numbers that were not seen during training. Chain-of-thought <cit.> refers to the model's improved performance when prompted to produce rationale. <cit.> show that this can be achieved by providing sufficiently informative exemplars as a few-shot prompt <cit.>. <cit.> showed that least-to-most prompting can help GPT-3 solve problems that can be decomposed into simpler sub-problems. Least-to-most prompting consists of first decomposing a complex problem into easier subproblems, and then sequentially solving these subproblems. We extend this notion to simple addition and show that asking the model to output the least significant bit first has a similar effect. <cit.> shows that very often even just prompting the model with “let's think step by step” is sufficient to achieve competitive zero-shot accuracy on several benchmark datasets.
Arithmetic using Transformer models. Our work focuses on decoder-only models since they are well-suited for text generation and are widely used in LLMs <cit.>. However, encoder-decoder models have also been extensively studied in the literature in the context of learning arithmetic <cit.>. <cit.> explore techniques to improve the arithmetic abilities of pretrained LLMs. <cit.> on the other hand, focus on the impact of the learned embeddings. Most results that show Turing-completeness or the universal approximation typically rely on encoder models <cit.>. <cit.> study the problem of compositional generalization extensively on benchmark datasets such as SCAN <cit.> and conclude that design changes like relative position encoding <cit.> can improve performance significantly. <cit.> show that Transformers can learn linear algebra operations with carefully chosen encodings. <cit.> use mechanistic interpretability techniques to explain the limited numerical reasoning capabilities of GPT-2.
Beyond Transformers. While we focus our attention on GPT-like models, there is a rich literature studying other sequence-to-sequence models such as recurrent neural networks (RNNs) <cit.>. <cit.> show that RNNs can learn how to execute simple programs with for-loops provided they are trained with curriculum learning. <cit.> show that LSTMs achieve improved performance on text-based tasks such as translation when the source sentences are reversed, which is closely related to what we observe in addition. <cit.> propose Neural GPUs which outperform prior RNNs on binary arithmetic tasks and even show length generalization: they can perform arithmetic on inputs of lengths that were unseen during training. This is yet to be seen even in modern pre-trained models <cit.> and therefore it is interesting to see if we can leverage some of these techniques and apply them to existing modern architectures. <cit.> propose Universal Transformers (UTs) which introduce a recurrent transition function to apply recurrence over revisions of the vector representation at each position as opposed to the different positions in the input. They show that on the tasks from <cit.>, UTs outperform traditional Transformers and RNNs.
Data-centric AI. More recently, there has been increasing interest in Data-Centric AI which emphasizes techniques to improve datasets in order to ensure better performance <cit.>. <cit.> propose a new benchmark where the training code is fixed and the only way to improve performance is to construct new training sets. Several works have also tried to see if the model's reasoning ability can be leveraged to generate explanations and leverage it to solve complicated reasoning tasks <cit.>.
§ PRELIMINARIES AND EXPERIMENTAL SETUP
In this section, we provide a detailed description of our experimental setup, including the model architecture and an overview of the different data formatting and sampling techniques that we employ and evaluate.
Model and Data.
To examine the individual factors at play, we use NanoGPT <cit.>, a lightweight implementation of the GPT family of models, chosen primarily for its feasibility to train from random initialization under numerous settings. NanoGPT features a decoder-only transformer architecture with six self-attention layers, six heads, and an embedding dimension of 384, resulting in approximately 10.6 million parameters. Unless stated otherwise, we use character-level tokenization and absolute position encoding. We train the NanoGPT model from random initialization, which we refer to as training from `scratch', using the conventional next-token prediction objective.
To understand the effect of scale, we extend our experiments to GPT-2 and GPT-3 in Section <ref>.
We investigate teaching arithmetic from scratch as well as fine-tuning using a pretrained GPT-2. However, for GPT-3, we exclusively use supervised fine-tuning on a pretrained model. Refer to Appendix <ref> for a more detailed description.
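As a back-of-the-envelope check of the quoted parameter count, the standard 12·(number of layers)·(embedding dimension)^2 approximation for transformer blocks (ignoring embedding and layer-norm parameters, whose contribution depends on the vocabulary size) already reproduces the ~10.6M figure:
    n_layer, n_head, n_embd = 6, 6, 384        # configuration quoted in the text
    block_params = 12 * n_layer * n_embd ** 2  # ~4*d^2 attention + ~8*d^2 MLP per layer; n_head does not change the count
    print(block_params)                        # 10616832, i.e. roughly 10.6M parameters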
For arithmetic tasks like addition, subtraction, and multiplication, we define the training dataset for a binary operator f(·) as _train={(a_i, b_i), y_i}_i=1^N where y_i = f(a_i, b_i). For unary operations such as the sine and square root functions, the training dataset is formulated as _train={a_i, y_i}_i=1^N, where y_i = f(a_i). The test dataset _test is constructed by randomly sampling pairs of operands not included in _train.
Throughout training and inference, we apply different data formatting techniques on each data sample from the training dataset, creating the final sequence that serves as the model's input.
Data Formatting.
In the following sections, we will delve into the detailed intuition, and results of the four data formatting approaches that we have deployed in our arithmetic experiments. For this section, we provide a high-level summary of these approaches, each progressively incorporating additional information to form a more comprehensive format. The scratchpad formats are largely adopted from the literature of chain-of-thought (CoT) training <cit.>.
See Figure <ref> and Appendix <ref> for detailed examples.
Different data formatting methods for addition
Four input formatting methods used for the addition task:
(i) Plain: standard formatting of addition
(ii) Reverse: flips the order of the output and encapsulates each data sample with the `$' symbol at the start and end.
(iii) Simplified Scratchpad: provides carry and digit-sum information for each step of addition, from the LSB to the MSB.
(iv) Detailed Scratchpad: provides explicit details of intermediate steps of addition.
Plain
128+367=495
Reverse
128+367=594
Simplified Scratchpad
Input: 128+367
Target:
A->5 , C->1
A->9 , C->0
A->4 , C->0.
495
Detailed Scratchpad
Input:
128+367
Target:
<scratch>
[1,2,8] has 3 digits.
[3,6,7] has 3 digits.
[1,2,8] + [3,6,7] , C=0, 8+7+0=15, A->5, C->1
[1,2] + [3, 6] , A= [5], 2+6+1=9 , A->9, C->0
[1] + [3] , A= [9,5] , C=0 , 1+3+0=4 , A->4 , C->0
[] + [] , A= [4,9,5] , C=0 , END
</scratch>
4 9 5
Figure: The four input formatting methods used for the addition task. We progressively increase the amount of detail with each format.
We deviate from the strict definition of “most significant bit” (MSB) and “least significant bit” (LSB), typically associated with binary numbers, and reinterpret them for the purpose of this paper as the most significant “digit” and least significant “digit”, respectively.
Note that we wrap each data sample in the reverse format with the `$' symbol at the beginning and end as a delimiter. We originally observed improved performance in both the plain and reverse formats when the operands and outputs were zero-padded to a fixed length (3 and 4 digits, respectively, for 3-digit addition), but we later realized that a single symbol can effectively replace zero-padding. While we maintain the original plain format without padding as a baseline – emphasizing the necessity for improved data formatting for efficient emergence – we incorporate the `$'-encapsulation in our modified reverse format. For further details, refer to Appendix <ref>.
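The plain and reverse formats can be produced with a few lines of string manipulation; the sketch below follows the `$'-delimiter convention described above, with exact whitespace being our own choice.

def format_plain(a, b):
    # e.g. 128+367=495
    return f"{a}+{b}={a + b}"

def format_reverse(a, b):
    # e.g. $128+367=594$  (output digits written least-significant first)
    reversed_sum = str(a + b)[::-1]
    return f"${a}+{b}={reversed_sum}$"

print(format_plain(128, 367))    # 128+367=495
print(format_reverse(128, 367))  # $128+367=594$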
In Section <ref>, we explore the limitations of the conventional plain-format data and demonstrate how a simple reversal of the output order can lead to substantial performance improvements and enhanced sample efficiency. We introduce two Lemmas to support and explain these findings. Additionally, in Section <ref>, we present results on the simplified and detailed scratchpad formats, highlighting significant enhancements in sample efficiency for learning addition. We also emphasize the importance of carefully designing the intermediate steps in the detailed scratchpad method.
Structured Data Sampling.
While data formatting plays a crucial role, we also discover that selecting the appropriate samples for inclusion in the training data is essential. When sampling operands for n-digit addition uniformly at random between 1 and 10^n − 1, the dataset becomes highly skewed in terms of the number of samples with (i) operands containing a specific number of digits and (ii) operands resulting in a certain number of carry-on[In this paper, we adopt the definition that a carry-on operation involves transferring information from one digit position to another position of higher significance. Therefore, we refer to the “borrow” operation in subtraction as a carry operation.] operations. For instance, in the case of 3-digit addition, random sampling results in a meager 0.01% probability of selecting a 1-digit number. Additionally, 1 or 2 carry-on operations are more likely to occur than 0 or 3. To address this imbalance, we employ a structured sampling approach. Specifically, we aim to (i) balance digits by assigning higher weights to lower-digit numbers during the sampling process as in <cit.> and (ii) balance carry-ons by ensuring an equal distribution of examples with 0, 1, …, n carry-on operations.
Figure: Performance of 3-digit addition with the data sampling methods used: (i) Random: uniform sampling of operands; (ii) Balanced digits: assigning higher sampling weights to operations involving 1- and 2-digit numbers; (iii) Balanced carry: balancing the dataset to contain an equal number of carry-on operations. Experiments on addition with both operands and the output zero-padded to 3 and 4 digits, respectively.
When sampling 10,000 examples of 3-digit addition, we include all possible 100 1-digit additions, 900 2-digit samples, and 9,000 3-digit samples. Note that while the number of samples increases, the fraction of all possible k-digit additions that we sample for k=2, 3 decreases due to the inherent skew.
The split was chosen heuristically to ensure we saw a “reasonable” fraction of all possible k-digit samples for all k.
Similarly, we ensure that the number of samples with 0, 1, 2, or 3 carry-on operations are all approximately 2500.
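The carry-balancing step can be sketched as follows; the rejection-sampling implementation is an illustrative assumption, while the target of roughly 2,500 examples per carry count follows the text.

import random

def count_carries(a, b):
    """Number of carry-on operations when adding a and b digit by digit."""
    carry, carries = 0, 0
    while a > 0 or b > 0:
        s = a % 10 + b % 10 + carry
        carry = 1 if s >= 10 else 0
        carries += carry
        a //= 10
        b //= 10
    return carries

def sample_fixed_carry(n_samples=2500, target_carries=2, seed=0):
    """Rejection-sample 3-digit addition pairs with a fixed number of carries."""
    rng = random.Random(seed)
    pairs = []
    while len(pairs) < n_samples:
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        if count_carries(a, b) == target_carries:
            pairs.append((a, b))
    return pairs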
Figure <ref> reveals the importance of “balancing”. We observe improvements in accuracy across the board when using Balanced data compared to random sampling. Further, random sampling performs relatively poorly even for the simple task of 2-digit addition. We conjecture that this is because the model has not seen enough such examples. For the remaining experiments, we set the default dataset for addition to be one that has both balanced digits and balanced carry-ons.
§ LEARNING ADDITION IN SMALL MODELS
Figure: Comparison of NanoGPT model performance on the addition task, trained on plain and reverse formatted data. The conventional plain format exhibits suboptimal performance even with a larger number of addition examples, whereas a distinct phase transition is observed for the reverse format around 2,500 training samples, after which it learns addition perfectly.
We start by examining one of the most basic arithmetic tasks: addition. Initially, we concentrate on the 3-digit addition, where the two operands have at most 3 digits (999). Later, in Section <ref>, we demonstrate that our findings can be applied to larger digits. We assess whether NanoGPT can learn addition from training data of various sizes. As we will soon discover, learning addition may not be as straightforward as one might anticipate.
§.§ Training on Conventional Data
We begin by training NanoGPT on conventional addition data of the form `𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1= 𝖢_3𝖢_2𝖢_1', which we denote as the plain data format. However, as shown in Figure <ref>, this leads to fairly poor performance. We believe this is because the plain format forces the model, under the next-token prediction objective, to generate the most significant digit (MSB) of the answer first.
The following lemma clarifies the necessity to access all operand digits in order to output the MSB first:
Lemma 1. Let A and B be two n-digit numbers, and let C=A+B. Suppose an algorithm outputs the digits of C in decreasing order of significance; then the algorithm must have access to all digits of A and B starting from the first digit that it outputs.
The lemma suggests that to train the model for addition and to output the most significant digit first, it is necessary for the model to learn a global algorithm. Unlike the standard algorithm for addition which consists of computing digit-wise sums and carry-ons, approximating a global algorithm would necessitate learning a more complicated function than necessary. The increased complexity results in decreased accuracy, as observed throughout our experiments. <cit.> refer to this phenomenon as attention glitches.
§.§ Reversing the Output
This leads us to ask, “is it possible to guide the model to learn a simpler algorithm for addition?” We propose an intuitive approach to improve performance by training the model to generate the least significant digit (LSB) first, following the way humans typically perform addition. By starting with the LSB and progressing towards the most significant digit (MSB) from right to left, the model can learn a simpler algorithm that relies on just three inputs: the corresponding digits from the operands and the carry-on information (0 or 1) carried from the LSB to the MSB. This approach offers an advantage over the plain format, where generating the MSB first would necessitate the model to learn a more complex function involving all digits in the two operands.
We propose that using this reverse format (`$𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1= 𝖢_1𝖢_2𝖢_3$')
is more suitable for next-word prediction models. The rationale behind this is that when generating the sum by starting with the least significant digit (LSB), the model only needs to learn a local function of three inputs per digit – the two relevant digits of the operands and the carry-on from the previous digit. This local operation simplifies the learning function. The following lemma substantiates this idea:
Lemma 2. There exists an algorithm that computes C=A+B for two n-digit numbers A and B and outputs its digits in increasing order of significance such that, at each position i, the algorithm only requires access to the i-th digits of A and B, as well as the carry-on from the previous position.
Lemma <ref> directly follows from the standard algorithm for addition, which performs the sum and carry-on operations digit by digit. The implications of these two lemmas are evident in our experiments when comparing training NanoGPT on plain and reverse samples. As shown in Figure <ref>, the accuracy of plain addition plateaus at slightly over 85% even with 10,000 samples. In contrast, simply training the model on reversed output significantly enhances the performance. Additionally, we observe that the reverse format requires considerably fewer training data to achieve good performance, further reinforcing that the reverse format's associated function has less complexity than the plain format.
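For concreteness, the local, digit-wise procedure that Lemma <ref> refers to can be written as a short function; this is the textbook algorithm, not a claim about what the network internally computes.

def add_lsb_first(a_digits, b_digits):
    """Add two numbers given as lists of digits (least significant first).

    At position i only a_digits[i], b_digits[i], and the incoming carry are needed,
    which is the local property used in Lemma 2.
    """
    out, carry = [], 0
    n = max(len(a_digits), len(b_digits))
    for i in range(n):
        a_i = a_digits[i] if i < len(a_digits) else 0
        b_i = b_digits[i] if i < len(b_digits) else 0
        s = a_i + b_i + carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return out  # digits of the sum, least significant first

# 128 + 367, with digits given LSB-first
print(add_lsb_first([8, 2, 1], [7, 6, 3]))  # [5, 9, 4] -> 495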
What is particularly remarkable is the occurrence of a notable phase transition between 1000 and 4000 samples for reverse. At this point, the model rapidly transitions from being unable to perform addition to being capable of perfectly adding two 3-digit numbers. This leads to an important question:
Why does addition rapidly emerge as the number of training examples increases?
§ CONNECTION TO LOW-RANK MATRIX COMPLETION
Although the rapid phase transition observed in the previous section may initially seem surprising, closer examination reveals a fascinating equivalence: learning an addition map on n digits from random samples can be considered as completing a rank-2 matrix. This equivalence offers a compelling explanation for the phenomenon we observed.
In this section, we delve into the intricate details of this connection and elucidate how learning the addition map can be formulated as low-rank matrix completion (LRMC). Establishing this connection provides meaningful insights into the observed phenomenon. Further, our investigation goes beyond that and highlights the enhanced capabilities of Transformer models. We demonstrate that Transformers possess expanded capabilities that surpass what traditional LRMC algorithms can do.
§.§ Addition Tables are Rank-2 Matrices
Learning addition from samples can be formulated as a rank-2 Matrix Completion (MC) problem involving an n × n matrix M, where the (i,j)-th entry M_{i,j} represents the output of the addition `i+j'. Such an M can be decomposed into the sum of two rank-one matrices, u·1^T + 1·u^T, where u is a column vector with entries {1,…,n} and 1 is a column vector of n ones. Thus, learning addition from samples can be viewed as solving the MC problem in which only the entries corresponding to those samples are revealed. When the underlying matrix is noiseless and of rank 2, <cit.> demonstrates that a simple iterative algorithm (Algorithm <ref> in Appendix <ref>) is optimal.
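This rank-2 structure is easy to check numerically; the following sketch constructs the addition table and verifies its rank.

import numpy as np

n = 100                                  # e.g. 2-digit addition
u = np.arange(1, n + 1).reshape(-1, 1)   # column vector (1, ..., n)
ones = np.ones((n, 1))

M = u @ ones.T + ones @ u.T              # M[i, j] = (i + 1) + (j + 1), the addition table
assert np.linalg.matrix_rank(M) == 2     # the full addition table has rank exactly 2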
As depicted in Figure <ref>, a sharp phase transition occurs once the number of revealed entries crosses a threshold. This aligns with Theorem 2 from <cit.>, which states that the exact convex relaxation of the MC problem has a unique solution provided sufficiently many entries are observed.
The sharp phase transition observed in LRMC bears a resemblance to what we notice in NanoGPT. To further investigate this phenomenon, we focus on 2-digit addition (n=100) as shown in Figure <ref>. We evaluate the performance of learning addition through NanoGPT in comparison to LRMC by constructing a training dataset consisting of the matrix's revealed entries in either plain or reverse format. It is important to note that the training dataset is no longer “balanced”, as the revealed entries are randomly and uniformly sampled for the LRMC experiments. The comparison between NanoGPT and LRMC results is presented in Figure <ref>. Remarkably, both NanoGPT and LRMC exhibit a similar phase transition at approximately 1500 samples, where they both start to learn addition almost perfectly. This observation regarding LRMC offers an explanation for the rapid emergence of addition in NanoGPT.
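To see informally why relatively few revealed entries suffice for a matrix of the form M_{ij} = u_i + v_j, known entries can simply be propagated along rows and columns. The sketch below illustrates this idea under that structural assumption; it is not the iterative algorithm of <cit.>.

import numpy as np

def complete_additive(revealed, n):
    """Complete M[i, j] = u[i] + v[j] from a dict {(i, j): value} of revealed entries."""
    u = [None] * n
    v = [None] * n
    u[next(iter(revealed))[0]] = 0.0   # fix the gauge: (u + c, v - c) gives the same M
    changed = True
    while changed:
        changed = False
        for (i, j), val in revealed.items():
            if u[i] is not None and v[j] is None:
                v[j] = val - u[i]; changed = True
            elif v[j] is not None and u[i] is None:
                u[i] = val - v[j]; changed = True
    M = np.full((n, n), np.nan)
    for i in range(n):
        for j in range(n):
            if u[i] is not None and v[j] is not None:
                M[i, j] = u[i] + v[j]
    return M  # entries stay NaN if their row or column was never reached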
§.§ NanoGPT Generalizes better than Matrix Completion solutions
We noted above that there are some striking similarities between the addition map learned by NanoGPT and LRMC. However, we now delve deeper and find that this map exhibits capabilities beyond LRMC. A well-known limitation of LRMC is its inability to generalize when entire rows or columns are empty. Therefore, we intentionally hide certain numbers in the training dataset or specific digit positions, and examine whether our model can still learn addition.
Generalizing to unseen numbers. In order to further investigate the connection with LRMC, we exclude an increasing fraction of the numbers from the training data and evaluate the model's ability to learn addition. As shown in Table <ref>, the answer to this question is a resounding Yes! The model achieves almost perfect accuracy even when excluding half of all possible 3-digit numbers. More precisely, we randomly choose 100/200/500 numbers and exclude them from the training data. We then evaluate the trained models on two metrics: (i) Overall accuracy, which measures the accuracy over a random set of 10,000 examples, and (ii) Exclusion accuracy, which measures the accuracy only over the excluded set.
Remarkably, excluding numbers from the training data sometimes leads to improved performance. We conjecture that this may be due to the effect of regularization, similar to random masking or cropping images in vision tasks. Note that these results indicate that the model is not simply performing LRMC. In the LRMC setting, even a single missing number corresponds to an empty row or column, which cannot be recovered. Hence, the ability of the NanoGPT model to generalize to missing numbers signifies its distinct capabilities beyond LRMC.
Generalizing to unseen digits. Building upon the model's robustness to excluded numbers, we further investigate its ability to handle excluded digits.
Intuitively, this should be even more challenging since excluding a digit means the model cannot learn directly how to operate in that position. Instead, it would have to generalize and infer that digits act similarly across all positions. We construct datasets with the number 5 excluded in 1st (LSB), 2nd, and 3rd (MSB) positions, and train separate models on each of these datasets. We compare the resulting models by evaluating overall accuracy on a test set of 10,000 randomly sampled numbers, as well as their accuracy specifically on samples with 5 in each position which we call exclusion accuracy.
The results presented in Table <ref> indicate that the model is not as robust to excluding digits as it is to excluding numbers. However, it still achieves more than 66% accuracy on every test and maintains an overall accuracy above 85%. Moreover, it appears that excluding a digit in the least significant position yields the worst performance. This can be attributed to the fact that learning addition in this position is transferable to other positions, since it is unaffected by carry-on operations. Failing to learn addition in this position, however, has a detrimental impact on the other positions as well.
The distinct learning mechanism of NanoGPT. The phase transition of LRMC offers significant insights into NanoGPT's learning process. Nevertheless, further experiments clearly demonstrate that NanoGPT's mechanism for learning addition is fundamentally different from LRMC. It can successfully learn addition even when numbers or digits are intentionally excluded from the training data, thereby exhibiting generalization capabilities that far exceed that of typical LRMC algorithms.
§ THE POWER OF CHAIN-OF-THOUGHT: INCORPORATING INTERMEDIATE STEPS IN TRAINING DATA
So far, we observed that utilizing the straightforward method of reversing the output can result in remarkable performance, exceeding that of LRMC in learning addition. Nonetheless, it may be possible to expedite the emergence of addition by further enhancing the data format. As addition is a multi-step process, we further explore the idea of incorporating additional information about each step. We adopt a Chain-of-Thought (CoT) style approach, where we guide the model to learn addition step-by-step. In the subsequent sections, we assess the effect of incorporating these intermediate steps on the performance of small models. We demonstrate that this results in a substantial improvement in sample complexity of learning addition and carefully analyze how the level of detail offered for each step impacts the model's performance.
§.§ Training on Chain-of-Thought Data
In the following experiments, we evaluate if training on scratchpad data further improves the learning of addition. As described briefly in Section <ref>, scratchpad data incorporates step-by-step instructions in varying amounts of detail into the samples. This approach aims to help the model learn addition as a compositional function. We explore two levels of detail in the provided instruction steps: Simplified Scratchpad format offers minimal information – the sum and carry information for each digit/step. Detailed Scratchpad provides comprehensive information on how to execute each step in the addition process in natural language. By comparing the performance of the model trained with these different levels of detail, we can analyze its impact on the model's ability to learn addition effectively.
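As an illustration, the simplified scratchpad string for an addition example can be generated as follows; spacing and punctuation approximate the format shown in Figure <ref>.

def simplified_scratchpad(a, b):
    lines = [f"Input: {a}+{b}", "Target:"]
    a_digits, b_digits = str(a)[::-1], str(b)[::-1]  # least significant digit first
    carry = 0
    for i in range(max(len(a_digits), len(b_digits))):
        a_i = int(a_digits[i]) if i < len(a_digits) else 0
        b_i = int(b_digits[i]) if i < len(b_digits) else 0
        s = a_i + b_i + carry
        carry = s // 10
        lines.append(f"A->{s % 10} , C->{carry}")
    lines.append(str(a + b))
    return "\n".join(lines)

print(simplified_scratchpad(128, 367))  # reproduces the A/C lines and final answer 495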
Figure: Comparison of sample efficiency: evaluating performance on training datasets with different numbers of addition samples. While all modified methods (reverse, simplified scratchpad, and detailed scratchpad) achieve 100% test accuracy, they exhibit varying requirements in terms of the number of addition examples in the training dataset needed to reach optimal performance.
The results presented in Figure <ref> demonstrate the effectiveness of different data formats for training addition.
The model trained on Simplified Scratchpad data achieves 100% accuracy with only 2000 samples, whereas the Reverse format requires more than double the number of samples. Furthermore, the Detailed Scratchpad format, which provides even more detailed information, achieves perfect addition with just 1000 samples.
This indicates that incorporating more information enables the model to learn addition more efficiently, requiring fewer examples.
We conjecture that this is because breaking down the required compositional function to be learned into individual components allows the model to learn a higher-dimensional but easier-to-learn function map.
We note that while CoT-style training enhances sample efficiency, it may not necessarily be the most “token-efficient” approach. We delve into this aspect in more detail in Section <ref>.
In summary, incorporating scratchpad data and decomposing the addition task into steps offer a promising strategy to improve the performance and efficiency of small models in learning addition from scratch.
§.§ The Importance of Intermediate Step Design: Subtraction
In this section, we underscore the significance of meticulously designing the intermediate steps in a Chain-of-Thought manner. Specifically, we focus on the subtraction task and conduct experiments to compare two different versions of the detailed scratchpad for this operation (see examples in Figure <ref>). These trials shed light on the importance of decomposing the subtraction task into simpler intermediate steps. Unlike addition, subtraction behaves differently depending on whether the first operand (a) is greater than the second operand (b) or vice versa.
Detailed scratchpad formatting for different arithmetic tasks
Examples of two variations of detailed scratchpad formatting for subtraction, considering the scenario where the first operand a is greater than the second operand b, and vice versa. In Version 1, a result processing step is included in the final stage to handle negative outputs. In Version 2, the operands are compared at the beginning, and if b is larger, their order is reversed.
Prompt (Case 1: a-b ≥ 0): Input:
367-128
Target:
Version 1.
...
<scratch>
[3,6,7] has 3 digits.
[1,2,8] has 3 digits.
[3,6,7] - [1,2,8] , A=[] , C=0 , 7-8-0+10=9 , A->9 , C->-1
[3,6] - [1,2] , A=[9] , C=-1 , 6-2-1=3 , A->3 , C->0
[3] - [1] , A=[3,9] , C=0 , 3-1-0=2 , A->2 , C->0
[] - [] , A=[2,3,9]
200+39=239 , END # result processing
</scratch>
2 3 9
Version 2.
...
<scratch>
[3,6,7] has 3 digits.
[1,2,8] has 3 digits.
367>=128 # comparison of two operands
[3,6,7] - [1,2,8] , A=[] , C=0 , 7-8-0+10=9 , A->9 , C->-1
[3,6] - [1,2] , A=[9] , C=-1 , 6-2-1=3 , A->3 , C->0
[3] - [1] , A=[3,9] , C=0 , 3-1-0=2 , A->2 , C->0
[] - [] , A=[2,3,9] , END
</scratch>
2 3 9
Prompt (Case 2: a-b < 0): Input:
128-367
Target:
Version 1.
...
<scratch>
[1,2,8] has 3 digits.
[3,6,7] has 3 digits.
[1,2,8] - [3,6,7] , A=[] , C=0 , 8-7-0=1 , A->1 , C->0
[1,2] - [3,6] , A=[1] , C=0 , 2-6-0+10=6 , A->6 , C->-1
[1] - [3] , A=[6,1] , C=-1 , 1-3-1=-3 , A->-3 , C->-1
[] - [] , A=[-3,6,1]
-300+61=-239 , END # result processing
</scratch>
-2 3 9
Version 2.
...
<scratch>
[1,2,8] has 3 digits.
[3,6,7] has 3 digits.
128<367 : 128-367=-(367-128) # comparison
[3,6,7] - [1,2,8] , A=[] , C=0 , 7-8-0+10=9 , A->9 , C->-1
[3,6] - [1,2] , A=[9] , C=-1 , 6-2-1=3 , A->3 , C->0
[3] - [1] , A=[3,9] , C=0 , 3-1-0=2 , A->2 , C->0
[] - [] , A=[2,3,9] , END
</scratch>
-2 3 9
Figure: Two versions of detailed scratchpad formatting for subtraction.
The first strategy (Version 1 in Figure <ref>) involves performing digit-wise subtraction starting from the least significant bit (LSB) and considering borrows when necessary. However, the digit-wise pass alone produces incorrect results when the first operand is smaller than the second operand: the digit in the most significant position comes out negative. In such cases, the result-processing step multiplies that (negative) digit by 10 to the power of (number of digits in the output − 1) and adds the number formed by the remaining digits (e.g., the digits −3, 6, 1 become −300 + 61 = −239). An example illustrating this approach is shown in Version 1, Case 2.
Alternatively, we can adopt a more familiar strategy. If the first operand is smaller than the second, we swap the operands and compute the negation of the subtraction of the swapped operands: a - b = -(b - a) (referred to as Version 2).
The results in Figure <ref> indicate that Version 2, which involves comparing two operands, performs considerably worse than Version 1. In Version 1, each intermediate step only requires the simpler 1-digit subtraction, along with addition in the final result processing step. Upon analyzing the failure cases of Version 2, we observe that the majority of errors stem from incorrectly identifying which of the two operands is larger, while the intermediate steps are handled correctly. This finding underscores the significance of breaking down arithmetic operations into simpler intermediate steps. Unless otherwise specified, we use Version 1 in all detailed scratchpad experiments.
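For concreteness, the Version 1 procedure (digit-wise subtraction with borrows, followed by result processing) can be sketched as below; the code mirrors the worked examples in the figure rather than any model-internal computation.

def subtract_version1(a, b, n_digits=3):
    """Digit-wise subtraction LSB->MSB with borrows, then a final result-processing step."""
    a_digits = [int(d) for d in str(a).zfill(n_digits)][::-1]
    b_digits = [int(d) for d in str(b).zfill(n_digits)][::-1]
    out, carry = [], 0
    for i in range(n_digits):
        d = a_digits[i] - b_digits[i] + carry
        if i < n_digits - 1 and d < 0:
            d += 10
            carry = -1
        else:
            carry = 0
        out.append(d)
    # result processing: the most significant entry may be negative
    msb, rest = out[-1], out[:-1]
    rest_value = sum(d * 10 ** i for i, d in enumerate(rest))
    return msb * 10 ** (n_digits - 1) + rest_value

print(subtract_version1(367, 128))  # 239
print(subtract_version1(128, 367))  # -239, via -300 + 61 as in the scratchpad example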
§.§ The Effect of Noisy Inputs on Accuracy
Noisy intermediate steps in the scratchpad data.
We further investigate the significance of providing accurate intermediate steps in the scratchpad during the training process. While this was inspired by the findings of <cit.>, it is inherently different. <cit.> show that using random labels in ICL demonstrations caused minimal degradation when compared to the gold labels. However, those models were trained on gold labels and then evaluated on multiple downstream tasks. In our setting, the model is trained and evaluated on a single arithmetic task. Further, the final result(or label) is left untouched as the correct answer to the arithmetic operation. We only replace the intermediate steps. The goal of this study is to verify whether the model actually learns to reason using the given intermediate steps or merely uses the scratchpad to improve its expressivity. We compare the performance of training with our simplified scratchpad formatting, which includes accurate A (digit sum) and C (carry) information, with formatting that includes random A, random C, or random A and C for each intermediate step, as depicted in Figure <ref>.
The results in Figure <ref>, demonstrate that the inclusion of noisy labels can impede sample efficiency. However, with enough samples, the model ultimately achieves full accuracy. This suggests that while the model is capable of leveraging the information contained in the intermediate steps, it can also gradually learn how to perform addition while disregarding the presence of noisy intermediate steps.
Model robustness to noise in the auto-regressive output.
In this analysis, we explore the robustness of models trained on plain or reverse formatted data (without noise) when exposed to noise during an auto-regressive generation process. In particular, we aim to unravel how much the learned mapping of the i-th output relies on the operands and preceding tokens in the addition result, given that transformer models generate tokens sequentially in an autoregressive manner, making them prone to error propagation.
For this experiment, we focus on 3-digit addition. We train models on either plain or reverse format data and evaluate the accuracy of next-token predictions when the output sequence contains noise. Specifically, in the plain format setting, we expect a well-performing model to generate the correct output tokens 𝖮_3, 𝖮_2, 𝖮_1 sequentially, where 𝖮_3=𝖢_3, 𝖮_2=𝖢_2, 𝖮_1=𝖢_1, and 𝖢_3𝖢_2𝖢_1 represents the correct answer. We consider two types of perturbation: (i) random perturbation, where we modify the first two output tokens 𝖮_3𝖮_2 to random numbers different from 𝖢_3𝖢_2, and (ii) precise perturbation, where we perturb only the second output token 𝖮_2 by 1. The second case is particularly relevant since a common error case is where the model misses a digit by 1.
We provide the model with an expression of the form “𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1= 𝖮_3𝖮_2”, where 𝖮_3𝖮_2 can be either (i) a random incorrect number, 𝖮_3𝖮_2 ≠ 𝖢_3𝖢_2, or (ii) such that 𝖮_2=𝖢_2 ± 1 (mod 10), and observe the next token generated by the model.
A corresponding process is deployed for the reverse format, introducing a noisy sequence to models trained on reverse format data.
To evaluate the performance, we define two accuracy criteria for 𝖮_1: exact accuracy, reckoning 𝖮_1 as accurate only when 𝖮_1=𝖢_1, and relaxed accuracy, considering 𝖮_1 correct if it deviates from the correct digit 𝖢_1 by at most 1. In other words, 𝖢_1=𝖮_1, 𝖢_1 = 𝖮_1+1 (mod 10), or 𝖢_1 = 𝖮_1 - 1 (mod 10).
The results presented in Table <ref> reveal intriguing findings. We observe that the reverse format consistently outputs a result that deviates by no more than 1 from the true answer, regardless of whether the preceding outputs 𝖮_3𝖮_2 are subjected to random or precise perturbation. This consistency can be explained by Lemma <ref>, indicating that the reverse format only requires learning a straightforward function of digit-wise addition for each corresponding position, along with the carry-on (0 or 1). Therefore, even with noise in the preceding tokens, the model accurately performs digit-wise addition, albeit with occasional carry-on prediction errors.
With an exact accuracy of 81.26% even in the presence of random perturbation, the reverse format demonstrates the model's ability to rely less on the preceding output tokens, indicating a robust learned output mapping.
On the contrary, models using the plain format have to decipher a more intricate function drawing from all digits within the sequence, as described by Lemma <ref>. Given that in addition, carry operations transition from right to left (least to most significant digit), the introduction of precise perturbation on preceding output tokens, which possess higher significance, has a minor impact on the output (which has less significance). As a result, models trained using the plain format attain an exact accuracy rate of 99.85% and a relaxed accuracy of 100% for cases involving precise perturbation. Interestingly, under purely random perturbation, the plain format struggles, leading to a reduced relaxed accuracy of 61.55% and exact accuracy of 49.88%. This suggests that the output mapping learned by the plain format is not merely a function of the two operands but rather enmeshed in complex dependencies on preceding output tokens.
§ EXTENDING TO LONGER DIGIT ADDITION
In this section, we extend our experiments beyond 3-digit addition and explore longer-digit settings, ranging up to 10 digits. Our aim is to investigate whether our previous findings regarding the sample efficiency of reverse and scratchpad formats hold true for larger numbers of digits.
We begin by observing that the phase transition behavior observed in previous sections also applies to longer-digit addition. Furthermore, we discover that the advantages of using reverse and scratchpad formats become even more pronounced as the number of digits increases.
Next, we examine the number of training samples required to learn k+1 digit addition when fine-tuning a pretrained model trained on k digit addition. We find that while the number of samples needed to further learn k+1 digit addition remains relatively consistent for reverse and scratchpad formats, the plain format requires an increasing number of samples.
Experimental setup and data generation.
To explore the performance of the model in higher-digit addition scenarios, we extend the experimental setup described in Section <ref>. We adopt a balanced sampling approach for training data with D digits, ensuring an equal number d of samples for each combination of operand digit lengths, as follows:
We begin by sampling all 100 1-digit additions. For the remaining digit lengths, ranging from 2 to D, we generate addition examples of the form “A + B = C”. The two operands, A and B, are randomly sampled d=⌊ (N-100) / (D(D+1)/2 -1 ) ⌋ times for each pair (k_1, k_2), where N is the total number of training examples. Operand A is sampled from [10^{k_1-1}, 10^{k_1} -1] and operand B is sampled from [10^{k_2-1}, 10^{k_2} -1], for all 1 ≤ k_1 ≤ k_2 ≤ D, excluding the case where k_1=k_2=1. After sampling the two operands, we randomly interchange them to cover cases where A has fewer digits than B and vice versa.
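A sketch of this sampling scheme is given below; duplicate handling and rounding details are illustrative choices.

import random

def sample_long_addition(N=10000, D=7, seed=0):
    rng = random.Random(seed)
    pairs = [(a, b) for a in range(10) for b in range(10)]   # all 100 1-digit additions
    d = (N - 100) // (D * (D + 1) // 2 - 1)                  # samples per (k1, k2) pair
    for k1 in range(1, D + 1):
        for k2 in range(k1, D + 1):
            if k1 == k2 == 1:
                continue
            for _ in range(d):
                a = rng.randint(10 ** (k1 - 1), 10 ** k1 - 1)
                b = rng.randint(10 ** (k2 - 1), 10 ** k2 - 1)
                if rng.random() < 0.5:                        # randomly swap operand order
                    a, b = b, a
                pairs.append((a, b))
    return pairs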
§.§ Training from Random Initialization
We repeat the experiment from Section <ref> on nanoGPT with longer digits. The results shown in Figure <ref> demonstrate a similar behavior to the findings observed in Figure <ref> for 3-digit addition. This indicates that our previous observations generalize to longer sequence lengths.
Notably, the performance gap between the modified formats (reverse, simplified scratchpad, and detailed scratchpad) and the plain format becomes even more significant in the context of higher digits. While the plain format requires an increasing number of training examples to learn higher-digit additions, the reverse or scratchpad formats exhibit a more consistent requirement in terms of the number of training examples.
This prompts us to explore the differences between each format in a fine-tuning setting. Specifically, we ask whether a model trained on reverse or scratchpad-formatted k digit addition data would find it easier to learn k+1 digit addition compared to a model trained with plain format addition.
§.§ Fine-Tuning from Pretrained Models
In this section, we investigate the generalization ability of transformer models, specifically focusing on their capacity to learn higher-digit additions based on their knowledge of lower-digit additions. Additionally, we explore how the choice of data format affects the number of samples required to learn higher-digit additions.
Forgetting of k-digit addition when trained on k+1-digit addition.
We begin by fine-tuning a model that was initially trained on 3-digit addition. We fine-tune this model using 4-digit addition training data, with each data format being used separately. To mitigate the “catastrophic forgetting” phenomenon, we experiment with different learning rates, gradually reducing the magnitude. We continue this process until the learning rate becomes too small for the model to effectively learn 4-digit addition.
The results depicted in Figure <ref> reveal interesting insights about the fine-tuning process. When training the model using the plain format with only 4-digit addition data, there is an immediate drop in accuracy for 1 to 3 digit additions. This indicates that the model experiences significant forgetting of previously learned additions.
In contrast, the reverse and scratchpad methods exhibit a more favorable behavior. The model trained with these methods does not completely forget 1 or 2 digit additions while learning 4-digit addition. Remarkably, the detailed scratchpad method stands out by enabling the model to learn 4-digit addition without compromising its performance on 1 to 3 digit additions. Although there is a slight decrease in performance for 3-digit additions initially, the model quickly recovers and picks up the knowledge again as it trains on 4-digit additions.
This result can be explained by the hypothesis that learning a k+1 digit addition from a k-digit model is an incremental process for the detailed scratchpad method. The model already has a solid foundation in understanding the intermediate steps involved in addition, so it only needs to adapt to longer sequences. In contrast, for the plain format, learning higher-digit additions requires the model to establish new mappings to generate correct outputs, which is a more challenging task.
Sample efficiency of fine-tuning k-digit models with k+1-digit examples.
Building upon our previous findings that fine-tuning a model solely on k+1-digit addition leads to a loss in performance for k-digit addition, we modify our approach to prevent the loss of performance in the k-digit addition task. Instead of training solely on k+1-digit examples, we construct a dataset that includes all addition tasks from 1-digit to k+1-digit, with the method described in the previous section. By doing so, we aim to maintain the performance of 1 to k-digit addition while enabling the model to learn k+1-digit addition during fine-tuning.
In this experiment, we investigate the number of k+1-digit training examples required for the model to effectively learn k+1-digit addition when fine-tuning a pretrained model on k-digit addition. It is important to note that this setting differs from the previous section (Section <ref>), where we focused on training models from random initialization. Here, we specifically focus on the fine-tuning process. We fine-tune individual models pretrained on each data format (using k-digit addition) and further train them using the same data format on a new dataset that includes all addition examples from 1-digit to k+1-digit.
The results in Figure <ref> demonstrate the number of k+1-digit addition samples required for a pretrained model capable of performing k-digit addition to learn the addition of k+1 digits. The findings reveal that modified formats (reverse, scratchpad) require a relatively small number of samples (between 1000 and 5000) to learn the addition of an extra digit. In contrast, the plain format necessitates a significantly larger number of training examples, with the requirement increasing as the number of digits grows.
This observation aligns with our previously established Lemma <ref> and Lemma <ref>, which suggest that learning higher-digit addition in the reverse format involves processing the i-th digit of the operands and carrying from the previous position. This operation remains consistent regardless of the number of digits being added. As a result, the model primarily needs to learn how to handle longer digits to perform addition effectively.
In contrast, the plain addition format requires the model to learn a more complex function that incorporates all digits from both operands. As the number of digits increases, the complexity of this function grows as well. This highlights the greater difficulty faced by the plain format in accommodating additions with a larger number of digits.
§.§ Impact of Formats on Fine-Tuning
We delve deeper into the impact of different formats on the fine-tuning process. Specifically, we investigate whether training a model in one format helps in learning addition in another format, and vice versa. To conduct this analysis, we begin with a model trained on each data format using 3-digit addition examples. We then individually fine-tune these pretrained models using different data formats, on 4-digit addition examples.
Figure: Performance of fine-tuning a 3-digit model trained on different data formats (plain, reverse, simple scratchpad, detailed scratchpad, and random initialization), each fine-tuned individually with different data formats of 4-digit addition. The results demonstrate that fine-tuning yields the best performance when the pretraining format and the fine-tuning format are consistent. Notably, fine-tuning a detailed scratchpad format model shows suboptimal performance. We hypothesize that this is due to the need for the model to “unlearn” the rigid and verbose format and adapt to the new format.
The results depicted in Figure <ref> highlight some interesting findings. Firstly, we observe that a model trained with the same format as the fine-tuning format exhibits faster learning in terms of the number of iterations. For instance, training a model with the plain format outperforms training a model pretrained with scratchpad formats. This suggests that the model benefits from the consistency and familiarity provided by the same format throughout the training process.
Additionally, we notice that fine-tuning a detailed scratchpad pretrained model on other formats proves to be more challenging. This observation can be attributed to the need for the model to “unlearn” the intricacies of the verbose detailed scratchpad format and adapt to the new format. For example, the plain format does not involve the use of alphabet characters in the data, so a model pretrained with the plain format would have a low probability of generating alphabetic outputs. In contrast, a detailed scratchpad pretrained model would have encountered various alphabets and may have a tendency to output them. Therefore, adjusting to a new format requires additional effort for the model to “unlearn” the patterns specific to the previous format and effectively learn the new format it is being trained on.
These findings highlight the importance of considering format consistency during the fine-tuning process, as it can impact the efficiency and effectiveness of the learning process. We will delve further into this topic in the upcoming section <ref>, where we fine-tune pretrained GPT-3 models. Notably, we observe that fine-tuning with reverse or simplified scratchpad formats actually yields worse results compared to fine-tuning with plain formats. For a detailed exploration of these observations, please refer to the forthcoming section.
§ TEACHING ARITHMETIC OPERATIONS BEYOND ADDITION
While this study has a primary focus on the addition operation and aims to comprehend the significance of data sampling and formatting, its findings are applicable beyond the realm of addition alone. In this section, we expand our examination to include other arithmetic operations, thus demonstrating the broader applicability of our insights.
We consider a mix of arithmetic tasks, including binary operations like subtraction and multiplication, and unary operations such as sine and square root. Each operation entails its unique challenges and intricacies. For instance, subtraction introduces the concept of negative numbers, multiplication can generate significantly longer outputs, and sine and square root functions entail computations involving floating-point numbers, which are considered up to four digits of precision in our work.
We acknowledge that while our examination is detailed, it does not encompass all the fundamental arithmetic operations or the entire scope of floating-point arithmetic. Specifically, our focus is primarily on integer arithmetic for binary operations, considering a limited length of digits. Additionally, for unary operations, we confine ourselves to a restricted number of digits below the decimal point.
In Section <ref>, we delve into each arithmetic operation individually, exploring the impact of data formatting and determining the relevancy of our insights across disparate tasks. Further, in Section <ref>, we perform an analysis of joint training across all five tasks, investigating the potential performance implications for each individual task.
§.§ Extended Arithmetic Operations
In order to extend our analysis to arithmetic operations beyond addition, we consider the following tasks:
Subtraction (-). We consider subtraction of positive numbers up to 3 digits, written as 𝖠_3𝖠_2𝖠_1 - 𝖡_3𝖡_2𝖡_1 = 𝖢_3𝖢_2𝖢_1 in (i) plain formatting, and $𝖠_3𝖠_2𝖠_1 - 𝖡_3𝖡_2𝖡_1 = 𝖢_1𝖢_2𝖢_3$ in (ii) reverse formatting.
As with addition, the scratchpad-based methods (iii, iv) present the intermediate steps of digit-wise subtraction and the handling of carry-ons. These steps proceed from the least significant bit (LSB) to the most significant bit (MSB). If the digit in the most significant position of the intermediate result is negative, we multiply it by 10 to the power of (number of digits in the output − 1) and add the number formed by the remaining digits. In Section <ref>, we present an alternative version of the detailed scratchpad formatting for subtraction.
Multiplication (×). We consider multiplication of positive numbers up to 2 digits. (i) Plain formatting examples are formatted as 𝖠_2𝖠_1 * 𝖡_2𝖡_1 = 𝖢_4𝖢_3𝖢_2𝖢_1, while (ii) reverse formatting is formatted as $𝖠_2𝖠_1 * 𝖡_2𝖡_1 = 𝖢_1𝖢_2𝖢_3𝖢_4$. The (iv) detailed scratchpad method simplifies each intermediate step by conducting a series of multiplications between the first operand and each digit of the second operand, starting from the least significant bit (LSB) and moving toward the most significant bit (MSB). For each step, we scale the result by the power of 10 corresponding to the position of that digit.
Sine (sin). We consider decimal numbers within the range [-π/2, π/2], truncated to 4-digit precision. (i) Plain formatting examples are formatted as sin(𝖠_0.𝖠_1𝖠_2𝖠_3𝖠_4)=𝖡_0.𝖡_1𝖡_2𝖡_3𝖡_4. For (iv) detailed scratchpad method, we include the Taylor series expansion steps for sine, which is represented as sin(x) = x - 1/3!x^3 + 1/5!x^5 - 1/7!x^7 + ⋯. These intermediate steps involve exponentiation, which may not be any easier to compute than the sine operation itself.
Square Root (√()). We consider decimal numbers within [1, 10), truncated to 4 digits of precision, written as sqrt(𝖠_0.𝖠_1𝖠_2𝖠_3𝖠_4)=𝖡_0.𝖡_1𝖡_2𝖡_3𝖡_4 for (i) plain formatting. For the (iv) detailed scratchpad method, we enumerate each step of Newton's method to compute the square root function. The iterative formula is given by x_n = 1/2 (x_{n-1} + x/x_{n-1}), where x_0 is initialized as the floor of the square root value of the operand x. These intermediate steps involve a division operation, which can be as complex as the square root operation itself.
For the evaluation of sine and square root, we classify the result ŷ_i as correct if the absolute difference between ŷ_i and the ground truth value y_i is less than or equal to a predefined threshold ϵ ≥ 0.
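For reference, the iterative approximations used in these scratchpads can be reproduced with a few lines of code; the per-step truncation to 4 decimal digits is our reading of the examples in Figure <ref>.

import math

def trunc4(x):
    return math.trunc(x * 10**4) / 10**4

def sine_taylor(x, terms=5):
    """Partial sums of sin(x) = x - x^3/3! + x^5/5! - ..., truncated after each step."""
    approx = 0.0
    for k in range(terms):
        approx += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        approx = trunc4(approx)
    return approx

def sqrt_newton(x, iters=4):
    """Newton's method x_n = (x_{n-1} + x / x_{n-1}) / 2, starting from floor(sqrt(x))."""
    x_n = float(math.floor(math.sqrt(x)))
    for _ in range(iters):
        x_n = trunc4(0.5 * (x_n + x / x_n))
    return x_n

print(sine_taylor(1.5707))   # 0.9997, matching the scratchpad example
print(sqrt_newton(2.7175))   # 1.6484, matching the scratchpad example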
For each arithmetic task, we explore both the plain format and the detailed scratchpad format. The detailed scratchpad formatting for each task is illustrated in Figure <ref> and Appendix <ref>. For subtraction, the process involves breaking down the operation into intermediate steps of digit-wise subtraction, including carry-ons when necessary. Unlike addition, subtraction requires an additional step to handle cases where the first operand is smaller than the second. Further details on the detailed scratchpad for subtraction can be found in Section <ref>.
For multiplication, each intermediate step carries out a 2-digit × 1-digit multiplication between the first operand and each separate digit of the second operand.
For sine and square root, we utilize a sequence of iterative approximations instead of algorithmic explanations. Specifically, Taylor's series expansion steps for sine and Newton's method steps for square root are used.
It is important to note that while addition, subtraction, and multiplication are broken down into simpler operations at each step, CoT for sine and square root functions requires intermediate steps involving operations like exponentiation or division, which might not be inherently simpler.
Detailed scratchpad formatting for different arithmetic tasks
Examples of detailed scratchpad formatting for different arithmetic tasks:
(1) Subtraction - includes borrows for intermediate steps, (2) Multiplication - decomposes the second operand for 2-digit × 1-digit multiplication at each step, (3) Sine - utilizes Taylor series expansion, and (4) Square root - employs Newton's method.
Subtraction
Input:
128-367
Target:
<scratch>
[1,2,8] has 3 digits.
[3,6,7] has 3 digits.
[1,2,8] - [3,6,7] , A=[] , C=0 , 8-7-0=1 , A->1 , C->0
[1,2] - [3,6] , A=[1] , C=0 , 2-6-0+10=6 , A->6 , C->-1
[1] - [3] , A=[6,1] , C=-1 , 1-3-1=-3 , A->-3 , C->-1
[] - [] , A=[-3,6,1]
-300+61=-239 , END
</scratch>
-2 3 9
Multiplication
Input:
12*36
Target:
<scratch>
[1,2] has 2 digits.
[3,6] has 2 digits.
[1,2] * 6 , A=[7,2] , k=1 , B=[7,2] , C=0+72=72
[1,2] * 3 , A=[3,6] , k=10 , B=[3,6,0] , C=72+360=432 , END
</scratch>
4 3 2
Sine
Input:
sin(1.5707)
Target:
<scratch>
x_0=1.5707
x_1: x_0 - 1/3! * (x^3) , x_1=0.9247
x_2: x_1 + 1/5! * (x^5) , x_2=1.0043
x_3: x_2 - 1/7! * (x^7) , x_3=0.9996
x_4: x_3 + 1/9! * (x^9) , x_4=0.9997 , END
</scratch>
0.9997
Sqrt
Input:
sqrt(2.7175)
Target:
<scratch>
x_0=1
x_1: 1/2*(1+2.7175/1)=1.8587, x_1=1.8587
x_2: 1/2*(1.8587+2.7175/1.8587)=1.6603, x_2=1.6603
x_3: 1/2*(1.6603+2.7175/1.6603)=1.6485, x_3=1.6485
x_4: 1/2*(1.6485+2.7175/1.6485)=1.6484, x_4=1.6484 , END
</scratch>
1.6484
Figure: Examples of the detailed scratchpad format for different arithmetic tasks such as subtraction, sine, multiplication, and square root.
The results depicted in Figure <ref> indicate that, similar to the findings for addition, the detailed scratchpad format significantly improves performance over the plain or reverse formats and yields efficient results even with few samples for the subtraction and multiplication tasks. Interestingly, we find that the reverse format is not particularly effective for multiplication.
On the other hand, the detailed scratchpad format exhibits reduced efficiency for sin and √() compared to other operations (+,-,×). This discrepancy can be traced back to the complexity of the intermediate steps involved in the detailed scratchpad. While addition, subtraction, and multiplication are decomposed into simpler functions, sine and square root operations involve more intricate operations. For a broader analysis of the error profile, see Appendix <ref>.
§.§ Jointly Training on All Five Arithmetic Tasks
So far, we only considered the problem of learning different arithmetic operations individually. In this section, we study the effect of jointly training on all five arithmetic tasks - addition, subtraction, multiplication, sine, and square root. We construct a single train dataset incorporating all tasks, 𝒟_train={𝒟_train^+, 𝒟_train^-, 𝒟_train^×, 𝒟_train^sin, 𝒟_train^√()}, and randomize the sequence of tasks in our train samples. For example, a randomly chosen segment of the training data may exhibit a task order such as (+, -, sin, -, ×, ×, √(), ...). We consider 10,000 training examples for each of the addition, subtraction, sine, and square root tasks and 3,000 for multiplication.
The model's performance, after training on our joint dataset 𝒟_train, is evaluated in both zero-shot and few-shot settings. These results are also compared with the performance of models that were trained separately on each dataset (𝒟_train^+, 𝒟_train^-, 𝒟_train^×, 𝒟_train^sin, 𝒟_train^√()), identical to those used to construct 𝒟_train. In the few-shot setting, each task is given examples from any of the five arithmetic tasks (not necessarily related to the test task under consideration) or prompt texts, followed by test queries specific to the task of interest. For further details on the few-shot prompting methods used, please refer to Section <ref>.
Table <ref> shows that joint training significantly enhances the zero-shot performance for the multiplication and square root tasks, yet it slightly reduces the performance for subtraction. Generally, few-shot prompting exhibits improved performance. Notably, the performance of few-shot prompting remains consistent regardless of whether the exemplars provided are from unrelated tasks or are task-specific. We propose that this consistency is due to our randomized task sequence during training, which presents the model with numerous instances where one task directly follows another, thus simulating few-shot prompting with different tasks. Furthermore, we observe that text prompting performs similarly to zero-shot. We conjecture that this is because the training data does not include text, so the model has never encountered text and a text prompt acts as a random prefix attached to our test query.
§ MIXING SHAKESPEARE WITH ARITHMETIC DATA
Until now, our focus was primarily on models trained exclusively on arithmetic tasks. However, in practice, large language models (LLMs) utilize a combination of arithmetic and text data for training. In this section, we broaden our scope by incorporating both addition samples and text into our pretraining data. We then evaluate the trained models with various few-shot prompts to analyze if the model is able to effectively identify the correct context.
Experimental Setup. We mix addition and text data in our experiment using the Shakespeare dataset <cit.>, which contains 1,115,394 tokens of text, combined with 10,000 plain addition examples (120,027 tokens) and 3,000 detailed scratchpad formatted addition examples (813,510 tokens). These counts (3,000 detailed scratchpad and 10,000 plain addition examples) are the defaults; we vary the number of each example type across training runs. The Shakespeare text is segmented into dialogue chunks, with a random number of addition examples inserted between them.
We use a character-level tokenizer with a vocabulary size of 80, containing all characters present in the dataset, including alphabets, digits, and certain symbols like +,= and \n.
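A minimal sketch of how such a mixed corpus can be assembled is shown below; the chunking and insertion distribution are illustrative assumptions rather than the exact preprocessing used.

import random

def mix_text_and_addition(dialogue_chunks, addition_examples, max_insert=3, seed=0):
    """Interleave Shakespeare dialogue chunks with random numbers of addition examples."""
    rng = random.Random(seed)
    remaining = list(addition_examples)
    rng.shuffle(remaining)
    pieces = []
    for chunk in dialogue_chunks:
        pieces.append(chunk)
        for _ in range(rng.randint(0, max_insert)):
            if remaining:
                pieces.append(remaining.pop())
    pieces.extend(remaining)          # append any leftover addition examples
    return "\n".join(pieces)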
Few-shot prompting.
Given the mixed nature (arithmetic and text) of our dataset, introducing relevant examples seems an effective strategy to prime the model to generate the desired type of output. To assess the performance of such few-shot (1/2/3-shot) prompting, we provide task-specific exemplars as illustrated in Figure <ref>. Plain addition formatted exemplars are used for testing plain addition inputs, while detailed scratchpad formatted exemplars are utilized for assessing performance on detailed scratchpad formatted inputs. Additionally, we experiment with demonstrating text (see Appendix <ref> for details) before querying addition, which we denote Text-prompt. For each of the 1/2/3-shot and text prompting settings, average performance is reported over a fixed set of exemplars. Standard deviations of these prompts are denoted by shaded areas in the plots. The term “few-shot” refers to the reported mean of all 1/2/3-shot prompting results.
Figure <ref> shows that few-shot prompting drives a clear improvement in performance, allowing plain addition to reach near-perfect accuracy with 40,000 training samples. Intriguingly, performance remains high on plain addition even with the inclusion of a text prompt, given a substantial number of addition examples. We hypothesize that this is due to the structure of our mixed dataset, where addition examples are interspersed within the Shakespeare data. With the incorporation of more addition examples, instances where addition examples directly follow Shakespeare text become more frequent, leading to a decrease in potential inconsistencies when text content is present during addition test queries.
Figure: Performance of the NanoGPT model trained exclusively on plain addition, but with an extended vocabulary including both addition symbols and alphabetic characters (vocabulary size = 80). Few-shot prompting with correct addition examples (1, 2, 3-shot) and with incorrect addition examples (noisy-prompt) leads to enhanced performance, while the use of text prompts results in degraded performance when the model is trained solely on addition.
To disentangle the effects of the textual content in the training data, we train a model strictly on plain addition, utilizing an enlarged vocabulary that also includes alphabetic characters, thereby enabling text prompting. (Note that the previous experimental settings on plain formatted additions used a vocabulary size of 13, which only includes 10 numerals and 3 symbols - “+”, “=”, “\n”.) We introduce a variant of few-shot prompting, termed noisy-prompt, which prompts the model with erroneous addition exemplars, e.g., 𝖠 + 𝖡 = 𝖢 with 𝖢 ≠ 𝖠+𝖡.
Figure <ref> shows that few-shot prompting contributes to performance enhancement even when the model is confined to training on a single plain addition task. Even in the presence of noisy prompting, simply providing the model with the format yields performance nearly identical to few-shot prompting, aligning with the result observed by <cit.>. Conversely, we notice that text prompts negatively influence performance when the model is trained only on addition. This finding reinforces our earlier observation in Figure <ref> that the advantageous impact of text prompts originates from the combined text and addition data.
§ FINE-TUNING, SCALING, AND PRETRAINING IN LARGER MODELS
This section focuses on bridging the gap between our experiments on NanoGPT and the more realistic setting of larger language models like GPT-2 and GPT-3. We begin by comparing the performance of NanoGPT and GPT-2 models when trained from random initialization. This comparison highlights the improved performance achieved with the larger model scale, especially in the zero-shot setting. Subsequently, we delve into the impact of tokenization methods and model pretraining in GPT-2 models. Our exploration reveals the crucial role of pretrained models and the consistent tokenization of numbers (achieved by introducing spaces) during the training phase for arithmetic tasks. Building on these findings, we proceed to fine-tune a pretrained GPT-3 model on various arithmetic tasks, employing different data formats.
Comparing NanoGPT and GPT-2.
To examine the impact of scale on arithmetic performance, we explore a larger GPT-2 model with 85 million parameters, featuring twice as many self-attention layers, heads, and embedding size compared to the previously used NanoGPT model. We train the GPT-2 model from scratch using character-level tokenization, jointly on text and addition tasks, adopting both plain and detailed scratchpad formats; an approach mirroring the setting in Section <ref>. The results depicted in Figure <ref> demonstrate that the larger model outperforms in both plain and detailed scratchpad evaluations. For a comprehensive analysis of GPT-2, including few-shot learning and the influence of text prompts, refer to Figure <ref> and Figure <ref>.
Going from character-level tokenization to BPE.
The transition to a GPT-2 setup necessitates several modifications. Firstly, we shift to OpenAI's Tiktoken BPE tokenizer, which is the default tokenizer for the pretrained GPT-2 model, featuring a vocabulary size of 50,257. We also examine two different training approaches: training the model from random initialization (scratch) and fine-tuning the pretrained model sourced from Huggingface. To ensure uniform digit tokenization, we alter the data formatting to include spaces between numbers, which circumvents potentially inconsistent tokenization of numbers by the Tiktoken tokenizer.
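The tokenization issue can be checked directly. Assuming the tiktoken package is available, the snippet below compares how the GPT-2 BPE tokenizer splits an addition example with and without spaces between digits.

import tiktoken

enc = tiktoken.get_encoding("gpt2")

for text in ["128+367=495", "1 2 8 + 3 6 7 = 4 9 5"]:
    tokens = enc.encode(text)
    # without spaces, multi-digit chunks may be merged into single tokens,
    # so the same digit is not always tokenized the same way across examples
    print(text, "->", len(tokens), "tokens:", [enc.decode([t]) for t in tokens])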
Figure <ref> shows that GPT-2 demonstrates high performance in addition tasks with both character-level tokenization and Tiktoken with spaces between digits. This aligns with the results by <cit.>, suggesting that character-level tokenization exhibits stronger numeracy capabilities than word- or subword-level methods. Furthermore, comparing models trained from scratch with those initialized from the pretrained model, we observe that fine-tuning a pretrained model yields better performance than training from scratch.
GPT-3 experiments: Supervised fine-tuning.
We extend our experiments to verify if our observations hold while fine-tuning larger pre-trained models. In the following, we consider three GPT-3 variants: Ada, Curie, and Davinci. Note that since we perform fine-tuning using the OpenAI APIs, by default only the completions are loss generating tokens. Therefore, these experiments are slightly different when compared to the previous settings. We fine-tune these models using the same four data formatting methods as our NanoGPT experiments: (i) plain formatting, (ii) reverse formatting, (iii) simplified scratchpad, and (iv) detailed scratchpad. These formats are identical to those from our NanoGPT experiments except for one aspect. We introduce spaces between numbers in plain and reverse formatting to ensure consistent tokenization.
Due to budget constraints, all experiments were conducted using a fine-tuning dataset of 1,000 examples, and models were trained for 4 epochs. Performance evaluation was carried out on 1,000 examples that were disjoint from the training dataset. Note that this training scale is significantly smaller than our experiments on NanoGPT, which employed 10,000 training examples for 5,000 iterations, with evaluations conducted on 10,000 test examples. However, given these models' extensive pretraining on large data corpora, this scale can be deemed rational.
The results for addition and subtraction tasks are presented in Table <ref> and Table <ref>, respectively. We observed that initiating with a pretrained GPT-3 model significantly improves performance compared to training NanoGPT or GPT-2 models from random initialization with only 1000 samples. This indicates the utility of leveraging pretrained models for improved arithmetic performance. Interestingly, while reverse formatting and simplified scratchpad formats improve addition performance, they adversely affect subtraction performance. This observation is consistent with our earlier finding depicted in Figure <ref>, wherein transitioning from one data format to another often results in lower performance compared to initiating training from random initialization. We postulate that this discrepancy may be due to the pretrained GPT-3 model's requirement to adapt to the reversed approach and “unlearn” its knowledge of plain formatting arithmetic, thereby introducing additional complexity. On the other hand, the detailed scratchpad method achieves excellent performance, albeit with increased training and inference costs due to higher token requirements.
For the more complex sine and square root tasks as shown in Table <ref>, we found that training with only 1000 samples is insufficient to generate exact answers (eps=0). The GPT-3 model, fine-tuned with 1,000 samples, performs worse than the NanoGPT model trained with 10,000 samples. Further experiments with larger training datasets are necessary for deeper insights and improved performance on these tasks.
It is worth mentioning that while few-shot prompting notably improves the performance of all three GPT-3 models, their zero-shot performance is quite poor (as shown in the leftmost column of the tables). However, post-training, few-shot prompting becomes less effective as OpenAI's fine-tuning process trains the model on individual prompts and desired completions serially, rather than in concatenation with multiple examples like in our NanoGPT experiments. Consequently, our comparisons primarily focus on the zero-shot performances of each task.
§ TOKEN EFFICIENCY ACROSS DATA FORMATS
r0.425
< g r a p h i c s >
Number of unique tokens required for training addition on NanoGPT using different data formatting methods. The number of unique tokens is calculated by multiplying the number of training samples by the number of tokens per sample. The results demonstrate that the reverse format is the most efficient in terms of token usage for model training, as the scratchpad methods, although more sample-efficient, require more tokens per sample.
Figure <ref> demonstrates that more detailed training data leads to improved sample efficiency. However, this comparison does not account for the cost associated with training and inference. To address this, we conduct a cost analysis based on the number of “unique” tokens encountered during training. Each data sample is treated as a set of unique tokens, and the number of unique tokens is derived by multiplying the number of samples with the tokens per sample. For instance, the mean token count for a single training example in a 3-digit addition task is 13 for plain format, 15 for reverse format, 64 for simplified scratchpad format, and 281 for detailed scratchpad format. Note that this calculation does not evaluate the uniqueness of tokens across samples: if the first sample is “112 + 129 = 241” and the second sample is “112 + 128 = 240”, we still consider that the model has seen 26 unique tokens even though only two tokens differ across samples. This approach ensures our cost calculation accounts for a vanilla implementation of attention with no additional optimizations <cit.>. Table <ref> presents the number of tokens required for prompting and completion in each data format, per example. Evidently, the detailed scratchpad method uses considerably more tokens compared to other techniques.
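A small sketch of how this cost metric is computed, using the mean per-sample token counts quoted above as fixed constants (illustrative values, not measured per example):
[language=python]
# Mean tokens per 3-digit addition example, per data format (from the text above).
TOKENS_PER_SAMPLE = {"plain": 13, "reverse": 15, "simplified": 64, "detailed": 281}

def unique_tokens(num_samples, fmt):
    # "Unique" tokens = samples x tokens per sample; overlap across samples is
    # deliberately not deduplicated, matching the discussion above.
    return num_samples * TOKENS_PER_SAMPLE[fmt]

for fmt in TOKENS_PER_SAMPLE:
    print(fmt, unique_tokens(5000, fmt))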
The result in Figure <ref> indicates that reverse formatting is the most token-efficient approach. While detailed scratchpad training is more sample-efficient, it necessitates a larger number of tokens per sample, both during training and inference. Given that the inference cost for commercial models is determined by the number of tokens utilized per inference call (the sum of prompt and completion tokens), heavy use of models trained on detailed scratchpad formats may escalate overall costs. Furthermore, since the cost of a single forward pass grows quadratically with the number of tokens, this is important to consider. Therefore, for practical usage, it is crucial to evaluate both the number of samples needed to achieve the desired performance and the actual token demands during training and inference.
§ LENGTH GENERALIZATION
In this section, we present results from experiments conducted to assess the model's ability to generalize across different digit lengths. Initially, we exclude training examples featuring 2-digit operands from the 10,000-sample addition dataset, yielding a reduced dataset of 7,655 samples consisting solely of 1- or 3-digit operands. The model is trained with the reverse format and its performance is evaluated on a test dataset containing 100 random samples of 1-digit, 2-digit, 3-digit, and 4-digit additions. The results in Figure <ref> demonstrate that the NanoGPT model is incapable of performing 2-digit and 4-digit additions. This suggests that the model needs exposure to all digit combinations to perform accurate calculations, and that it lacks generalization capabilities for unseen digit lengths.
Additionally, we investigate the model's ability to extrapolate to larger digit lengths. The model is trained on plain-formatted additions of up to 7 digits (each digit length comprises 16,650 samples, except 1-digit addition, which is trained on 100 samples). Its ability to add 8-digit numbers is then put to the test. The results in Figure <ref> show that the model is unable to generalize to a greater number of digits beyond what it has been trained on. Similarly, when training the model on 10-digit binary numbers, it fails to generalize to 11-digit binary additions, further confirming its limited ability to handle unseen digit lengths.
We further explore the impact of detailed scratchpad formatting. The model, trained on additions of up to 3 digits, struggles to generalize to 4-digit additions. Notably, it randomly drops a single digit from the 4-digit number, erroneously perceiving it as a 3-digit number. We illustrate this difficulty in Figure <ref> through multiple detailed error cases, ranging from instances in which only the test query is provided (Case 1) to scenarios where all intermediate steps are provided except the final answer (Case 5). The prompts are highlighted in light grey and the responses generated by our trained NanoGPT model are highlighted in light green. These cases emphasize the model's shortcomings in accurately managing larger digit lengths.
[breakable]Examples for length generalization prompts
Results obtained by prompting the NanoGPT model with larger digits than those it was trained on. The model is trained using detailed scratchpad formats with 3-digit numbers. We evaluate its performance on 4-digit numbers, with varying levels of provided information. The prompt input is highlighted in a light blue box, while the model's output is highlighted in a light green box.
Case 1: Just asking the question
[language=markdown]
Input:
8465+3541
Target:
(*@<scratch>@*)
(*@[8,4,6] has 3 digits.@*)
(*@[3,5,1] has 3 digits.@*)
(*@[8,4,6] + [3,5,1] , A=[] , C=0 , 6+1+0=7 , A->7 , C->0@*)
(*@[8,4] + [3,5] , A=[7] , C=0 , 4+5+0=9 , A->9 , C->0@*)
(*@[8] + [3] , A=[9,7] , C=0 , 8+3+0=11 , A->1 , C->1@*)
(*@[] + [] , A=[1,9,7] C=1 , END@*)
(*@</scratch>@*)
(*@1 1 9 7@*)
Case 2: Giving number of digits
[language=markdown]
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
(*@[8,4,6] + [3,5,4] , A=[] , C=0 , 6+4+0=10 , A->0 , C->1@*)
(*@[8,4] + [3,5] , A=[0] , C=1 , 4+5+1=10 , A->0 , C->1@*)
(*@[8] + [3] , A=[0,0] , C=1 , 8+3+1=12 , A->2 , C->1@*)
(*@[] + [] , A=[2,0,0] C=1 , END@*)
(*@</scratch>@*)
(*@1 2 0 0@*)
Case 3: Giving 1 intermediate steps
[language=markdown]
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1] , A=[] , C=0 , 5+1+0=6 , A->6 , C->0
(*@[8,4] + [3,5] , A=[6] , C=0 , 4+5+0=9 , A->9 , C->0@*)
(*@[8] + [3] , A=[9,6] , C=0 , 8+3+0=11 , A->1 , C->1@*)
(*@[] + [] , A=[1,9,6] C=1 , END@*)
(*@</scratch>@*)
(*@1 1 9 6@*)
Case 4: Giving all but one intermediate steps
[language=markdown]
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1] , A=[] , C=0 , 5+1+0=6 , A->6 , C->0
[8,4,6] + [3,5,4] , A=[6] , C=0 , 6+4+0=10 , A->0 , C->1
[8,4] + [3,5] , A=[0,6] , C=1 , 4+5+1=10 , A->0 , C->1
[8] + [3] , A=[0,0,6] , C=1 , 8+3+1=12 , A->2 , C->1
(*@[] + [] , A=[2,0,6] C=1 END@*)
(*@</scratch>@*)
(*@1 0 0 6@*)
Case 5: Giving all intermediate steps, all it has to do is aggregate the answer
[language=markdown]
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1] , A=[] , C=0 , 5+1+0=6 , A->6 , C->0
[8,4,6] + [3,5,4] , A=[6] , C=0 , 6+4+0=10 , A->0 , C->1
[8,4] + [3,5] , A=[0,6] , C=1 , 4+5+1=10 , A->0 , C->1
[8] + [3] , A=[0,0,6] , C=1 , 8+3+1=12 , A->2 , C->1
[] + [] , A=[2,0,0,6] C=1 END
</scratch>
(*@1 2 0 6@*)
Case 6: Giving 1 in-context example
[language=markdown]
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1] , A=[] , C=0 , 5+1+0=6 , A->6 , C->0
[8,4,6] + [3,5,4] , A=[6] , C=0 , 6+4+0=10 , A->0 , C->1
[8,4] + [3,5] , A=[0,6] , C=1 , 4+5+1=10 , A->0 , C->1
[8] + [3] , A=[0,0,6] , C=1 , 8+3+1=12 , A->2 , C->1
[] + [] , A=[2,0,0,6] C=1 , END
</scratch>
1 2 0 0 6
Input:
1946+3598
Target:
(*@<scratch>@*)
(*@[1,9,4] has 3 digits.@*)
(*@[3,5,8] has 3 digits.@*)
(*@[1,9,4] + [3,5,8] , A=[] , C=0 , 4+8+0=12 , A->2 , C->1@*)
(*@[1,9] + [3,5] , A=[2] , C=1 , 9+5+1=15 , A->5 , C->1@*)
(*@[1] + [3] , A=[5,2] , C=1 , 1+3+1=5 , A->5 , C->0@*)
(*@[] + [] , A=[5,5,2] C=0 , END@*)
(*@</scratch>@*)
(*@5 5 2@*)
Case 7: Giving 1 In-context example, and all intermediate steps
[language=markdown]
Input:
8465+3541
Target:
<scratch>
[8,4,6,5] has 4 digits.
[3,5,4,1] has 4 digits.
[8,4,6,5] + [3,5,4,1] , A=[] , C=0 , 5+1+0=6 , A->6 , C->0
[8,4,6] + [3,5,4] , A=[6] , C=0 , 6+4+0=10 , A->0 , C->1
[8,4] + [3,5] , A=[0,6] , C=1 , 4+5+1=10 , A->0 , C->1
[8] + [3] , A=[0,0,6] , C=1 , 8+3+1=12 , A->2 , C->1
[] + [] , A=[2,0,0,6] C=1 , END
</scratch>
1 2 0 0 6
Input:
1946+3598
Target:
<scratch>
[1,9,4,6] has 4 digits.
[3,5,9,8] has 4 digits.
[1,9,4,6] + [3,5,9,8] , A=[] , C=0 , 6+8+0=14 , A->4 , C->1
[1,9,4] + [3,5,9] , A=[4] , C=1 , 4+9+1=14 , A->4 , C->1
[1,9] + [3,5] , A=[4,4] , C=1 , 9+5+1=15 , A->5 , C->1
[1] + [3] , A=[5,4,4] , C=1 , 1+3+1=5 , A->5 , C->0
[] + [] , A=[5,5,4,4] C=0 , END
</scratch>
(*@5 5 4@*)
type=figure
figureExample results on the model's output when prompted with a larger number of digits than those it was trained on.
§ LIMITATIONS
Length generalization. In our experiments, we did not observe any instances where the model could predict beyond the number of digits it had been trained on (see Section <ref>). This finding is consistent with previous literature that suggests length generalization is a challenging task. For instance, <cit.> reported similar difficulties and proposed approaches such as relative positional encodings. <cit.> suggests that models can only perform out-of-distribution tasks by combining fine-tuning, prompting, and scratchpad techniques. Nonetheless, there have been cases where length generalization was observed. <cit.> demonstrated length generalization but only for models with more than 10^8 parameters.
Model/Data scale. Due to the smaller scale of our experiments, we were able to thoroughly examine the impact of individual components on the model's arithmetic learning capabilities. Our model was limited to a GPT-type decoder-only architecture, primarily focusing on character-level tokenization. Although we have obtained some preliminary results on scaling up and incorporating BPE-based tokenization, it remains uncertain if all our findings can be generalized to the scale of LLMs being used in practice today.
Beyond elementary arithmetic. We choose to analyze simple arithmetic operations in order to carefully isolate factors that contribute to emergence. While the existing literature has already demonstrated the emergence of complicated abilities in practice, our work seeks to provide a better understanding of this behavior.
§ CONCLUSION
In this work, we examine the problem of teaching small randomly initialized transformers arithmetic operations and elementary mathematical functions using the next-token prediction objective. We carefully ablate different aspects of the training data so as to isolate the factors that contribute to the emergence of arithmetic capabilities. Our results reveal that traditional training data is sub-optimal for learning arithmetic, and training on detailed, instructive data with intermediate steps or even simply reversing the output improves accuracy and sample complexity. We consider both scenarios with only arithmetic data as well as those with text data, and comprehensively analyze the effect of few-shot prompting, pretraining, and model scale. We find that while detailed, chain-of-thought style data improves sample complexity, it may not be efficient in terms of training and inference costs since it requires training with much more tokens. Furthermore, we find that while the model generalizes to unseen examples of the same number of digits, the problem of length generalization is quite difficult. We attribute this to the model's inability to truly “learn” the underlying arithmetic operation in all generality. It remains an open problem how to curate the training data to ensure that the model learns a particular algorithm as opposed to just learning an approximate function map. It is also unclear what the correct way to learn multiple operations is. It seems plausible that learning them in increasing order of complexity is beneficial if one can circumvent the problem of catastrophic forgetting.
Our findings emphasize the significance of high-quality, instructive data for the emergence of arithmetic capabilities in transformers. We anticipate this research will contribute to a more nuanced understanding of the mechanisms by which transformers acquire arithmetic operations.
icml2023
tocsectionAppendix
PART:
Appendix
§ PROOFS
Here, we present the proofs of Lemma <ref> and <ref>.
*
We begin by assuming, for contradiction, that there exists an algorithm that does not have access to all digits of A and B and still outputs C=A+B correctly for all n-digit numbers A, B. Without loss of generality, say the algorithm does not have access to the k-th digit of A, where k ∈ [n] represents the position counting from the least significant digit. Then consider the example B=(10^n - 1) and A=000… A_k 00 … 0, where B is just the integer with n 9's and A is all 0's except for A_k in the k-th position. If A_k=0, then C_n+1=0, but if A_k=1, then C_n+1=1. Therefore, without access to the k-th digit of A, there exist examples on which the algorithm will surely make a mistake. By contradiction, such an algorithm cannot exist.
*
First note that the trivial algorithm for addition is exactly the proof of this Lemma. However, we present a more formal argument below for completeness. Let A, B, and C be n-digit numbers such that C = A + B. Define the digits of A, B, and C as A_i, B_i, and C_i, respectively, for i ∈ [n], counting from the least significant digit once again. Then, the addition can be performed using the following steps. First, C_i = (A_i + B_i + carry_i) mod 10, where carry_i is the carry-on from the addition of digits at position i-1. If there is no carry from the previous position, then carry_i = 0. The carry for the next position is then calculated as carry_i+1 = ⌊(A_i + B_i + carry_i)/10⌋.
Putting this together, the algorithm for addition can be described as follows:
Step 1: Set carry_1 = 0. Repeat for i = 1, …, n: {Step 2: Compute C_i = (A_i + B_i + carry_i) mod 10 and carry_i+1 = ⌊(A_i + B_i + carry_i)/10⌋. Step 3: Output C_i.}
It is easy to see that this algorithm computes the digits of the sum C correctly and requires only the individual digits at position i and the carry from the previous position. Therefore, this algorithm satisfies the conditions of the lemma.
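To make the procedure concrete, here is a direct Python transcription of Steps 1-3 (a sketch for exposition, not code from our training pipeline):
[language=python]
def add_digitwise(A_digits, B_digits):
    # Digits are given least-significant first; the carry is propagated exactly
    # as in Steps 1-3 above.
    assert len(A_digits) == len(B_digits)
    C_digits, carry = [], 0
    for a, b in zip(A_digits, B_digits):
        s = a + b + carry
        C_digits.append(s % 10)   # Step 2: C_i
        carry = s // 10           # Step 2: carry_{i+1}
    if carry:
        C_digits.append(carry)    # possible extra digit C_{n+1}
    return C_digits

# 396 + 262 = 658, digits listed from the least significant position
print(add_digitwise([6, 9, 3], [2, 6, 2]))  # [8, 5, 6]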
§ ADDITIONAL EXPERIMENTS
§.§ Zero-Padding and Symbol Wrapping
As discussed briefly in Section <ref>, we found a significant benefit to using padding for multi-digit addition. Throughout our experiments, we use the plain format without any such padding (denoted as “vanilla” below) as the default baseline representing the conventional data format used in training.
Nonetheless, we explore modifications to this plain format to enhance performance: zero-padding, and wrapping with a single symbol. Zero-padding ensures a fixed length for operands and the output. In the case of 3-digit addition, this means 3-digit operands and a 4-digit output. For example, `112+29=141' becomes `112+029=0141'. As shown in Table <ref>, this modification significantly improves model performance.
Next, we wrap each sample using the `$' symbol as in '$112+29=141 $'. We found this performs on par with zero-padding.
As a result, we adopt the `$' symbol as an efficient data delimiter, extending its use to the reverse format. Figure <ref> shows that `$'-wrapping also enhances the performance of the reverse format. Although the `$' delimiter improves the plain format, it still falls short of the reverse format's accuracy and sample efficiency.
We continue to maintain the original plain format as a baseline since it not only exemplifies conventional data but further emphasizes the need for improved data formatting to ensure efficient training. As such, for the reverse format, we have incorporated the `$' delimiter in our formatting modifications.
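A minimal sketch of the three plain-format variants compared above (function names are illustrative; the formatting code in our repository may differ):
[language=python]
def format_plain(a, b):
    return f"{a}+{b}={a + b}"

def format_zero_pad(a, b, width=3):
    # Fixed-width operands and a (width+1)-digit output, e.g. 112+029=0141
    return f"{a:0{width}d}+{b:0{width}d}={a + b:0{width + 1}d}"

def format_dollar_wrapped(a, b, reverse=False):
    # Wrap each sample in `$' delimiters; optionally reverse the output digits.
    out = str(a + b)[::-1] if reverse else str(a + b)
    return f"${a}+{b}={out}$"

print(format_plain(112, 29))                          # 112+29=141
print(format_zero_pad(112, 29))                       # 112+029=0141
print(format_dollar_wrapped(112, 129, reverse=True))  # $112+129=142$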
§.§ Low-Rank Matrix Completion
In our Low-Rank Matrix Completion experiment for the addition matrix (which is of rank-2), we employ an iterative algorithm proposed by <cit.>. This algorithm systematically searches for a 2 × 2 submatrix in which three entries are known and one entry is unknown. It then fills the unknown entry to ensure that the determinant of the 2 × 2 submatrix becomes zero, where the solution is known to be optimal. We present the full pseudo-code in Algorithm <ref>.
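A bare-bones sketch of that filling step is shown below: a single sweep over all 2 × 2 submatrices, without the ordering heuristics and stopping criteria of the full Algorithm <ref>. Here M is the partially observed matrix and known is a boolean mask of revealed entries; both names are illustrative.
[language=python]
import numpy as np

def complete_once(M, known):
    # Whenever a 2x2 submatrix has exactly one unknown entry, fill it so that
    # the determinant of the submatrix becomes zero.
    n, m = M.shape
    for i1 in range(n):
        for i2 in range(i1 + 1, n):
            for j1 in range(m):
                for j2 in range(j1 + 1, m):
                    idx = [(i1, j1), (i1, j2), (i2, j1), (i2, j2)]
                    missing = [p for p in idx if not known[p]]
                    if len(missing) != 1:
                        continue
                    r, c = missing[0]
                    rr, cc = i1 + i2 - r, j1 + j2 - c  # diagonal partner
                    if M[rr, cc] != 0:
                        # det = 0  =>  M[r,c] * M[rr,cc] = M[r,cc] * M[rr,c]
                        M[r, c] = M[r, cc] * M[rr, c] / M[rr, cc]
                        known[r, c] = True
    return M, known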
To assess the performance of the algorithm, we generate n × n addition matrices for various values of n (e.g., 20, 50, 100, 500). We vary the number of revealed entries, randomly sampling a sparse matrix where only a specified number of entries between n and n × n are known, while the remaining entries are set to zero. We repeat this process 100 times for each number of revealed entries, tracking the algorithm's success or failure in finding the solution. We calculate the average success rate across the trials and present the success probabilities in Figure <ref>, where we observe a sharp phase transition when (n) entries are observed, as expected.
§.§ Prompting with Text
To extend on the few-shot prompting experiments from Section <ref>, we also evaluate the effect of prompting the model with pure-text prompts. If few-shot prompting with addition samples improves accuracy through in-context learning, we expect few-shot prompting with text to hurt accuracy since the text exemplars are out-of-context.
We use five different types of text exemplars: (i) Prompt1: a short text prompt that is not present in the Shakespeare dataset, (ii) Prompt2: a short text prompt extracted from within Shakespeare dataset, (iii) Prompt3: a longer form text prompt extracted from within the Shakespeare dataset, (iv) Prompt4: a prompt that includes numbers, and (v) Prompt5: a long text prompt that is not present in the Shakespeare dataset. More details on the text prompts can be found in Figure <ref>.
[breakable]Text prompts for few-shot experiments
Examples of the different text prompts used in the few-shot experiment. Each exemplar is separated by `---'.
[t]0.45
Prompt 1. Short, ∉ Shakespeare
[language=markdown]
et tu brute
—
hello, world
—
how are you doing?
—
agi is coming
—
boom! stability
Prompt 2. Short, ∈ Shakespeare
[language=markdown]
JULIET:
Romeo!
—
All:
Resolved. resolved.
—
VOLUMNIA:
Why, I pray you?
—
CORIOLANUS:
Nay! prithee, woman,–
—
MENENIUS:
I mean, thy general.
[t]0.55
Prompt 3. Long, ∈ Shakespeare
[language=markdown]
JULIET:
Romeo!
ROMEO:
My dear?
—
MENENIUS:
This is good news:
I will go meet the ladies. This Volumnia
Is worth of consuls, senators, patricians,
—
LADY ANNE:
Foul devil, for God's sake, hence, and trouble us not;
For thou hast made the happy earth thy hell,
Fill'd it with cursing cries and deep exclaims.
—
BUCKINGHAM:
I fear he will.
How now, Catesby, what says your lord?
—
CATESBY:
Bad news, my lord: Ely is fled to Richmond;
And Buckingham, back'd with the hardy Welshmen,
Is in the field, and still his power increaseth.
Prompt 4. Has number, ∉ Shakespeare
[language=markdown]
I go 16-12
That's the code to my heart, ah
I go 1-6-1-2
Star
—
Like a river flows 17-23
Surely to the sea 15-22
Darling, so it goes 46-92
Some things are meant to be
—
I got my first real 6-string
Bought it at the five and dime
Played it 'til my fingers bled
Was the summer of '69
—
I think someday I might just 5-3-2-1 get a real job
I spent half of my life 1-2-3 in a bus or on a flight
I'm getting off 17-36-8-2 the road and in a real job
—
Every time that 27-67-29 I look in the mirror
All these lines on my 1-3-92-5 face getting clearer
The past 45-5-3 is gone
Prompt 5. Long, ∉ Shakespeare
[language=markdown]
Is this the real life? Is this just fantasy? Caught in a landside, no escape from reality.
Open your eyes, look up to the skies and see.
I'm just a poor boy, I need no sympathy. Because I'm easy come, easy go,
Little high, little low,
Any way the wind blows doesn't really matter to me, to me.
—
It's my life
And it's now or never
I ain't gonna live forever
I just want to live while I'm alive
My heart is like an open highway
Like Frankie said, I did it my way
—
Destruction leads to a very rough road but it also breeds creation
And earthquakes are to a girl's guitar, they're just another good vibration
And tidal waves couldn't save the world from Californication
—
I want to stay
But I need to go
I want to be the best for you
But I just don't know what to do
'Cause baby, say I've cried for you
The time we have spent together
Riding through this English whether
—
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Vestibulum mattis in leo vel gravida.
Pellentesque libero elit, scelerisque varius vehicula a, hendrerit et tellus.
Proin convallis neque nisl, nec lobortis est scelerisque tincidunt.
Nunc venenatis auctor urna.
Class aptent taciti sociosqu ad litora torquent per conubia nostra.
type=figure
figureText prompt exemplars for few-shot experiments.
The results presented in Figure <ref> show notable variations in evaluation accuracy for addition, depending on the chosen text prompts. Longer text prompts (Prompt 5) typically result in a more significant decline in performance. With the exception of NanoGPT trained on plain addition, the result in Figure <ref> indicates that employing text prompts followed by test addition queries tends to have an adverse impact on the overall model performance, whereas incorporating relevant few-shot exemplars (1/2/3-shot) is beneficial. This aligns well with our intuition on the benefits of in-context learning.
§.§ Analyzing the results on Sine/Sqrt
Since sine and sqrt are arguably more complicated functions than the remaining arithmetic tasks, we decided to more carefully analyze their performance. As shown in Figure <ref>, sin shows excellent performance across all data formats around sin(x) = 0. We conjecture that this is because sin(x) ≈ x for x ≈ 0, which is easy to learn. We also note that accuracy once again improves close to ±1 potentially for similar reasons.
§ EXPERIMENTAL SETUP
In this section, we summarize the datasets, models, and hyperparameters used for our experiments. All of our experiments on NanoGPT and GPT-2 models are run using PyTorch 2.1 and CUDA 11.7 on NVIDIA 2080 Ti and 3090 GPUs. Detailed dependencies are provided in our GitHub repository[<https://github.com/lee-ny/teaching_arithmetic>].
§.§ Dataset
In this section, we explain the details of the datasets used for our experiments. For arithmetic tasks, we construct our own datasets as described below while we use the standard shakespeare <cit.> dataset for text.
Arithmetic Tasks
As mentioned above, for all arithmetic tasks, we prepare our own datasets. We refer to the training dataset for a binary operator f(·) as 𝒟_train = {((x^1_i, x^2_i), y_i)}_i=1^N, where y_i = f(x^1_i, x^2_i). Similarly, the test dataset 𝒟_test is constructed by randomly sampling pairs of operands that do not appear in 𝒟_train.
During both training and inference, we then apply different formatting techniques (see Section <ref>), to construct the final sequence that is input to the model.
We would like to repeat that both the careful choice of samples in the training dataset as well as their formatting play a crucial role in the final performance of the model.
Text
For text data, we use the Shakespeare dataset introduced by <cit.>, originally featured in the blog post “The Unreasonable Effectiveness of Recurrent Neural Networks”. It consists of 40,000 lines of dialogue carefully curated from William Shakespeare's plays. The dataset comprises a total of 1,115,394 characters and 64 unique tokens (when using the character-level tokenizer that we employ in all NanoGPT experiments).
§.§.§ Data Balancing
As mentioned in Section <ref>, we carefully sample our data to ensure that it is “balanced” with respect to the number of carries and the number of digits. As mentioned earlier, sampling the operands uniformly at random would lead to an extremely skewed dataset. To avoid this, we (i) balance digits by sampling lower-digit numbers with higher weights and (ii) balance carry-ons by sampling such that we have an equal number of examples with 0, 1, 2, and 3 carry-on operations.
Specifically, we create a balanced dataset of 10,000 samples. This dataset includes all 100 1-digit additions and a random sampling of 900 2-digit additions (including both (2+1) and (1+2) digit additions) and 9,000 3-digit additions. For the 3-digit addition samples, we employ rejection sampling to ensure an equal distribution of carry-ons (0, 1, 2, or 3). For the test dataset, we uniformly sample 10,000 addition examples that do not overlap with the train dataset. Results in Figure <ref> and Table <ref> demonstrate a clear advantage of the employed data balancing methods.
For the train dataset, we follow a specific approach based on the number of examples. For sample sizes smaller than 10,000 (500, 1,000, 2,000, 3,000, 4,000, 5,000), we include all 1-digit additions and a proportionate number of 2-digit samples (for a total of 5,000 samples, we include 900 × 5,000/10,000 = 450 two-digit additions). The remaining samples are filled with 3-digit additions from the constructed train dataset of 10,000 samples. For sample sizes larger than 10,000 (20,000, 40,000), we include all examples from the 10,000-sample train dataset and then add additional samples as needed. Similar to before, we perform rejection sampling to maintain an equal number of carry operations. Table <ref>. provides detailed information on the number of samples with 1-digit, 2-digit, and 3-digit additions, as well as the number of carry-ons.
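A simplified sketch of the carry-balancing rejection sampler for the 3-digit portion of the dataset is given below (helper names are illustrative; digit balancing and the exact counts follow the description above):
[language=python]
import random

def count_carries(a, b):
    # Number of carry-on operations when adding a and b digit by digit.
    carries, carry = 0, 0
    while a > 0 or b > 0:
        carry = 1 if (a % 10 + b % 10 + carry) >= 10 else 0
        carries += carry
        a, b = a // 10, b // 10
    return carries

def sample_balanced_3digit(n_samples, seed=0):
    # Rejection-sample 3-digit additions so that carry counts 0-3 are equally frequent.
    rng = random.Random(seed)
    per_bucket = n_samples // 4
    buckets = {k: [] for k in range(4)}
    while any(len(v) < per_bucket for v in buckets.values()):
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        k = count_carries(a, b)
        if len(buckets[k]) < per_bucket:
            buckets[k].append((a, b))
    return [pair for v in buckets.values() for pair in v]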
For the other arithmetic operations (subtraction, multiplication, sine, and square root), we construct the train dataset using the following approach:
(i) For subtraction, we use the same pairs of operands that were used for addition.
(ii) For multiplication, we include all 100 cases of a 1-digit number multiplied by a 1-digit number. Additionally, we randomly sample multiplications involving operands of up to 2 digits.
(iii) For sine, we sample a random number in [-π/2, π/2] and truncate it to 4 decimal places.
(iv) For square root, we sample a random number between [1, 10] and truncate it to 4 decimal places.
For the test dataset, we sample 10,000 data points (7,000 for multiplication) that do not overlap with the train dataset.
§.§.§ Data Formatting
For each of the four formatting techniques, as applied to each arithmetic operation, we provide the details below. (i) Plain refers to the simplest formatting where we simply create a sequence as the mathematical representation of the corresponding operation (𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1=𝖢_3𝖢_2𝖢_1). For (ii) Reverse, we simply reverse the digits of the output so that they appear in increasing order from LSB to MSB ($𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1=𝖢_1𝖢_2𝖢_3$). (iii) Simplified Scratchpad and (iv) Detailed Scratchpad provide algorithmic reasoning steps like <cit.> so as to give the model more “information” per sample. Our intuition is that this approach nudges the model towards actually learning the algorithm of addition or subtraction rather than merely trying to fit the training examples. Refer to Appendix <ref> for detailed examples of data formatting for each arithmetic operation.
Addition We focus on additions of positive numbers up to 3 digits, for which the plain formatting looks like 𝖠_3𝖠_2𝖠_1+𝖡_3𝖡_2𝖡_1=𝖢_3𝖢_2𝖢_1.
For experiments on comparing data sampling presented in Figure <ref>, we pad the two operands and the output with zero, to be of length 3 and 4 respectively. For all other experiments, we do not utilize zero-padding.
For Scratchpad-based methods (iii, iv),
we provide the digit-wise addition (denoted as A) and carry-on (denoted as C) information for intermediate steps from the least significant bit (LSB) to the most significant bit (MSB).
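For concreteness, a minimal generator for the simplified-scratchpad target of (iii) is sketched below (assuming positive operands; the detailed scratchpad of (iv) additionally spells out the list-slicing steps shown in Appendix <ref>):
[language=python]
def simplified_scratchpad(a, b):
    # One "A->digit , C->carry" line per position from LSB to MSB,
    # followed by the final answer, e.g. for 922+244:
    #   A->6 , C->0 / A->6 , C->0 / A->1 , C->1. / 1166
    lines, carry = [], 0
    x, y = a, b
    while x > 0 or y > 0:
        s = x % 10 + y % 10 + carry
        carry = s // 10
        lines.append(f"A->{s % 10} , C->{carry}")
        x, y = x // 10, y // 10
    lines[-1] += "."          # the last intermediate line ends with a period
    lines.append(str(a + b))  # final answer, including a leading carry if any
    return "\n".join(lines)

print(simplified_scratchpad(922, 244))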
Subtraction We consider subtraction of positive numbers up to 3 digits, written as 𝖠_3𝖠_2𝖠_1 - 𝖡_3𝖡_2𝖡_1 = 𝖢_3𝖢_2𝖢_1 for plain formatting.
As with addition, Scratchpad-based methods (iii, iv), present the intermediate steps of digit-wise subtraction and carry-ons[As explained in Section <ref>, we use the term “carry-on" to refer to the “borrow" operation]. These steps are performed from the least significant bit (LSB) to the most significant bit (MSB). If the final result after computing all the digit-wise subtractions is negative, we subtract the number in the most significant bit (MSB) position multiplied by 10 to the power of (number of digits in the output - 1) from the remaining digits in the output. In Section <ref>, we present an alternative version of the detailed scratchpad formatting for subtraction.
Multiplication We consider multiplication of positive numbers only up to 2-digits. Examples with (i) plain formatting look like: 𝖠_2𝖠_1 * 𝖡_2𝖡_1 = 𝖢_4𝖢_3𝖢_2𝖢_1 while (ii) reverse is formatted as 𝖠_2𝖠_1 * 𝖡_2𝖡_1 = 𝖢_1𝖢_2𝖢_3𝖢_4. For (iv) detailed scratchpad method, we simplify each intermediate step by performing a series of multiplications between the first operand and each digit of the second operand, starting from the least significant bit (LSB) and moving towards the most significant bit (MSB). For each step, we multiply the result by an exponentiation of 10 corresponding to the relative digit position.
Sine We consider decimal numbers in the range of [-π/2, π/2], truncated to 4-digits of precision with (i) plain formatting: sin(𝖠_0.𝖠_1𝖠_2𝖠_3𝖠_4)=𝖡_0.𝖡_1𝖡_2𝖡_3𝖡_4. For (iv) detailed scratchpad, we include the individual steps of the Taylor series expansion for sine, which is represented as sin(x) = x - 1/3!x^3 + 1/5!x^5 - 1/7!x^7 + ⋯. It is important to note that these intermediate steps involve exponentiation, which may not be any easier to compute than the sine operation itself.
Square Root We consider decimal numbers in the range of [1, 10), truncated to 4 digits of precision, with (i) plain formatting: sqrt(𝖠_0.𝖠_1𝖠_2𝖠_3𝖠_4)=𝖡_0.𝖡_1𝖡_2𝖡_3𝖡_4. For (iv) detailed scratchpad, we present each step of Newton's method for computing the square root function. The iterative formula is given by x_n = 1/2 (x_n-1 + x/x_n-1), where x_0 is initialized as the floor of the square root value of the operand x. It is important to note that these intermediate steps involve a division operation, which can be as complex as the square root operation itself.
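The two iterative schemes can be sketched as follows (truncation to 4 decimal places at each step mirrors the formats above; exact rounding details in our data generator may differ slightly):
[language=python]
import math

def trunc4(v):
    # Truncate (not round) to 4 decimal places, as in the data formats above.
    return math.trunc(v * 1e4) / 1e4

def sine_taylor_steps(x, terms=5):
    # Partial sums of sin(x) = x - x^3/3! + x^5/5! - ...
    steps, acc = [], 0.0
    for n in range(terms):
        acc += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
        steps.append(trunc4(acc))
    return steps

def sqrt_newton_steps(x, iters=4):
    # Newton's iteration x_n = (x_{n-1} + x / x_{n-1}) / 2, starting from floor(sqrt(x)).
    xn = float(math.floor(math.sqrt(x)))
    steps = [xn]
    for _ in range(iters):
        xn = trunc4(0.5 * (xn + x / xn))
        steps.append(xn)
    return steps

print(sine_taylor_steps(1.0313))   # roughly [1.0313, 0.8484, 0.8581, 0.8578, 0.8578]
print(sqrt_newton_steps(7.1042))   # roughly [2.0, 2.776, 2.6675, 2.6653, 2.6653]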
§.§ Model
For all experiments, we use a decoder-only Transformer architecture. Specifically, we primarily use the NanoGPT model, a scaled-down variant of the GPT-2 model with half the number of self-attention layers, heads, and embedding dimension. Note that we use character-level tokenization instead of OpenAI's BPE tokenizer (Tiktoken) with a vocabulary size of 50,257, making the vocabulary size significantly smaller. We use a learnable absolute positional embedding initialized randomly, following the GPT-2 model. All results are generated using a temperature of 0.8.
In the case of arithmetic tasks performed on plain and reverse formatting, we set a context length of 256 for NanoGPT experiments. The length of a single train example falls within the range of 13 to 15, approximately. However, when conducting experiments on scratchpad formatting, we increase the context length to 1024. This adjustment allows us to accommodate more examples per batch. In the case of simplified scratchpad, the length of each train example is approximately 64, while the detailed scratchpad has a length of approximately 281. For GPT-2 experiments we fix the context length to 1024 for all experiments. See Table <ref> for details on model configuration.
For experiments on fine-tuning a pretrained large language model, we use OpenAI's GPT-3 model - Ada, Curie, and Davinci.
§.§ Hyperparameter Configurations
In this section, we provide a detailed overview of the hyperparameter configuration used in our experiments in Table <ref> and <ref>. To enhance memory efficiency and training speed, we employ flash attention. For most experiments, we utilize the bfloat16 data type. However, when working with Nvidia 2080 GPUs, which do not support bfloat16, we switch to float16. It is worth noting that we did not observe significant differences in training and evaluation performance between the two data types.
For the GPT-2 experimentation, we reduced the batch size to 8 to accommodate the GPU memory limitations. However, to mitigate the impact of the smaller batch size, we employed gradient accumulation steps. This approach involves taking multiple steps between gradient updates, effectively increasing the effective batch size to 64. For specific hyperparameter details, please refer to Table <ref>.
§ PROMPT EXAMPLES
In this section, we provide three examples of each formatting (plain, reverse, simplified scratchpad, detailed scratchpad) of arithmetic operations (+,-,×,sin,√()).
§.§ Addition
[breakable]Addition Examples
[t]0.35
Plain
[language=markdown]
266+738=(*@1004@*)
980+743=(*@1723@*)
41+34=(*@75@*)
Reverse
[language=markdown]
913+524=(*@1437$@*)226+598=(*@824$@*)
35+58=(*@93$@*)
Simplified Scratchpad[language=markdown]
Input:
922+244
Target:
(*@A->6 , C->0@*)
(*@A->6 , C->0@*)
(*@A->1 , C->1.@*)
(*@1166@*)
Input:
285+43
Target:
(*@A->8 , C->0@*)
(*@A->2 , C->1@*)
(*@A->3 , C->0.@*)
(*@328@*)
Input:
993+849
Target:
(*@A->2 , C->1@*)
(*@A->4 , C->1@*)
(*@A->8 , C->1.@*)
(*@1842@*)
[t]0.65Detailed Scratchpad[language=markdown]
Input:
396+262
Target:
(*@<scratch>@*)
(*@[3,9,6] has 3 digits.@*)
(*@[2,6,2] has 3 digits.@*)
(*@[3,9,6] + [2,6,2] , A=[] , C=0 , 6+2+0=8 , A->8 , C->0@*)
(*@[3,9] + [2,6] , A=[8] , C=0 , 9+6+0=15 , A->5 , C->1@*)
(*@[3] + [2] , A=[5,8] , C=1 , 3+2+1=6 , A->6 , C->0@*)
(*@[] + [] , A=[6,5,8] C=0 , END@*)
(*@</scratch>@*)
(*@6 5 8@*)
Input:
796+890
Target:
(*@<scratch>@*)
(*@[7,9,6] has 3 digits.@*)
(*@[8,9,0] has 3 digits.@*)
(*@[7,9,6] + [8,9,0] , A=[] , C=0 , 6+0+0=6 , A->6 , C->0@*)
(*@[7,9] + [8,9] , A=[6] , C=0 , 9+9+0=18 , A->8 , C->1@*)
(*@[7] + [8] , A=[8,6] , C=1 , 7+8+1=16 , A->6 , C->1@*)
(*@[] + [] , A=[6,8,6] C=1 , END@*)
(*@</scratch>@*)
(*@1 6 8 6@*)
Input:
788+989
Target:
(*@<scratch>@*)
(*@[7,8,8] has 3 digits.@*)
(*@[9,8,9] has 3 digits.@*)
(*@[7,8,8] + [9,8,9] , A=[] , C=0 , 8+9+0=17 , A->7 , C->1@*)
(*@[7,8] + [9,8] , A=[7] , C=1 , 8+8+1=17 , A->7 , C->1@*)
(*@[7] + [9] , A=[7,7] , C=1 , 7+9+1=17 , A->7 , C->1@*)
(*@[] + [] , A=[7,7,7] C=1 , END@*)
(*@</scratch>@*)
(*@1 7 7 7@*)
§.§ Subtraction
[breakable] Subtraction Examples
[t]0.35Plain[language=markdown]
266-738=(*@-472@*)
980-743=(*@237@*)
41-34=(*@7@*)
Reverse[language=markdown]
913-524=(*@983$@*)226-598=(*@273-$@*)
35-58=(*@32-$@*)
Simplified Scratchpad[language=markdown]
Input:
396-262
Target:
(*@A->4 , C->0@*)
(*@A->3 , C->0@*)
(*@A->1 , C->0@*)
(*@100+34=134.@*)
(*@134@*)
Input:
796-890
Target:
(*@A->6 , C->0@*)
(*@A->0 , C->0@*)
(*@A->-1 , C->-1@*)
(*@-100+6=-94.@*)
(*@-94@*)
Input:
788-989
Target:
(*@A->9 , C->-1@*)
(*@A->9 , C->-1@*)
(*@A->-3 , C->-1@*)
(*@-300+99=-201.@*)
(*@-201@*)
[escapeinside=||]markdown
||
[t]0.66Detailed Scratchpad[language=markdown]
Input:
396-262
Target:
(*@<scratch>@*)
(*@[3,9,6] has 3 digits.@*)
(*@[2,6,2] has 3 digits.@*)
(*@[3,9,6] - [2,6,2] , A=[] , C=0 , 6-2-0=4 , A->4 , C->0@*)
(*@[3,9] - [2,6] , A=[4] , C=0 , 9-6-0=3 , A->3 , C->0@*)
(*@[3] - [2] , A=[3,4] , C=0 , 3-2-0=1 , A->1 , C->0@*)
(*@[] - [] , A=[1,3,4]@*)
(*@100+34=134 , END@*)
(*@</scratch>@*)
(*@1 3 4@*)
Input:
796-890
Target:
(*@<scratch>@*)
(*@[7,9,6] has 3 digits.@*)
(*@[8,9,0] has 3 digits.@*)
(*@[7,9,6] - [8,9,0] , A=[] , C=0 , 6-0-0=6 , A->6 , C->0@*)
(*@[7,9] - [8,9] , A=[6] , C=0 , 9-9-0=0 , A->0 , C->0@*)
(*@[7] - [8] , A=[0,6] , C=0 , 7-8-0=-1 , A->-1 , C->-1@*)
(*@[] - [] , A=[-1,0,6]@*)
(*@</scratch>@*)
(*@-9 4@*)
Input:
788-989
Target:
(*@<scratch>@*)
(*@[7,8,8] has 3 digits.@*)
(*@[9,8,9] has 3 digits.@*)
(*@[7,8,8] - [9,8,9] , A=[] , C=0 , 8-9-0+10=9 , A->9 , C->-1@*)
(*@[7,8] - [9,8] , A=[9] , C=-1 , 8-8-1+10=9 , A->9 , C->-1@*)
(*@[7] - [9] , A=[9,9] , C=-1 , 7-9-1=-3 , A->-3 , C->-1@*)
(*@[] - [] , A=[-3,9,9]@*)
(*@-300+99=-201 , END@*)
(*@</scratch>@*)
(*@-2 0 1@*)
§.§ Multiplication
[breakable] Multiplication Examples
[t]0.22Plain[language=markdown]
5*32=(*@160@*)
66*76=(*@5016@*)
67*74=(*@4958@*)
Reverse[language=markdown]
5*32=(*@061$@*)66*76=(*@6105$@*)
67*74=(*@8594$@*)
[t]0.78Detailed Scratchpad[language=markdown]
Input:
22*52
Target:
(*@<scratch>@*)
(*@[2,2] has 2 digits.@*)
(*@[5,2] has 2 digits.@*)
(*@[2,2] * 2 , A=[4,4] , k=1 , B=[4,4] , C=0+44=44@*)
(*@[2,2] * 5 , A=[1,1,0] , k=10 , B=[1,1,0,0] , C=44+1100=1144 , END@*)
(*@</scratch>@*)
(*@1 1 4 4@*)
Input:
8*69
Target:
(*@<scratch>@*)
(*@[8] has 1 digits.@*)
(*@[6,9] has 2 digits.@*)
(*@[8] * 9 , A=[7,2] , k=1 , B=[7,2] , C=0+72=72@*)
(*@[8] * 6 , A=[4,8] , k=10 , B=[4,8,0] , C=72+480=552 , END@*)
(*@</scratch>@*)
(*@5 5 2@*)
Input:
52*34
Target:
(*@<scratch>@*)
(*@[5,2] has 2 digits.@*)
(*@[3,4] has 2 digits.@*)
(*@[5,2] * 4 , A=[2,0,8] , k=1 , B=[2,0,8] , C=0+208=208@*)
(*@[5,2] * 3 , A=[1,5,6] , k=10 , B=[1,5,6,0] , C=208+1560=1768 , END@*)
(*@</scratch>@*)
(*@1 7 6 8@*)
§.§ Sine
[breakable] Sine Examples
[t]0.33Plain[language=markdown]
sin(1.0313)=(*@0.8579@*)
sin(-0.6909)=(*@-0.6373@*)
sin(-0.5719)=(*@-0.5413@*)
[t]0.66Detailed Scratchpad[language=markdown]
Input:
sin(1.0313)
Target:
(*@<scratch>@*)
(*@x_0=1.0313@*)
(*@x_1: x_0 - 1/3! * (x3) , x_1=0.8484@*)
(*@x_2: x_1 + 1/5! * (x5) , x_2=0.8581@*)
(*@x_3: x_2 - 1/7! * (x7) , x_3=0.8578@*)
(*@x_4: x_3 + 1/9! * (x9) , x_4=0.8578 , END@*)
(*@</scratch>@*)
(*@0.8578@*)
Input:
sin(-0.6909)
Target:
(*@<scratch>@*)
(*@x_0=-0.6909@*)
(*@x_1: x_0 - 1/3! * (x3) , x_1=-0.636@*)
(*@x_2: x_1 + 1/5! * (x5) , x_2=-0.6374@*)
(*@x_3: x_2 - 1/7! * (x7) , x_3=-0.6374@*)
(*@x_4: x_3 + 1/9! * (x9) , x_4=-0.6375 , END@*)
(*@</scratch>@*)
(*@-0.6375@*)
Input:
sin(-0.5719)
Target:
(*@<scratch>@*)
(*@x_0=-0.5719@*)
(*@x_1: x_0 - 1/3! * (x3) , x_1=-0.5408@*)
(*@x_2: x_1 + 1/5! * (x5) , x_2=-0.5414@*)
(*@x_3: x_2 - 1/7! * (x7) , x_3=-0.5414@*)
(*@x_4: x_3 + 1/9! * (x9) , x_4=-0.5415 , END@*)
(*@</scratch>@*)
(*@-0.5415@*)
§.§ Square Root
[breakable] Square Root Examples
[t]0.33Plain[language=markdown]
sqrt(7.2726)=(*@2.6967@*)
sqrt(3.6224)=(*@1.9032@*)
sqrt(1.0895)=(*@1.0437@*)
[t]0.66Detailed Scratchpad[language=markdown]
Input:
sqrt(7.1042)
Target:
(*@<scratch>@*)
(*@x_0=2@*)
(*@x_1: 1/2*(2+7.1042/2)=2.776, x_1=2.776@*)
(*@x_2: 1/2*(2.776+7.1042/2.776)=2.6675, x_2=2.6675@*)
(*@x_3: 1/2*(2.6675+7.1042/2.6675)=2.6653, x_3=2.6653@*)
(*@x_4: 1/2*(2.6653+7.1042/2.6653)=2.6653, x_4=2.6653 , END@*)
(*@</scratch>@*)
(*@2.6653@*)
Input:
sqrt(6.2668)
Target:
(*@<scratch>@*)
(*@x_0=2@*)
(*@x_1: 1/2*(2+6.2668/2)=2.5667, x_1=2.5667@*)
(*@x_2: 1/2*(2.5667+6.2668/2.5667)=2.5041, x_2=2.5041@*)
(*@x_3: 1/2*(2.5041+6.2668/2.5041)=2.5033, x_3=2.5033@*)
(*@x_4: 1/2*(2.5033+6.2668/2.5033)=2.5033, x_4=2.5033 , END@*)
(*@</scratch>@*)
(*@2.5033@*)
Input:
sqrt(8.3216)
Target:
(*@<scratch>@*)
(*@x_0=2@*)
(*@x_1: 1/2*(2+8.3216/2)=3.0804, x_1=3.0804@*)
(*@x_2: 1/2*(3.0804+8.3216/3.0804)=2.8909, x_2=2.8909@*)
(*@x_3: 1/2*(2.8909+8.3216/2.8909)=2.8847, x_3=2.8847@*)
(*@x_4: 1/2*(2.8847+8.3216/2.8847)=2.8847, x_4=2.8847 , END@*)
(*@</scratch>@*)
(*@2.8847@*)
§.§ Noisy Simple Scratchpad
We provide one example for each case of adding noise in the simplified scratchpad experiments discussed in Section <ref>.
[breakable] Noisy Simple Scratchpad Examples
We provide one example for each case of adding noise in the simplified scratchpad experiments discussed in Section <ref>. The input prompt is highlighted in light blue, while the remaining part is highlighted in light green. We construct the dataset to have either correct or random digit-sum A and carry information C. For all cases, the final answer remains accurate.
Prompt:[language=markdown]
Input:
686+886
Target:
[t]0.25Correct A & C[language=markdown]
(*@A->2 , C->1@*)
(*@A->7 , C->1@*)
(*@A->5 , C->1.@*)
(*@1572@*)
[t]0.25Random C[language=markdown]
(*@A->2 , C->0@*)
(*@A->7 , C->0@*)
(*@A->5 , C->1.@*)
(*@1572@*)
[t]0.24Random A[language=markdown]
(*@A->0 , C->1@*)
(*@A->9 , C->1@*)
(*@A->9 , C->1.@*)
(*@1572@*)
[t]0.24Random A & C[language=markdown]
(*@A->8 , C->1@*)
(*@A->1 , C->0@*)
(*@A->2 , C->1.@*)
(*@1572@*)
§.§ Example data for GPT-3 fine-tuning
We provide an example from the training dataset consisting of one prompt-completion pair used for fine-tuning the GPT-3 model using OpenAI's API. The prompt is highlighted in light grey, while the completion is highlighted in light green. Note that for plain and reverse formatting, we include spacing between digits to ensure consistent tokenization of numbers. “###” is used as the stop sequence for generation.
§.§.§ Addition
[breakable] Addition Examples
[t]0.35Plain[language=markdown]
6 7 7 + 8 9 8 =(*@1 5 7 5###@*)
Reverse[language=markdown]
7 4 9 + 7 8 5 =(*@ 4 3 5 1###@*)
Simplified Scratchpad[language=markdown]
Input:
32+981
Target:
(*@A->3 , C->0@*)
(*@A->2 , C->1@*)
(*@A->0 , C->1.@*)
(*@1013###@*)
[t]0.66Detailed Scratchpad[language=markdown]
Input:
356+787
Target:
(*@<scratch>@*)
(*@[3,5,6] has 3 digits.@*)
(*@[7,8,7] has 3 digits.@*)
(*@[3,5,6] + [7,8,7] , A=[] , C=0 , 6+7+0=13 , A->3 , C->1@*)
(*@[3,5] + [7,8] , A=[3] , C=1 , 5+8+1=14 , A->4 , C->1@*)
(*@[3] + [7] , A=[4,3] , C=1 , 3+7+1=11 , A->1 , C->1@*)
(*@[] + [] , A=[1,4,3] C=1 , END@*)
(*@</scratch>@*)
(*@1 1 4 3###@*)
§.§.§ Subtraction
[breakable] Subtraction Examples
[t]0.35Plain[language=markdown]
2 0 4 - 5 0 1 =(*@ - 2 9 7###@*)
Reverse[language=markdown]
7 3 4 - 9 6 7 =(*@ 3 3 2 -###@*)
Simplified Scratchpad[language=markdown]
Input:
695-489
Target:
(*@A->6 , C->-1@*)
(*@A->0 , C->0@*)
(*@A->2 , C->0@*)
(*@200+6=206.@*)
(*@206###@*)
[t]0.66Detailed Scratchpad[language=markdown]
Input:
848-367
Target:
(*@<scratch>@*)
(*@[8,4,8] has 3 digits.[3,6,7] has 3 digits.@*)
(*@[8,4,8] - [3,6,7] , A=[] , C=0 , 8-7-0=1 , A->1 , C->0@*)
(*@[8,4] - [3,6] , A=[1] , C=0 , 4-6-0+10=8 , A->8 , C->-1@*)
(*@[8] - [3] , A=[8,1] , C=-1 , 8-3-1=4 , A->4 , C->0@*)
(*@[] - [] , A=[4,8,1]@*)
(*@400+81=481 , END@*)
(*@</scratch>@*)
(*@4 8 1###@*)
§.§.§ Sine
[breakable] Sine Examples
[t]0.35Plain[language=markdown]
sin(-0.8649)
(*@ -0.7611###@*)
[t]0.66Detailed Scratchpad[language=markdown]
Input:
sin(-1.3516)
Target:
(*@x_0=-1.3516@*)
(*@x_1: -1.3516 - 1/3! * (x*x*x) , x_1=-0.9401@*)
(*@x_2: -0.9401 + 1/5! * (x*x*x*x*x) , x_2=-0.9777@*)
(*@x_3: -0.9777 - 1/7! * (x*x*x*x*x*x*x) , x_3=-0.9761@*)
(*@x_4: -0.9761 + 1/9! * (x*x*x*x*x*x*x*x*x) , x_4=-0.9762 , END@*)
(*@</scratch>@*)
(*@-0.9762###@*)
§.§.§ Square Root
[breakable] Square Root Examples
[t]0.35Plain[language=markdown]
sqrt(1.2178)
(*@ 1.1035###@*)
[t]0.66Detailed Scratchpad[language=markdown]
Input:
sqrt(5.5808)
Target:
(*@<scratch>@*)
(*@x_0=2@*)
(*@x_1: 1/2*(2+5.5808/2)=2.3952, x_1=2.3952@*)
(*@x_2: 1/2*(2.3952+5.5808/2.3952)=2.3625, x_2=2.3625@*)
(*@x_3: 1/2*(2.3625+5.5808/2.3625)=2.3623, x_3=2.3623@*)
(*@x_4: 1/2*(2.3623+5.5808/2.3623)=2.3623, x_4=2.3623 , END@*)
(*@</scratch>@*)
(*@2.3623###@*)
|
http://arxiv.org/abs/2307.01510v1
|
20230704064629
|
Vibronic fine structure in the nitrogen 1s photoelectron spectra from Franck-Condon simulations II: Indoles
|
[
"Minrui Wei",
"Lu Zhang",
"Guangjun Tian",
"Weijie Hua"
] |
physics.chem-ph
|
[
"physics.chem-ph",
"physics.atm-clus",
"physics.comp-ph"
] |
1]Minrui Wei
1]Lu Zhang
2]Guangjun Tian
1]Weijie Hua^∗,
[1]MIIT Key Laboratory of Semiconductor Microstructure and Quantum Sensing, Department of Applied Physics, School of Science, Nanjing University of Science and Technology, 210094 Nanjing, China
[2]
Key Laboratory for Microstructural Material Physics of Hebei Province, School of Science, Yanshan University, 066004 Qinhuangdao, China
[ ]∗ E-mail: [email protected] (W. Hua)
Vibronic fine structure in the nitrogen 1s photoelectron spectra from Franck-Condon simulations II: Indoles
===========================================================================================================
The vibronic coupling effect in nitrogen 1s X-ray photoelectron spectra (XPS) was systematically studied for a family of 17 bicyclic indole molecules by combining Franck-Condon simulations (including the Duschinsky rotation effect) and density functional theory. The simulated vibrationally-resolved spectra of 4 molecules agree well with available experiments. Reliable predictions for this family further allowed us to summarize rules for spectral evolution in response to three types of common structural changes (side chain substitution, CH↔N replacement, and isomerization). Interestingly, vibronic properties of amine and imine nitrogen are clearly separated: they show negative and positive ΔZPE (zero-point vibration energy of the core-ionized with respect to the ground state), respectively, indicating flatter and steeper PESs induced by the N 1s ionization; amine N's show stronger mode mixing effects than imine N's; the 1s ionizations on two types of nitrogens led to distinct changes in local bond lengths and angles. The rules are useful for a basic understanding of vibronic coupling in this family, and the precise spectra are useful for future reference and data mining studies.
§ INTRODUCTION
X-ray photoelectron spectroscopy (XPS) is widely used to characterize molecular and material structures nowadays. The core-level binding energy (BE) is both element-selective and sensitive to local bonding. For gas molecules, high-resolution vibrationally-resolved XPS spectra provide richer information beyond binding energies: the fingerprint signatures reflect information about the potential energy surfaces (PESs) of the initial (ground) and final (core-ionized) states, as well as the electronic and nuclear dynamics during the core ionization process.<cit.>
The groundbreaking work of Kai Siegbahn<cit.> since the 1950s has significantly improved the precision of XPS [i.e., electron spectroscopy for chemical analysis (ESCA), as coined by him]. Since then, a vast amount of XPS data has been collected over the decades. The technique has enabled the characterization of the structures of numerous molecules and materials and has helped scientists understand molecular physics and guide material design. In such a characterization process, spectral interpretation is an essential final step that translates the detected numerical data into physical and chemical insights. Various experimental databases<cit.> were constructed to assist the interpretation process, the most famous being the National Institute of Standards and Technology (NIST) database.<cit.>
However, existing experimental XPS databases have two major drawbacks: variations in BEs among different experiments and the lack of profile data. Different experiments of the same compound reported discrepant BEs, sometimes over 1 eV.<cit.> The discrepancy mainly comes from the calibration process which can be highly arbitrary.<cit.> Meanwhile, most databases collect only BE values, without including the spectral profiles. Efforts were also made to build online databases,<cit.> where raw pictures of the original experimental spectra, with both the BE and profile information, were collected, which facilitated the comparison process for various different compounds. However, the available information is still limited, and possible inconsistency in different experiments can impede close analyses.
For these two reasons, constructing a theoretical library for high-resolution XPS is necessary. Data obtained on the same footing provide a fair basis for analyzing the physical rules behind them, especially among structurally similar systems. At the current stage, before generating large amounts of data, it is more important to guarantee that the data are reliable. We wish to validate and optimize the simulation procedure and investigate general rules for vibronic coupling based on a limited set of molecules with specific structural similarities. Our initial studies have shown good agreement with experiments by combining the full core hole (FCH) density functional theory (DFT) and Franck-Condon simulations.<cit.> This motivates us to extend the use of this protocol to a wider range of systems and gain more comprehensive insights into vibronic coupling within a family. Quantitative assessment of the impact of structural changes on molecular properties is meaningful for further understanding the structure-spectroscopy relation, which is insightful for future data mining studies.
Aromatic nitrogen-containing heterocyclic molecules (N-heterocycles)<cit.> serve as an ideal family for testing new theoretical methods,<cit.> which are also important building blocks for biomolecules,<cit.> high energy-density compounds,<cit.> nonlinear optical materials,<cit.> and pharmacologically active compounds.<cit.> In part I<cit.> of this series of papers, our calculations on pyrimidine have shown good agreement with the high-resolution gas-phase experiment,<cit.> accurate vibrationally-resolved N1s XPS spectra for a group of azine molecules were predicted and analyzed to understand the rules for vibronic coupling effects as influenced by consecutive replacement of the CH group with an N atom.
As a continuation, in this work (paper II) we further investigate the indole family. Indole has long been recognized as a star molecule in the field of biochemistry, being a structural motif for numerous drugs and natural products.<cit.> Structurally, azines are six-membered rings, while indoles are bicyclics fused from one six-membered and one five-membered ring. Besides, we also plan to present our predictions for five-membered ring compounds (for which there are fewer experimental data) in a separate future work. We hope the three papers will construct a complete picture for understanding the N1s vibronic fine structure of small aromatic N-heterocycles.
Figure <ref>(a) depicts the 17 selected common indole-derived molecules. Our choice of sample set covers three common types of structural changes: (1) isomerization, (2) side chain substitution, and (3) CH↔N replacement. We chose 6 indole isomers in this study, where three contain imine (=N–) and three contain amine (–N<) nitrogens [Fig. <ref>(b,c)]. This sample set is to examine the structure-spectroscopy relation in response to the local bonding change at the ionization center N^* (N_i^* or N_a^* for imine or amine nitrogen). Significant differences between the two types were found for N1s binding energies, X-ray absorption (XAS) spectra, and resonant inelastic X-ray scattering (RIXS) spectra of small molecules,<cit.> large DNA double strands,<cit.> and two-dimensional material g-C_3N_4.<cit.> We will investigate the XPS fine structures and vibronic properties as related to the two types.
Side chain substitution can push the electrons to or pull the electrons off the indole ring. Two substituted derivatives with available experimental spectra were selected in our study: one -CH_3 (3-methylindole) and one -CHO (3-formylindole) substitute. -CH_3 is a weak electron-donating group (EDG) and -CHO is a moderate electron-withdrawing group (EWG). The two molecules are important precursors for the synthesis of some biologically or pharmacologically active compounds.<cit.>
Meanwhile, 9 molecules were chosen to demonstrate the different degrees (1, 2, 3) of CH↔N replacement. Various azaindoles represent an important class of organic molecules with one N substitution on the indole ring, which are fine chemical intermediates commonly used in medicine,<cit.> biological materials,<cit.> and natural products.<cit.> A total of 5 azaindoles were selected, among which benzimidazole (also named 3-azaindole) is a special one where two nitrogens both locate in the five-membered ring (cf. the rest, one nitrogen on one ring). Besides, 2 molecules each for double (7-azaindazole and pyrazolo[1,5-a]pyrimidine) and triple (purine and adenine) CH↔N replacements were selected. Structurally, an adenine is simply an -NH_2 substituted purine. Adenine is also one of the building block molecules for DNA, responsible for the radiation loss and photolysis properties.
Experimental N1s XPS spectra, to our knowledge, are only available for indole,<cit.> 3-methylindole,<cit.> 3-formylindole,<cit.> and adenine<cit.> among all systems under study. The spectral resolutions are not always high enough to see clear vibronic structures (especially for indole and 3-formylindole<cit.>), but the peak asymmetry is obvious for every molecule, indicating significant influences of the vibronic coupling effects. On the other hand, theoretical XPS studies were done for selected systems, but limited to pure vertical excitations and electronic-only calculations. For example, recently, He et al.<cit.> simulated the N1s XPS spectra of indole at the MP2 level with a relatively large half-width-at-half-maximum (hwhm) of 0.58 eV and achieved a generally similar profile to the experiment. Plekan et al.<cit.> and Wang et al.<cit.> simulated the XPS spectra of adenine by DFT with relatively large shifts in absolute binding energies (-1.32<cit.> and 1.65 eV<cit.>). The goal of this study is to provide high-precision vibrationally-resolved XPS spectra, based on which to carry out systematic analyses (dominant vibronic transitions, active vibrational modes, contributions of 0-n transitions, structural changes induced by core ionization, mode mixing, etc.) and yield general rules on the vibronic coupling.
§ COMPUTATIONAL METHODS
Details were presented in paper I.<cit.> Briefly, all electronic structure calculations were first performed by using Gamess-US,<cit.> where the DFT method with the B3LYP functional<cit.> was employed. Then, Franck-Condon simulations with the inclusion of the Duschinsky rotation (DR)<cit.> effects were used to generate the vibronic fine structures by using the modified<cit.> DynaVib package.<cit.> All ground state (GS) calculations were performed by using the cc-pVTZ basis set.<cit.> A consistent basis set was adopted for the core-ionized state and
a double basis set technique<cit.> was used. Vertical and adiabatic ionic potentials (IPs), and the 0-0 vibrational transition energy were computed respectively via<cit.>
I^vert = E_FCH|_𝐦𝐢𝐧 𝐆𝐒 - E_GS|_𝐦𝐢𝐧 𝐆𝐒 + δ_rel,
I^ad = E_FCH|_𝐦𝐢𝐧 𝐅𝐂𝐇 - E_GS|_𝐦𝐢𝐧 𝐆𝐒 + δ_rel,
E_00^DR = I^ad + Δε_0.
Here E_GS (E_FCH) denotes the total energy of the GS (FCH) state, and min GS (min FCH) represents the optimized geometry of the GS (FCH) state. Δε_0 is the difference of the zero-point vibrational energies (ZPE) between the FCH (ε_0^FCH) and GS (ε_0^GS) states, i.e.,
Δε_0 = ε_0^FCH - ε_0^GS.
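In practice these relations are simple bookkeeping of total energies and ZPEs. A minimal sketch is given below; the energies (hartree) and ZPEs (eV) are placeholders of roughly the right order of magnitude, not outputs of the present calculations, and the relativistic correction δ_rel is passed in as a number.

    # Minimal sketch of the relations above: vertical/adiabatic IPs and the 0-0 energy.
    # All numbers are placeholders, not results from this work.
    HARTREE_TO_EV = 27.211386

    def ionization_energies(E_GS_minGS, E_FCH_minGS, E_FCH_minFCH,
                            zpe_GS_eV, zpe_FCH_eV, delta_rel_eV=0.0):
        """Return (I_vert, I_ad, E_00) in eV from total energies in hartree."""
        I_vert = (E_FCH_minGS - E_GS_minGS) * HARTREE_TO_EV + delta_rel_eV
        I_ad = (E_FCH_minFCH - E_GS_minGS) * HARTREE_TO_EV + delta_rel_eV
        d_eps0 = zpe_FCH_eV - zpe_GS_eV      # Delta epsilon_0 of the last equation
        return I_vert, I_ad, I_ad + d_eps0   # E_00 = I_ad + Delta epsilon_0

    # Placeholder values only:
    print(ionization_energies(-363.80, -348.90, -348.92, 3.95, 3.93, 0.10))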
Stick spectra were convoluted by a Lorentzian line shape and slightly different hwhm's were set to better compare with different experiments:<cit.> 0.03 eV, indole, 3-methylindole, and 3-formylindole; 0.05 eV, others. For each molecule with multiple nitrogens, its total spectrum was calculated simply by summing the individual atom-specific contributions. Besides, major stick transitions were analyzed, where thresholds for FCFs, F≥0.02 (indole, 3-methylindole, and 3-formylindole) or F≥0.04 (the rest) were used to filter out weak transitions. All major assignments of each individual molecule are provided in the Supplemental Material<cit.> (Figs. S1–S17; emphasized with colored vertical lines). For those with available experiments (indole,<cit.> 3-methylindole,<cit.> 3-formylindole,<cit.> and adenine<cit.>), theoretical spectra were shifted by 0.25, 0.32, 0.22, and 0.44 eV, respectively, to better compare with the experiments. No ad hoc shift was applied for the rest.
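For illustration, the broadening step itself amounts to a short convolution loop. The sketch below uses a placeholder stick list; the hwhm value and the FCF threshold are of the kind quoted above.

    import numpy as np

    def lorentzian_broadening(sticks, hwhm=0.05, emin=403.0, emax=408.0, npts=2000):
        """Convolute stick transitions [(energy_eV, FCF), ...] with a Lorentzian."""
        grid = np.linspace(emin, emax, npts)
        spectrum = np.zeros_like(grid)
        for e0, fcf in sticks:
            spectrum += fcf * (hwhm / np.pi) / ((grid - e0) ** 2 + hwhm ** 2)
        return grid, spectrum

    # Placeholder sticks; a real calculation supplies a long list of FC transitions.
    sticks = [(405.55, 0.30), (405.68, 0.22), (405.81, 0.10), (405.95, 0.01)]
    major = [(e, f) for e, f in sticks if f >= 0.02]       # threshold used for assignments
    grid, spec = lorentzian_broadening(sticks, hwhm=0.03)  # hwhm as for the indole family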
§ RESULTS
§.§ Statistics on vertical and adiabatic IPs
Table <ref> lists the theoretical vertical and adiabatic N1s IPs for the 33 nitrogen sites of all 17 molecules. The data is further visualized in Fig. <ref>, where a clear separation of 15 amine and 18 imine nitrogens is vividly illustrated. All theoretical vertical IPs are in a range of 403.3–407.2 eV spanning over 3.9 eV. All amine N's locate at higher (405.5–407.2 eV) and all imine N's at lower (403.3–405.2 eV) energy regions, with a gap of ca. 0.3 eV. Comparisons were made with available experiments<cit.> for selected molecules, and the deviations of vertical IPs are from -0.4 to +0.1 eV. The deviations are consistent with those found in ΔKohn-Sham calculations of other molecules.<cit.>
We also compared the adiabatic IPs with experiments,<cit.> which show larger deviations (-0.8 to -0.2 eV) than vertical calculations. Adiabatic IPs cover a range of 3.7 eV (403.1–406.8 eV), with the amine and imine N's being in the higher- (405.1–406.8 eV) and lower-energy (403.1–405.0 eV) regions, respectively. The gap between the two regions narrows to ca. 0.1 eV owing to the vibronic coupling effects.
§.§ Statistics on Δ I
For each N1s ionization, the adiabatic IP is about 0.2–0.5 eV lower than the corresponding vertical IP. By combining Eqs. (<ref>)-(<ref>), their difference can be computed by the total energy change in the FCH-state PES:
Δ I ≡ I^vert - I^ad =
E_FCH|_𝐦𝐢𝐧 𝐆𝐒 - E_FCH|_𝐦𝐢𝐧 𝐅𝐂𝐇.
Here Δ I describes the structural relaxation effect in the excited-state PES caused by the core ionization. This parameter approximately (neglecting the curvature change) provides an estimate of the displacement in PESs. From our calculations, it is interesting to find that Δ I for amine N's (0.3–0.5 eV) is generally larger than for imine N's (0.2–0.3 eV). We deduce that displacements in PESs of amine N's are generally larger than those of imine N's.
§.§ Statistics on structural changes
This deduction is supported by Fig. <ref>, where root-mean-squared distances (RMSDs) between the ground and excited state structures are plotted for each nitrogen ionization. It can be seen that ionizations of amine N's lead to relatively larger RMSDs (0.03–0.10 Å) than imine N's (0.03–0.05 Å). This can be understood by noting that an N=C double bond (as in imines) is more "rigid" than two N–C (or N–N) single bonds (as in amines) during the N1s ionization process.
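The RMSD values discussed here follow the protocol of paper I. Generically, such a number is obtained by superimposing the two optimized structures and taking the root-mean-square deviation; a sketch using the Kabsch algorithm is shown below, assuming identical atom ordering in the GS and FCH geometries.

    import numpy as np

    def kabsch_rmsd(coords_gs, coords_fch):
        """RMSD (in the units of the input, e.g. Angstrom) between two conformers,
        given as N x 3 arrays with identical atom ordering, after removing the
        translation and the optimal rotation (Kabsch algorithm)."""
        P = coords_gs - coords_gs.mean(axis=0)
        Q = coords_fch - coords_fch.mean(axis=0)
        H = P.T @ Q                                   # covariance matrix
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # guard against improper rotations
        return np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))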
Figure <ref> compares the optimized structures of the ground and the core-ionized states for three selected systems indole, purine, and adenine. It is worth noting that the left of Fig. <ref> (and also Fig. <ref>) illustrates only one of the Kekulé structures for each molecule. A conjugated π bond system and resonant Kekulé structures average away the difference of those single and double bonds (especially in the six-membered ring) in the geometrical optimizations. We thus simply use N–C (N^*–C) to denote the bond length in any range between the two atoms at min GS (min FCH). For instance, for N2 in adenine (Fig. <ref>), the two N2–C distances at min GS are equal (1.33 Å). The equality is broken by core ionization, and the resulting N2^*-C bond lengths become 1.36 and 1.33 Å, respectively. Similarly in purine, the two almost equal bond lengths (1.33 and 1.32 Å) in the ground state become 1.37 and 1.31 Å respectively in the core-ionized state. The structural changes induced by N1 ionization in adenine and purine are different. In purine, the two N1–C bond lengths (both 1.33 Å) stay almost unchanged after the ionization. While in adenine, the two N1–C lengths (both 1.34 Å) become 1.37 and 1.34 Å, respectively after the ionization. This is because the -NH_2 substitution in adenine is close to N1 and changes its local environment, which leads to different
structural responses of the two molecules to the 1s ionization.
In the five-membered ring, the two N-C distances at each N site seem to better agree with the Kekulé structures, showing an evident one-long/one-short pattern. At N3 (see Fig. <ref>), the two N-C distances are respectively 1.38 and 1.30 Å in purine (1.38 and 1.31 Å in adenine), and are almost unchanged after the N1s ionization. In contrast, evident elongation happens at N4: by 0.07 and 0.08 Å in indole, 0.06 and 0.13 Å in purine, and 0.07 and 0.15 Å in adenine. For the three molecules, all N-H distances are slightly reduced by 0.02 Å from 1.00 to 0.98 Å.
Table <ref> presents more complete data for bond lengths and angles at N^* of all the 17 molecules at min FCH. The N_a^*-H bond lengths stay almost a constant value of 0.98 Å, decreasing by only 0.02 or 0.03 Å from the GS geometry. For the N^*–X distances [X=C (mainly), N (rarely)], amine nitrogens (N_a^*-X, 1.40–1.56 Å) always show longer values than imine nitrogens (N_i^*-X, 1.30–1.43 Å). The change of N^*–X length as compared to the GS geometry is different for amine and imine N's: N_a^*-X is always elongated by 0.04-0.18Å; while N_i^*-X can be either elongated or shortened by 0.00-0.03 Å. Concerning the bond angles ∠C-N^*-X, we found a decrease for all amine N's (∠C-N_a^*-X) and an increase for all imine N's (∠C-N_i^*-X).
§.§ Statistics on ZPE changes (Δε_0)
To estimate the deformation of the excited-state PESs from the ground-state ones as induced by core ionization, we analyzed the Δε_0 values (the zero-point vibrational energies in the core-ionized as referred to the ground states) of all these 33 nonequivalent N sites. As depicted in Fig. <ref>, we found that amine (imine) nitrogens show positive (negative) Δε_0 values, of about 0.00 to +0.05 (-0.1 to -0.01) eV. Our results indicate that N1s ionization leads to a distinct change direction in the curvature of the final-state PES, which becomes steeper (flatter) for amine (imine) N's.
§ DISCUSSION
§.§ Effects of side chain substitutions
§.§.§ -CH_3 and -CHO substitutions on indole
Figure <ref>(a-c) depicts our computed vibrationally-resolved N1s XPS spectra of indole, 3-methylindole, and 3-formylindole compared to experiments.<cit.> Spectra of the three molecules exhibit significant differences. The small (ca. 0.2 eV) red shift in 3-methylindole and the moderate (ca. 0.4 eV) blue shift in 3-formylindole simply reflect the fact that -CH_3 is a weak electron-donating group and -CHO a moderate electron-withdrawing group.
The experimental spectrum of indole<cit.> shows only an asymmetric broad peak at 405.8 eV. Our theoretical spectrum reproduces the general profile well and further identifies three characteristic peaks at 405.6, 405.8, and 405.9 eV, which arise from (0-0, 0-1), (0-1, 0-2), and 0-2 transitions, respectively [Fig. S1(b)]. The experimental spectrum of 3-methylindole<cit.> resolved two small peaks at 405.6 and 405.7 eV, with a separation of 0.1 eV. Our simulation reproduces these fine structures well; they are interpreted as 0-2 and 0-3 transitions, respectively [Fig. S2(b)]. Concerning 3-formylindole, 0-0 and 0-1 transitions contribute a feature at 406.1 eV, and 0-1, 0-2, and 0-3 transitions contribute to the high-energy region at 406.2–406.4 eV [Fig. S3(b)].
Figure <ref>(d-f) illustrates the active vibrational modes (in solid frames) of each molecule identified according to the threshold of F≥0.02. Two active vibrational modes were identified for each molecule [see also Figs. S1(c)-S3(c)]. For indole (panel d), the two modes are an N-H bending mode ν_3 (288.6 cm^-1) and a ring-deformation mode ν_22 (1078.1 cm^-1). Both modes can still be tracked in the other two molecules, but they do not always remain the most active modes. For instance, ν_26 of the -CH_3 substituent (panel e) and ν_5 of the -CHO substituent (panel f) give FCFs of only 0.01 and 0.003, respectively (less than our threshold). Meanwhile, new active modes appear in the two substituted molecules (compared to those in indole), including another type of N-H bending mode ν_5 (309.8 cm^-1) in the -CH_3 substituent (panel e) and a combined ring deformation and C=O bending mode ν_2 (156.1 cm^-1) in the -CHO substituent (panel f), both with FCFs of 0.02. The results show that side chain substitution can effectively change the active modes, providing sensitive isomer-dependent signatures.
Despite the difference in active modes, the local bond lengths/angles (at N^*) of the three molecules in their FCH states are very similar (Table <ref>). The N^*-C lengths are between 1.45–1.47 Å. The angles ∠C-N^*-C fall between 107.3–108.0^∘.
§.§.§ -NH_2 substitution on purine
Figure <ref>(a) displays the simulated vibrationally-resolved XPS spectrum of adenine compared with the experiment.<cit.> The experimental spectrum featured three regions I–III centered at ca. 404.4, 405.7, and 406.7 eV, respectively. Our simulation agrees well with the experimental fine structure. We interpreted region I as arising from the imine nitrogens N1, N2, and N3, and regions II and III from the amine nitrogens N5 (0-1, 0-2 transitions) and N4 (0-3 transition), respectively [Fig. <ref>(b-f)]. In region I, three vibronic structures were reproduced. The lowest fine structure (404.5 eV) comes from the 0-1 and 0-2 contributions of N1 and N2 [Fig. <ref>(b,c)], while the next two (404.6 and 404.7 eV) are the 0-0 and 0-1 transitions of N3, respectively [Fig. <ref>(d)]. According to our threshold (F≥0.04), only N3 contains stick vibronic transitions with large intensities [Fig. <ref>(a)]. Two active modes ν_18 (965.4 cm^-1) and ν_22 (1164.4 cm^-1) are identified, which are both ring deformation modes localized mainly in the five-membered ring [Fig. <ref>(g)].
Figure <ref>(b) shows the simulated spectrum of purine. Similar to adenine, imine (N1–N3) and amine (N4) nitrogens contribute to the two regions I (ca. 404.6 eV) and II (ca. 406.5 eV), respectively. In region I, the three fine structures at 404.5, 404.6, and 404.7 eV come from mixed vibronic contributions by these three atoms [see also Fig. (a)]. In region II, the doublet fine structures with nearly equal intensities (at 406.4 and 406.5 eV) are respectively assigned as 0-2 and 0-3 as well as 0-3 and 0-4 transitions of N4 [Fig. (e)]. For each nitrogen, some 1-3 active modes are identified, which are all low-frequency (330.2–1075.8 cm^-1), in-plane ring deformation modes [Fig. (f)].
Purine and adenine differ by an -NH_2 group. -NH_2 is a strong EDG, which leads to smaller (0.3–0.8 eV) vertical IP values for N1–N4 in adenine than in purine (Table <ref>). Our calculations show that in the FCH state, the bond lengths at each of the same N^* sites are similar (Table <ref>). The N^*-C lengths in each molecule deviate by 0.01-0.03 Å compared to the corresponding GS geometry. The differences in bond angles (∠C-N_a^*-C and ∠C-N_i^*-C) are 0.1–4.3^∘.
§.§ Effects of isomerization
Figure <ref>(a) depicts computed N1s XPS spectra of six indole isomers. Above we have discussed the separation of binding energies for imine and amine N's. Concerning the vibronic profiles, they also distinguish well from each other. For each isomer, the sum of FCFs converges (a threshold of 0.99 was used) at the same n = 7 [Figs. S1(b), S11(b)–S15(b)]. The isomer-dependent signatures come from different modulations of each 0-n transition. For each isomer, a lower-energy 0-0 peak always exists, which is well separated from the broad peak in the higher-energy part. The broad peak is assigned to be mainly 0-1 transitions (3H-indole) or a combination of 0-1 and 0-2 transitions (indole, 1H-isoindole, 2H-indole, 2H-isoindole, and indolizine). Decomposition analyses also show that for each molecule, 0-1 transitions always make the largest contribution among all 0-n transitions. This indicates a relatively small displacement of the PESs as induced by core ionization, owing to the stiffness of the bicyclic molecules. Similar profile shapes were observed in monocyclic molecules.<cit.> Interpretation of major stick transitions of each isomer indicated that only a few (1–3), low-frequency (288.6–1507.4 cm^-1), nearly in-plane, ring-deformation modes are active [Figs. S1(c), S11(c)-S15(c)].
§.§ Effects of mode mixing
The Duschinsky rotation matrix 𝐉 carries information on the strengths of mode mixing. The matrix elements of each indole isomer are visualized in Fig. <ref>(b). Note that the range of each entry J_ij is from -1 to 1. An absolute value of an element (|J_ij|) close to 0 or 1 indicates, respectively, a weak or strong mixing effect for the corresponding mode pair i and j. It is noted that in the literature, absolute<cit.> or squared<cit.> values are also often used (thus giving a range from 0 to 1). The three presentation methods are consistent and will not influence our discussion.
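To make the criterion concrete, mode pairs with non-negligible mixing can be read off from 𝐉 as sketched below; the matrix used here is a random placeholder rather than an actual DynaVib output, and the 0.3 threshold is arbitrary.

    import numpy as np

    def mode_mixing_summary(J, threshold=0.3):
        """List off-diagonal Duschinsky elements |J_ij| above a threshold,
        i.e. mode pairs (i, j) with non-negligible mixing, strongest first."""
        absJ = np.abs(J)
        pairs = [(i, j, absJ[i, j])
                 for i in range(absJ.shape[0])
                 for j in range(absJ.shape[1])
                 if i != j and absJ[i, j] >= threshold]
        return sorted(pairs, key=lambda t: -t[2])

    rng = np.random.default_rng(0)
    J = np.eye(36) + 0.1 * rng.standard_normal((36, 36))   # placeholder, not a DynaVib matrix
    print(mode_mixing_summary(J)[:5])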
As visualized in Fig. <ref>(b), elements with the largest absolute values are always at or near the diagonal. As we know, core ionization can lead to interchanges of the mode order. It is interesting to find that for iv-vi (with amine N's) the corresponding matrix elements are distributed farther off the diagonal than for i-iii (with imine N's). This indicates a stronger mode mixing effect in amine than in imine N's. In the three molecules iv-vi with amine nitrogens, the mode mixing happens between modes No. 25-35 (1178.5-1426.9 cm^-1), corresponding to nearly in-plane, ring-deformation vibrations.
Analyses of the Duschinsky matrices for other molecules are provided in Figs. S18-S20, where the same conclusion for imine and amine nitrogens generally applies.
§.§ Effects of CH↔N replacement
Figure <ref> shows the spectra of five azaindole isomers, each with one amine nitrogen N1 and one imine nitrogen N2. N1 is always at a fixed position, while N2 replaces the -CH group at different sites. The effects of isomerization have been discussed above for indoles (Section <ref>). Here the CH↔N replacement is a special kind of isomerization. Besides this effect, another theme here is to analyze the effects of N1-N2 interactions on the spectra, as discussed below (Section <ref>).
For each isomer, the two peaks from N1 and N2 are clearly separated. Responding to the position change, the N2 peak varies within an energy range of 0.3 eV between 403.8 and 404.1 eV, and its fine structure shows a clear isomer-dependent signature. Each molecule is identified to have 2-3 active vibrational modes. They are all in-plane modes within a frequency range of 246.5–1576.2 cm^-1 [Figs. S6(d)-S10(d)]. For m-azaindole (m=4, 5, 6, 7), all active modes arise from N2. Structurally, benzimidazole is the only molecule that contains both nitrogens in the five-membered pyrrole ring. It is interesting to find that the active modes of benzimidazole are ring deformation vibrations localized only on the five-membered ring [Fig. S6(d)]. For the other isomers, the active modes are more delocalized [Figs. S7(d)-S10(d)].
Besides, the FCH-optimized structure of benzimidazole is clearly distinguished from those of the other four isomers (Table <ref>). The N1^*–C bond length of benzimidazole (1.55 Å) is significantly greater than that of the four m-azaindole isomers (1.45-1.47 Å). The same conclusion applies for N2: the N2^*–C distance in benzimidazole is 1.39 Å, while in the remaining isomers the length is 1.33–1.34 Å. Similarly, we also find that the bond angle ∠C–N_a^*–C (∠C-N_i^*–C) for the amine N1 (imine N2) is 107.6–108.0^∘ (121.4–124.6^∘) for the four m-azaindole isomers, while for benzimidazole the value is much smaller: 103.9^∘ (108.3^∘). These results indicate that the influence of CH↔N replacement within a five-membered ring differs evidently from that within a six-membered ring.
§.§ Effects of N-N interactions
Although N1 is always fixed in each azaindole isomer, it is interesting to find that the energy position of the N1 peak also varies evidently by 0.4 eV (from 405.8 to 406.2 eV), a value comparable to that of N2. The corresponding spectral profile of the N1 peak in each molecule also distinguishes well from each other. The change is attributed to the structural perturbation caused by N2. This reflects the interaction between the two nitrogens, which varies by molecules. The atom-atom interaction within a molecule was theoretically investigated and compared in p-, m-, o-aminophenols with para-, meta-, and ortho-positions of the -NH_2 and -OH groups by core excitations.<cit.> Here the two nitrogens are connected via the delocalized π orbital since alternating single and double bonds create a conjugated π bond system across multiple atoms.
From another point of view, the interaction between the two nitrogens can be read off by comparing the spectra of 5- and 6-azaindole (iii and iv). In these two molecules, N1 and N2 have the largest spatial separation and consequently the weakest interaction (similar to the role of p-aminophenol among all aminophenols<cit.>); as a result, the spectra (both the energies and profiles) are close to each other.
§.§ Limitation of the FCH-DR approach
The other extreme of the nitrogen-nitrogen interaction occurs when the two atoms bond directly to one another (the strongest interaction). When the two N atoms are bonded in the five-membered ring, one obtains another isomer, 1H-indazole. We tried to simulate its spectrum with the same protocol, but the optimized FCH-state geometry deviates substantially from the GS one (a much larger N^*-N distance). As a result, the subsequent Franck-Condon calculations failed to converge. The existence of the N^*-N motif appears to be a difficult case (though it does not always fail) for our DFT optimizations within the FCH approximation.
Notably, the same problem was also encountered in paper I<cit.> for 1,2,3-triazine and 1,2,3,4- and 1,2,3,5-tetrazine as well as in Ref. <cit.> for N_5H (these molecules were abandoned in the final publications due to the failure). The essential problem lies in locating the minimum of the core-ionized isomer (min FCH), while a single-point (vertical) FCH calculation at min GS usually works. This may be a limitation of the FCH-DR method when there is a tendency to significantly elongate the bond length at the ionization center. An approximate solution is to use the linear coupling model (LCM)<cit.> instead of the Duschinsky rotation, since the LCM avoids locating the structure of min FCH. For a more accurate solution within the DR framework, one may resort to high-level electronic structure methods for such difficult cases, such as multiconfigurational methods.<cit.>
§ SUMMARY AND CONCLUSIONS
In summary, we have computed the vibrationally-resolved XPS spectra for a family of 17 indole-based bicyclic molecules at the DFT level to investigate the structure-spectroscopy relation and to extract general vibronic coupling rules for this family. The structural variations cover common side chain substitutions, isomerizations, and CH↔N replacements. Vibronic coupling was considered by the Franck-Condon approach with the inclusion of the Duschinsky rotation effect. Good agreement with available experiments was achieved both in absolute binding energies and in fine structures, which validates the reliability of our analyses. Binding energies, major vibronic transitions, corresponding active vibrational modes, and core-ionization-induced local geometrical changes at N^* were analyzed for each molecule. This work provides reliable and complete spectral data for future experimental and theoretical reference and for machine-learning studies, and the rules summarized here offer insight into the basic physics of vibronic coupling in the XPS process.
In this work, we not only confirmed the clear separation of the binding energies of amine (larger) and imine (smaller) nitrogens, as reported in several earlier studies,<cit.> but, more importantly, found that the same separation rules also hold for vibronic properties. (1) The Δε_0 value (zero-point vibrational energy of the FCH state as compared to the ground state) is positive for all amine N's and negative for all imine N's. This indicates that the change in the curvature of the excited-state PES induced by core ionization depends on the local structure at the N^* center, which becomes steeper (for amine N's) or flatter (for imine N's). (2) Amine nitrogens exhibit stronger mode mixing effects than imine ones. (3) 1s ionization at amine N's always leads to an elongation of N^*-C and a shortening of the N^*-H bond lengths; while for imine N's, the structural changes in N^*-C are much smaller and both elongation and shortening exist. (4) Ionization at amine (imine) nitrogens always leads to a decrease (increase) in the bond angle ∠C-N^*-C. (5) 1s ionization at an amine N leads to a larger global structural change (indicated by the RMSD between min GS and min FCH) than at an imine N.
§ ACKNOWLEDGMENTS
Financial support from the National Natural Science Foundation of China (Grant No. 12274229) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant Nos. KYCX22_0425) is greatly acknowledged.
hergenhahn_vibrational_2004
Hergenhahn U 2004 J. Phys. B: At. Mol. Opt. Phys. 37 R89–R135
ISSN 0953-4075, 1361-6455
mendolicchio_theory_2019
Mendolicchio M, Baiardi A, Fronzoni G, Stener M, Grazioli C, De Simone M and
Barone V 2019 J. Chem. Phys. 151 124105 ISSN 0021-9606,
1089-7690
plekan_investigation_2022
Plekan O, Grazioli C, Coreno M, Di Fraia M, Prince K C, Richter R and Ponzi A
2022 Chem. Phys. 562 111657 ISSN 03010104
plekan_theoretical_2008
Plekan O, Feyer V, Richter R, Coreno M, de Simone M, Prince K, Trofimov A,
Gromov E, Zaytseva I and Schirmer J 2008 Chem. Phys. 347
360–375 ISSN 03010104 and Fig. 11 therein, with the experimental N1s XPS
spectrum of adenine in the gas phase, has been recaptured in our Fig. 5
myrseth_vibrational_2002
Myrseth V, Børve K J, Wiesner K, Bässler M, Svensson S and Sæthre
L J 2002 Phys. Chem. Chem. Phys. 4 5937–5943 ISSN 1463-9076,
1463-9084
fronzoni_vibrationally_2014
Fronzoni G, Baseggio O, Stener M, Hua W, Tian G, Luo Y, Apicella B, Alfé M,
de Simone M, Kivimäki A and Coreno M 2014 J. Chem. Phys. 141 044313 ISSN 0021-9606, 1089-7690
sankari_vibrationally_2003
Sankari R, Ehara M, Nakatsuji H, Senba Y, Hosokawa K, Yoshida H, De Fanis A,
Tamenori Y, Aksela S and Ueda K 2003 Chem. Phys. Lett. 380
647–653 ISSN 00092614
siegbahn1982electron
Siegbahn K 1982 Rev. Mod. Phys. 54 709
siegbahn1967atomic
Siegbahn K, Nordling C, Fahlman A, Nordberg R, Hamrin K, Hedman J, Johansson G,
Bergmark T, Karlsson S, Lindgren I et al. 1967 ESCA: atomic,
molecular and solid state structure studies by means of electron
spectroscopy (Almqvist & Wiksell, Uppsala, Sweden)
siegbahn1974esca
Siegbahn K, Allison D A and Allison J H 1974 ESCA-photoelectron
spectroscopy (CRC Press, Cleveland, Ohio)
Rumble1992NIST
Rumble Jr J R, Bickham D M and Powell C J 1992 Surf. Interface Anal.
19 241–246
wagner_1979_handbook
Wagner C D 1979 Handbook of x-ray photoelectron spectroscopy: a reference
book of standard data for use in x-ray photoelectron spectroscopy
(Perkin-Elmer)
jolly_core-electron_1984
Jolly W, Bomben K and Eyermann C 1984 At. Data Nucl. Data Tables 31 433–493 ISSN 0092640X
xpsdatabase
The International XPS Database, https://xpsdatabase.com/, Accessed on
2023-7-4.
sasj
Http://www.sasj.jp/, Accessed on 2023-7-4.
LaSurface
Http://www.lasurface.com/xps/index.php/, Accessed on 2023-7-4.
wei_vibronic_2022
Wei M, Cheng X, Zhang L, Zhang J R, Wang S Y, Ge G, Tian G and Hua W 2022 Phys. Rev. A 106 022811 ISSN 2469-9926, 2469-9934
greczynski_x-ray_2020
Greczynski G and Hultman L 2020 Prog. Mater. Sci. 107 100591
hua_theoretical_2020
Hua W, Tian G and Luo Y 2020 Phys. Chem. Chem. Phys. 22
20014–20026 ISSN 1463-9076, 1463-9084
cheng_vibrationally-resolved_2022
Cheng X, Wei M, Tian G, Luo Y and Hua W 2022 J. Phys. Chem. A 126
5582–5593 ISSN 1089-5639, 1520-5215
joule2020heterocyclic
Joule J A 2020 Heterocyclic chemistry (CRC Press)
damour_accurate_2021
Damour Y, Véril M, Kossoski F, Caffarel M, Jacquemin D, Scemama A and Loos
P F 2021 J. Chem. Phys. 155 134104 ISSN 0021-9606, 1089-7690
da_silva_retro-3_2008
da Silva G and Bozzelli J W 2008 J. Org. Chem. 73 1343–1353 ISSN
0022-3263, 1520-6904
gutowski_accurate_2006
Gutowski K E, Rogers R D and Dixon D A 2006 J. Phys. Chem. A 110
11890–11897 ISSN 1089-5639, 1520-5215
verevkin_rediscovering_2012
Verevkin S P, Emel’yanenko V N, Notario R, Roux M V, Chickos
J S and Liebman J F 2012 J. Phys. Chem. Lett. 3 3454–3459 ISSN
1948-7185, 1948-7185
kerru_review_2020
Kerru N, Gummidi L, Maddila S, Gangu K K and Jonnalagadda S B 2020 Molecules 25 1909 ISSN 1420-3049
zheng_self-assembly_2021
Zheng Y, Qi X, Chen S, Song S, Zhang Y, Wang K and Zhang Q 2021 ACS Appl.
Mater. Interfaces 13 28390–28397 ISSN 1944-8244, 1944-8252
castro_design_2012
Castro M R, Schellenberg P, Belsley M, Fonseca A C, Fernandes S S and Raposo
M M 2012 Dyes Pigm. 95 392–399 ISSN 01437208
mermer_recent_2021
Mermer A, Keles T and Sirin Y 2021 Bioorg. Chem. 114 105076 ISSN
00452068
bolognesi_pyrimidine_2010
Bolognesi P, O’Keeffe P, Ovcharenko Y, Coreno M, Avaldi L,
Feyer V, Plekan O, Prince K C, Zhang W and Carravetta V 2010 J. Chem.
Phys. 133 034302 ISSN 0021-9606, 1089-7690
sundberg_2012_chemistry
Sundberg R 2012 The chemistry of indoles vol 18 (Elsevier)
gribble_2016_indole
Gribble G W 2016 Indole ring synthesis: From natural products to drug
discovery (John Wiley & Sons)
jia_current_2020
Jia Y, Wen X, Gong Y and Wang X 2020 Eur. J. Med. Chem. 200
112359 ISSN 02235234
dorababu_indole_2020
Dorababu A 2020 RSC Med. Chem. 11 1335–1353 ISSN 2632-8682
han_importance_2020
Han Y, Dong W, Guo Q, Li X and Huang L 2020 Eur. J. Med. Chem. 203 112506 ISSN 02235234
du_theoretical_2022
Du X, Wang S Y, Wei M, Zhang J R, Ge G and Hua W 2022 Phys. Chem. Chem.
Phys. 24 8196–8207 ISSN 1463-9076, 1463-9084
hua_systematic_2010
Hua W, Yamane H, Gao B, Jiang J, Li S, Kato H S, Kawai M, Hatsui T, Luo Y,
Kosugi N and Ågren H 2010 J. Phys. Chem. B 114 7016–7021
ISSN 1520-6106, 1520-5207
zhang_accurate_2019
Zhang J R, Ma Y, Wang S Y, Ding J, Gao B, Kan E and Hua W 2019 Phys. Chem.
Chem. Phys. 21 22819–22830 ISSN 1463-9076, 1463-9084
liljefors_2002_textbook
Liljefors T, Krogsgaard-Larsen P and Madsen U 2002 Textbook of drug design
and discovery (CRC Press)
ebrahimi_new_2013
Ebrahimi H, Hadi J and Al-Ansari H 2013 J. Mol. Struct. 1039
37–45 ISSN 00222860
meanwell_inhibitors_2018
Meanwell N A, Krystal M R, Nowicka-Sans B, Langley D R, Conlon D A, Eastgate
M D, Grasela D M, Timmins P, Wang T and Kadow J F 2018 J. Med. Chem.
61 62–80 ISSN 0022-2623, 1520-4804
lee_azaindolylsulfonamides_2014
Lee H Y, Tsai A C, Chen M C, Shen P J, Cheng Y C, Kuo C C, Pan S L, Liu Y M,
Liu J F, Yeh T K, Wang J C, Chang C Y, Chang J Y and Liou J P 2014 J.
Med. Chem. 57 4009–4022 ISSN 0022-2623, 1520-4804
walker_variolins_2009
Walker S R, Carter E J, Huff B C and Morris J C 2009 Chem. Rev. 109 3080–3098 ISSN 0009-2665, 1520-6890
plekan_experimental_2020
Plekan O, Sa’adeh H, Ciavardini A, Callegari C, Cautero G, Dri
C, Di Fraia M, Prince K C, Richter R, Sergo R, Stebel L, Devetta M,
Faccialà D, Vozzi C, Avaldi L, Bolognesi P, Castrovilli M C, Catone D,
Coreno M, Zuccaro F, Bernes E, Fronzoni G, Toffoli D and Ponzi A 2020 J.
Phys. Chem. A 124 4115–4127 ISSN 1089-5639, 1520-5215 and Fig. 4
therein, with the experimental N1s XPS spectra of indole and 3-formylindole
in the gas phase, have been recaptured in our Fig. 4
zhang_electronic_2009
Zhang W, Carravetta V, Plekan O, Feyer V, Richter R, Coreno M and Prince K C
2009 J. Chem. Phys. 131 035103 ISSN 0021-9606, 1089-7690 and
Fig. 4 therein, with the experimental N1s XPS spectrum of 3-methylindole in
the gas phase, has been recaptured in our Fig. 4
he_2022_specific
He L, Malerz S, Trinter F, Trippel S, Tomaník L, Belina M, Slavíček P,
Winter B and Küpper J 2022 Specific versus non-specific solvent interactions
of a biomolecule in water (Preprint 2205.08217)
wang_inner-shell_2008
Wang F, Zhu Q and Ivanova E P 2008 J. Synchrotron. Rad. 15
624–631 ISSN 0909-0495
schmidt_general_1993
Schmidt M W, Baldridge K K, Boatz J A, Elbert S T, Gordon M S, Jensen J H,
Koseki S, Matsunaga N, Nguyen K A, Su S, Windus T L, Dupuis M and Montgomery
J A 1993 J. Comput. Chem. 14 1347–1363
gordon_advances_2005
Gordon M S and Schmidt M W 2005 Advances in electronic structure theory:
GAMESS a decade later Theory and applications of computational
chemistry (Elsevier) pp 1167–1189
becke_density-functional_1988
Becke A D 1988 Phys. Rev. A 38 3098–3100 ISSN 0556-2791
becke_new_1993
Becke A D 1993 J. Chem. Phys. 98 1372–1377 ISSN 0021-9606,
1089-7690
lee_development_1988
Lee C, Yang W and Parr R G 1988 Phys. Rev. B 37 785–789 ISSN
0163-1829
duschinsky_1937
Duschinsky F 1937 Acta Physicochim. URSS 7 551–566
DynaVib
Tian G, Duan S, Hua W and Luo Y DynaVib, version 1.0
Royal Institute of Technology: Sweden, 2012
dunning_gaussian_1989
Dunning T H 1989 J. Chem. Phys. 90 1007–1023
kendall_electron_1992
Kendall R A, Dunning T H and Harrison R J 1992 J. Chem. Phys. 96
6796–6806
si_indole
See Supplemental Material at [URL will be inserted by publisher] for spectra of
each individual molecule with analyses and visualization of the Duschinsky
matrices.
bagus_consequences_2016
Bagus P S, Sousa C and Illas F 2016 J. Chem. Phys. 145 144303
pueyo_bellafont_validation_2015
Bellafont N P, Illas F and Bagus P S 2015 Phys. Chem. Chem. Phys. 17 4015–4019
pueyo_bellafont_prediction_2015
Bellafont N P, Bagus P S and Illas F 2015 J. Chem. Phys. 142
214102
pueyo_SCF_performance_2016
Pueyo Bellafont N, Viñes F and Illas F 2016 J. Chem. Theory Comput.
12 324–331 ISSN 1549-9618
hebestreit_structures_2019
Hebestreit M L, Schneider M, Lartian H, Betz V, Heinrich M, Lindic M, Choi M Y
and Schmitt M 2019 Phys. Chem. Chem. Phys. 21 14766–14774 ISSN
1463-9076, 1463-9084
henrichs_excited_2021
Henrichs C, Reineke M, Hebestreit M L and Schmitt M 2021 J. Mol.
Struct. 1223 129241 ISSN 00222860
peng_vibration_2010
Peng Q, Niu Y, Deng C and Shuai Z 2010 Chem. Phys. 370 215–222
ISSN 03010104
biczysko_first_2009
Biczysko M, Bloino J and Barone V 2009 Chem. Phys. Lett. 471
143–147 ISSN 00092614
hua_study_2016
Hua W, Bennett K, Zhang Y, Luo Y and Mukamel S 2016 Chem. Sci. 7
5922
macak_luo_lcm_2000
Macak P, Luo Y and Ågren H 2000 Chem. Phys. Lett. 330
447–456
zhangyu_nonlinear_2016
Zhang Y, Hua W, Bennett K and Mukamel S 2016 Top. Curr. Chem. 368
273–345
|
http://arxiv.org/abs/2307.02286v1
|
20230705134025
|
Subleading Effects in Soft-Gluon Emission at One-Loop in Massless QCD
|
[
"Michał Czakon",
"Felix Eschment",
"Tom Schellenberger"
] |
hep-ph
|
[
"hep-ph",
"hep-th"
] |
Subleading Effects in Soft-Gluon Emission at One-Loop in Massless QCD
Michał Czakon, Felix Eschment, Tom Schellenberger
August 1, 2023
======================================================================
§ INTRODUCTION
Soft radiation is an important topic in the context of gauge theories. In the abelian case of QED, soft photons are physical and complicate the definition of the scattering operator. In the non-abelian case, in particular in QCD, gauge bosons are not physical in the confining phase, and the presence of a mass gap protects from soft singularities. However, in the context of factorisation, which allows to obtain cross sections as a convolution of a non-perturbative contribution and a contribution that involves massless partons, the problem appears again. In either case, abelian and non-abelian, it is necessary to have a complete description of the leading singular soft asymptotics in order to obtain meaningful theoretical predictions for scattering and decay processes. This problem has been studied since the early days of Quantum Field Theory and is nowadays textbook material. While the subleading behaviour of scattering amplitudes in the soft limit is not necessary to obtain finite cross sections, it is nevertheless of interest due to the ever increasing precision of measurements at lepton and hadron colliders. First attempts at a general description in QED date back to the seminal works of Low <cit.>, Burnett and Kroll <cit.>. Later, it was understood by Del Duca <cit.> that the description cannot be complete beyond tree-level without taking into account collinear virtual states.
Recently, there has been a surge of interest in next-to-leading power (subleading) soft phenomena within resummation formalisms based on Soft-Collinear Effective Theory <cit.> and diagrammatic approaches to QCD <cit.>. The main goal of the studies was the inclusion of subleading effects in the description of simple processes with a minimal number of partons, for example the Drell-Yan process. Even in this case, there were surprises and some assumptions on the structure of the soft expansion turned out to be wrong. For instance, the analysis of Ref. <cit.> that introduced collinear radiation into the picture, was shown to be incomplete.
A different motivation for studying subleading soft effects in QED with massive fermions guided Refs. <cit.>. Here, the idea was to use soft approximations of squared matrix elements to obtain numerically stable predictions for lepton scattering with account of soft photons and light leptons.
Our goal in the present publication is to understand the structure of the next-to-leading-power soft expansion at the one-loop level in QCD. On the one hand, the general expression that we derive allows one to put resummation formalisms for multi-parton processes on a firm footing. On the other hand, this expression can be used to improve the numerical stability of matrix elements in software implementations.
In our analysis, we stress not only the importance of the Ward identity for the soft gluon – as did the pioneers – but also of gauge-invariance of the occurring amplitudes. This leads to astonishingly simple expressions for the building blocks of the expansion: soft and jet operators. The cancellations that we observe remove contributions that are expected to be present based on pure power-counting arguments, for example transverse-momentum derivatives of amplitudes in the collinear limit, see Refs. <cit.>. Furthermore, we put special emphasis on a deep understanding of the collinear asymptotics. As a side effect, we obtain a novel formula for the next-to-leading power expansion of tree-level amplitudes in the collinear limit.
The publication is organised as follows. In the next section we define the main concepts and recall the colour/spin-space formalism that proves to be very useful in the present context. We define spin-space operators that encapsulate all spin effects at next-to-leading power. We also take great care to define the kinematics of the soft limit to the level of detail required in a numerical application. In Section <ref>, we reproduce the Low-Burnett-Kroll result for QCD, and summarise its features that have been understood in previous studies. We use, nevertheless, our original notation that will prove its power at the one-loop level. In Section <ref>, we state our main result, present a complete proof, and describe numerical tests. Finally, in Section <ref>, we state our result for the next-to-leading-power collinear asymptotics. An outlook section closes the text and discusses some obvious further directions of research.
§ DEFINITIONS
§.§ Processes and amplitudes
Consider the process:
0 → a_1(p_1 + δ_1, σ_1, c_1) + … + a_n(p_n + δ_n, σ_n, c_n) + g(q, σ_n+1, c_n+1) , a_i ∈{q, q̅, g} .
The momenta p_i + δ_i of the hard partons are defined as outgoing, and may thus have negative energy components if the respective parton is actually incoming in the physical process under consideration. The soft gluon with momentum q is outgoing, q^0 > 0. The momenta are assumed on-shell:
p_i^2 = (p_i + δ_i)^2 = m_i^2 , q^2 = 0 ,
where m_i is the mass of parton i. They are required to satisfy the momentum conservation constraints:
∑_i p_i = 0 , ∑_i δ_i + q = 0 .
Notice that Eqs. (<ref>) and (<ref>) are more restrictive than necessary for a physical process. The additional constraints are used to define the soft limit. Contrary to the hard momenta, p_i, every component of the momentum shifts, δ_i, and every component of the soft-gluon momentum is assumed to be of the order of the soft-expansion parameter λ:
p_i^μ = 𝒪(1) = 𝒪(λ^0) ≫𝒪(λ) , δ_i^μ = 𝒪(λ) , q^μ = 𝒪(λ) .
Finally, p_i and q are assumed well separated in angular distance. It follows from Eqs. (<ref>) and (<ref>) that p_i is orthogonal to δ_i in first approximation:
p_i ·δ_i = 𝒪(λ^2) .
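The power counting above is easy to verify numerically. The sketch below (illustrative values only) keeps a hard massless momentum on-shell while shifting its spatial components by 𝒪(λ) and confirms that p_i ·δ_i indeed scales like λ^2.

    import numpy as np

    def mdot(a, b):
        """Minkowski product with metric (+,-,-,-)."""
        return a[0] * b[0] - np.dot(a[1:], b[1:])

    def onshell(p3, m=0.0):
        """Four-vector with spatial part p3 and mass m."""
        return np.array([np.sqrt(m * m + np.dot(p3, p3)), *p3])

    rng = np.random.default_rng(1)
    p = onshell(np.array([3.0, 4.0, 5.0]))       # hard, massless momentum of order one
    for lam in (1e-1, 1e-2, 1e-3):
        delta3 = lam * rng.standard_normal(3)    # spatial shift of order lambda
        delta = onshell(p[1:] + delta3) - p      # keep p + delta exactly on-shell
        print(lam, mdot(p, delta))               # scales like lambda^2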
The polarisation and colour state of each parton is denoted by σ_i and c_i respectively. The polarisation of massive partons may be defined as rest-frame spin, whereas that of massless partons corresponds to helicity.
The results of this publication are equally valid in the case of quarks of different flavours as well as in the presence of colour-neutral particles, as long as flavour and colour summations have been appropriately adapted.
A scattering amplitude, M_fi, is defined through the decomposition of the scattering matrix S_fi:
S_fi = δ_fi - i (2π)^4 δ^(4)(p_f - p_i) M_fi ,
where i and f stand for initial and final state, and p_i and p_f for their respective momenta. Eq. (<ref>) unambiguously defines the sign of M_fi, which is necessary in the context of our study. For instance, Eqs. (<ref>) and (<ref>) contain products of amplitudes.
The scattering amplitude, M_g({p_i+δ_i},q,{σ_i},{c_i},g^B_s), for the process (<ref>) is given by an expansion in the bare strong coupling constant g^B_s:
M_g ≡( g_s^B )^n-1[ M_g^(0) + μ^-2ϵα^B_s/(4π)^1-ϵ M_g^(1) + 𝒪( ( α_s^B )^2 ) ] , α^B_s ≡(g^B_s)^2/4π ,
where ϵ is the parameter of dimensional regularisation with space-time dimension d ≡ 4 - 2ϵ. Although we work with bare quantities, we have introduced the parameter μ with unit mass dimension in order to retain the four-dimensional mass dimension of the amplitudes. In what follows, we allow for massive quarks at tree level. Hence, M^(0)_g may depend on m_i ≠ 0. On the other hand, the soft expansion of the one-loop amplitude M^(1)_g is only provided in the massless case. The definition of M_g is completed once we assume that the external states are four-dimensional, which corresponds to the `t Hooft-Veltman scheme within the family of dimensional-regularisation schemes.
Finally, the expansion of the reduced scattering amplitude, M({p_i},{σ_i},{c_i},g^B_s), for the process obtained from (<ref>) by removing the soft gluon and setting the momentum shifts to zero, is given by:
M ≡( g_s^B )^n-2[ M^(0) + μ^-2ϵα^B_s/(4π)^1-ϵ M^(1) + 𝒪( ( α_s^B )^2 ) ] .
§.§ Colour/spin-space formalism
The soft expansion of Sections <ref> and <ref> requires manipulation of the colour state of the hard partons already at 1/λ. Furthermore, subleading effects at order λ^0 require the manipulation of the polarisation state of the hard partons. The formulae are simplified by the use of the colour/spin-space formalism introduced in Ref. <cit.>. This formalism relies on abstract basis vectors:
|c_1,…,c_m;σ_1,…,σ_m⟩≡|c_1,…,c_m⟩⊗|σ_1,…,σ_m⟩ ,
with either m=n+1 or m=n in the present case. Accordingly, we define[The μ dependence for l > 0 is implicit.]:
|M_g^(l)({p_i+δ_i},q)⟩≡∑_{σ_i}∑_{c_i} M_g^(l)({p_i+δ_i},q,{σ_i} {c_i}) |c_1,…,c_n+1;σ_1,…,σ_n+1⟩ ,
and similarly for the reduced scattering amplitude:
|M^(l)({p_i})⟩≡∑_{σ_i}∑_{c_i} M^(l)({p_i},{σ_i},{c_i}) |c_1,…,c_n;σ_1,…,σ_n⟩ .
The soft expansion at one-loop order, Eq. (<ref>), involves flavour off-diagonal contributions that are identified by a replacement of a pair of partons, i and j, in the reduced scattering amplitude w.r.t. to the original process (<ref>). The replacement does not affect the momenta of the partons. Since we do not introduce a flavour/colour/spin-space in the present publication, the respective reduced amplitude will be denoted by:
|M^(l)({p_i}) |a_i →ã_i
a_j →ã_j⟩ .
In order to select amplitudes with a definite polarisation and colour of parton i, we define the following surjection operator:
𝐏_i(σ,c) |…,c_i-1,c_i,c_i+1,…;…,σ_i-1,σ_i,σ_i+1,…⟩≡δ_σσ_iδ_cc_i|…,c_i-1,c_i+1,…;…,σ_i-1,σ_i+1,…⟩ ,
and its specialisation:
𝐏_g(σ,c) ≡𝐏_n+1(σ,c) .
Furthermore, we define an operator that exchanges the quantum numbers of i and j:
𝐄_i,j|…,c_i,…,c_j,…;…,σ_i,…,σ_j,…⟩≡|…,c_j,…,c_i,…;…,σ_j,…,σ_i,…⟩ .
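For concreteness, the abstract basis vectors and the operators 𝐏_i and 𝐄_i,j can be modelled as index operations on a multi-dimensional array. The sketch below is purely illustrative (toy dimensions, random entries, 0-based parton labels) and is not part of the formalism itself.

    import numpy as np

    # Toy 3-parton "amplitude" in colour/spin space: one complex number per basis
    # vector |c1,c2,c3; s1,s2,s3>.  Dimensions (3 colours, 2 helicities) and entries
    # are placeholders; partons are labelled 0, 1, 2 here.
    rng = np.random.default_rng(3)
    shape = (3, 3, 3, 2, 2, 2)
    amp = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

    def project(amp, i, colour, helicity, n=3):
        """P_i(sigma, c): keep the component with the given quantum numbers of
        parton i and drop that slot (surjection onto the reduced space)."""
        return np.take(np.take(amp, colour, axis=i), helicity, axis=(n - 1) + i)

    def exchange(amp, i, j, n=3):
        """E_{i,j}: swap the colour and spin labels of partons i and j."""
        return np.swapaxes(np.swapaxes(amp, i, j), n + i, n + j)

    reduced = project(amp, i=2, colour=0, helicity=1)   # shape (3, 3, 2, 2)
    swapped = exchange(amp, 0, 1)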
§.§ Colour operators
The leading term of the soft expansion is expressed in terms of colour-space operators 𝐓_i^c:
𝐓_i^c |…,c_i',…⟩≡∑_c_i
T^c_a_i,c_ic_i'|…,c_i,…⟩ ,
T^c_g,ab = if^acb , T^c_q,ab = T^c_ab , T^c_q̅,ab = -T^c_ba .
The structure constants f^abc are defined by [ 𝐓^a_i, 𝐓^b_j ] = i f^abc𝐓^c_i δ_ij, while the fundamental-representation generators, T^c_ab, are normalised with Tr(T^a T^b) = T_F δ^ab.
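A minimal numerical realisation of these colour matrices may help fix conventions. The sketch below builds the fundamental generators from the Gell-Mann matrices (assuming the common normalisation T_F = 1/2), extracts the structure constants, and checks the commutation relation as well as the adjoint matrices (T^c_g)_ab = i f^acb.

    import numpy as np

    # Fundamental SU(3) generators T^a = lambda^a / 2 (Gell-Mann matrices); the
    # normalisation T_F = 1/2 is an assumption of this sketch.
    l = np.zeros((8, 3, 3), dtype=complex)
    l[0][0, 1] = l[0][1, 0] = 1
    l[1][0, 1], l[1][1, 0] = -1j, 1j
    l[2][0, 0], l[2][1, 1] = 1, -1
    l[3][0, 2] = l[3][2, 0] = 1
    l[4][0, 2], l[4][2, 0] = -1j, 1j
    l[5][1, 2] = l[5][2, 1] = 1
    l[6][1, 2], l[6][2, 1] = -1j, 1j
    l[7][0, 0] = l[7][1, 1] = 1 / np.sqrt(3)
    l[7][2, 2] = -2 / np.sqrt(3)
    T = l / 2

    # Structure constants from f^{abc} = -2i Tr([T^a, T^b] T^c), valid for Tr(T^a T^b) = delta^{ab}/2.
    f = -2j * (np.einsum("aij,bjk,cki->abc", T, T, T)
               - np.einsum("bij,ajk,cki->abc", T, T, T))
    print(np.allclose(f.imag, 0))                                             # f^{abc} is real
    f = f.real
    print(np.allclose(np.einsum("aij,bji->ab", T, T), 0.5 * np.identity(8)))  # Tr(T^a T^b) = T_F delta^{ab}
    # Commutation relation [T^a, T^b] = i f^{abc} T^c
    lhs = np.einsum("aij,bjk->abik", T, T) - np.einsum("bij,ajk->abik", T, T)
    print(np.allclose(lhs, 1j * np.einsum("abc,cik->abik", f, T)))
    # Adjoint-representation matrices (T^c_g)_{ab} = i f^{acb}, as in the definition above.
    T_adj = 1j * np.transpose(f, (1, 0, 2))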
§.§ Spin operators
The subleading term of the soft expansion is expressed in terms of spin-space operators 𝐊^μν_i:
𝐊_i^μν|…,σ_i',…⟩≡∑_σ_i K^μν_a_i, σ_iσ_i'(p_i) |…,σ_i,…⟩ ,
with matrices K^μν_a, σσ' that are anti-symmetric in μ,ν and hermitian in σ,σ':
K^μν_a, σσ' = - K^νμ_a, σσ' , K^μν *_a, σσ' = K^μν_a, σ'σ .
For p^0 > 0, i.e. for outgoing quarks, anti-quarks and gluons, these matrices are uniquely defined by[These relations are a consequence of the Lorentz transformation properties of free fields, see for example Section 5.1 of Ref. <cit.>.]:
∑_σ' K^μν_q,σσ'(p) u̅(p,σ') ≡ J^μν(p) u̅(p,σ) - 1/2u̅(p,σ) σ^μν , σ^μν≡i/2[ γ^μ, γ^ν] ,
∑_σ' K^μν_q̅,σσ'(p) v(p,σ') ≡( J^μν(p) + 1/2σ^μν) v(p,σ) ,
∑_σ' K^μν_g,σσ'(p) ϵ^*_α(p,σ') ≡( J^μν(p) g_αβ + i ( δ^μ_αδ^ν_β - δ^ν_αδ^μ_β) ) ϵ^β *(p,σ) + terms proportional to p_α ,
where J^μν(p) is the generator of Lorentz transformations for scalar functions of p:
J^μν(p) ≡ i ( p^μ∂_p^ν - p^ν∂_p^μ) , ∂_p^μ≡p_μ .
Later, we will mostly use the shorthand notations:
J_i^μν≡ J^μν(p_i) , ∂_i^μ≡∂_p_i^μ .
Definitions (<ref>) may be rewritten in terms of bi-spinors and polarisation vectors of incoming partons:
∑_σ' K^μν ∗_q̅,σσ'(p) u(p,σ') = - ( J^μν(p) + 1/2σ^μν) u(p,σ) ,
∑_σ' K^μν ∗_q,σσ'(p) v̅(p,σ') = - ( J^μν(p) v̅(p,σ) - 1/2v̅(p,σ) σ^μν) ,
∑_σ' K^μν ∗_g,σσ'(p) ϵ_α(p,σ') = - ( J^μν(p) g_αβ + i ( δ^μ_αδ^ν_β - δ^ν_αδ^μ_β) ) ϵ^β(p,σ) + terms proportional to p_α .
Due to our process definition (<ref>), negative-energy momenta imply incoming partons. Hence, we define:
K^μν_a,σσ'(p) ≡ - K^μν ∗_a̅,σσ'(-p) = - K^μν_a̅,σ'σ(-p) for p^0 < 0 .
Since v(p,σ) = Cu̅^T(p,σ), with C the charge conjugation matrix, there is:
K^μν_q̅,σσ'(p) = K^μν_q,σσ'(p) .
This relation is consistent with the fact that spin and helicity have the same definition for particles and anti-particles.
For a massive-quark bi-spinor, with spin defined in the rest-frame along the third axis, transformed with a pure boost to reach momentum p from p_0^μ≡ (m,0), there is:
K^μν_q,σσ' = ϵ^μνα i ( p + p_0 )_α/( p + p_0 )^0 τ^i_σσ'/2 ,
where τ^i_σσ', i = 1,2,3 are the three Pauli matrices.
For massless partons, helicity conservation implies that K^μν_a,σσ' is proportional to δ_σσ'. Assuming that bi-spinors and polarisation vectors for the two helicities are related by a momentum-independent anti-linear transformation, there is:
K^μν_a,σσ' = σ δ_σσ' K^μν .
Furthermore, it follows from the definitions (<ref>) that p_μ K^μν_a,σσ' = 0 for p^2 = 0. Hence:
K^μν = ϵ^μναβ p_α r_β , ϵ_0123≡ +1 ,
for some r that we assume to be lightlike[If r^2 ≠ 0, then the replacement r → r' ≡ r - r^2 p/2 r· p does not change Eq. (<ref>), while r'^2 = 0.]. In particular, if massless bi-spinors are defined along the third axis and then rotated in the direction of p≡ E (sin(θ) cos(φ), sin(θ) sin(φ), cos(θ) ) = E R_z(φ) R_y(θ) ẑ with the composition of rotations R_z(φ) R_y(θ) R_z(-φ), then:
K^μν(p) = ϵ^μναβ p_αp̅_0β/p ·p̅_0 , p̅_0^μ≡ (E,0,0,-E) .
This result is also valid for polarisation vectors defined in the spinor-helicity formalism using the same bi-spinors:
ϵ^*_μ(p,± 1) ≡±⟨ p ±|γ_μ| k ±⟩/√(2)⟨ k ∓| p ±⟩≡±u̅(p,±1/2) γ_μ u(k,±1/2)/√(2) u̅(k,∓1/2) u(p,±1/2) ,
with an arbitrary lightlike reference vector k. If either the massless bi-spinors or the polarisation vectors include an additional phase factor, e.g. ϵ^'∗(p,+1) ≡exp(i ϕ(p)) ϵ^*(p,+1), then K^μν is modified as follows:
K^' μν = K^μν + i J^μνϕ(p) .
With the spinor-helicity-formalism polaristion vectors, there is:
ϵ_μ(p,+1) ϵ^*_ν(p,+1) iK^μν = 1 .
However, because of (<ref>), this result is valid in general. Contractions with K^μν can be efficiently evaluated with the help of:
iK_μν = ∑_σsgn(σ) ϵ^*_μ(p,σ) ϵ_ν(p,σ) ,
with the polarisation vectors (<ref>) assuming k = r and r as in Eq. (<ref>).
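These relations lend themselves to a direct numerical check. The sketch below is our own illustrative construction: it uses a standard right-handed transverse basis instead of the exact rotation convention quoted above (the difference is a phase, which drops out of both identities), builds K^μν with r = p̄_0, and verifies the two contraction identities.

    import itertools
    import numpy as np

    # Upper-index Levi-Civita tensor; with epsilon_{0123} = +1 one has epsilon^{0123} = -1.
    eps = np.zeros((4, 4, 4, 4))
    for perm in itertools.permutations(range(4)):
        inv = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
        eps[perm] = -(-1) ** inv

    metric = np.diag([1.0, -1.0, -1.0, -1.0])
    lower = lambda v: metric @ v

    def pol_vectors(E, theta, phi):
        """Massless momentum p and helicity polarisation vectors, built from a
        right-handed transverse basis (phases differ from the text's rotation
        convention by an overall phase only)."""
        st, ct, sp, cp = np.sin(theta), np.cos(theta), np.sin(phi), np.cos(phi)
        p = E * np.array([1.0, st * cp, st * sp, ct])
        e_t = np.array([0.0, ct * cp, ct * sp, -st])
        e_f = np.array([0.0, -sp, cp, 0.0])
        eps_plus = -(e_t + 1j * e_f) / np.sqrt(2.0)
        return p, {+1: eps_plus, -1: np.conj(eps_plus)}   # eps(p,-1) = eps(p,+1)^*

    p, pol = pol_vectors(2.3, 0.7, 1.1)
    pbar0 = np.array([p[0], -p[1], -p[2], -p[3]])          # reflected momentum

    # K^{mu nu} = eps^{mu nu alpha beta} p_alpha pbar0_beta / (p . pbar0)
    K = np.einsum("mnab,a,b->mn", eps, lower(p), lower(pbar0)) / (lower(p) @ pbar0)

    # eps_mu(p,+1) eps*_nu(p,+1) i K^{mu nu} = 1
    print(1j * np.einsum("m,n,mn->", lower(pol[+1]), lower(np.conj(pol[+1])), K))

    # i K_{mu nu} = sum_sigma sgn(sigma) eps*_mu(p,sigma) eps_nu(p,sigma)
    rhs = sum(s * np.outer(lower(np.conj(pol[s])), lower(pol[s])) for s in (+1, -1))
    print(np.allclose(1j * metric @ K @ metric, rhs))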
Besides the spin operator 𝐊_i^μν, our results involve a simpler spin-dependent operator that gives the sign of the product of the helicities of parton i and gluon n+1:
Σ_g,i|…,σ_i,…,σ⟩≡sgn(σσ_i) |…,σ_i,…,σ⟩ .
§.§ Splitting operators
The soft expansion at one-loop order requires the collinear expansion of tree-level amplitudes. The latter expansion is expressed in terms of splitting operators that act non-trivially in both colour and spin space. The splitting operators are defined as follows:
c_1,c_2;σ_1,σ_2𝐒𝐩𝐥𝐢𝐭^(0)_q g ← q(k_1,k_2,k)c;σ = -1/2 k_1 · k_2 T^c_2_c_1 c u̅(k_1,σ_1) ϵ^*(k_2,σ_2) u(k,σ) ,
c_1,c_2;σ_1,σ_2𝐒𝐩𝐥𝐢𝐭^(0)_q̅ g ← q̅(k_1,k_2,k)c;σ = + 1/2 k_1 · k_2 T^c_2_c c_1 v̅(k,σ) ϵ^*(k_2,σ_2) v(k_1,σ_1) ,
c_1,c_2;σ_1,σ_2𝐒𝐩𝐥𝐢𝐭^(0)_qq̅ ← g(k_1,k_2,k)c;σ = -1/2 k_1 · k_2 T^c_c_1 c_2 u̅(k_1,σ_1) ϵ(k,σ) v(k_2,σ_2) ,
c_1,c_2;σ_1,σ_2𝐒𝐩𝐥𝐢𝐭^(0)_gg ← g(k_1,k_2,k)c;σ = - 1/2 k_1 · k_2 i f^c_1cc_2
×( + (k_1+k) ·ϵ^*(k_2,σ_2) ϵ^*(k_1,σ_1) ·ϵ(k,σ)
- (k_2+k) ·ϵ^*(k_1,σ_1) ϵ^*(k_2,σ_2) ·ϵ(k,σ)
+ (k_2-k_1) ·ϵ(k,σ) ϵ^*(k_1,σ_1) ·ϵ^*(k_2,σ_2) ) .
In order to simplify the notation, for example in (<ref>) and (<ref>), we also define the following operator:
𝐒𝐩𝐥𝐢𝐭^(0)_i,n+1 ← i(p_i,p_n+1,p_i') |…,c_i',…;…,σ_i',…⟩ =
∑_σ_ic_i∑_σ_n+1 c_n+1c_i,c_n+1;σ_i,σ_n+1𝐒𝐩𝐥𝐢𝐭^(0)_a_i a_n+1 ← a_i'(p_i,p_n+1,p_i')c_i';σ_i'
×|…,c_i,…,c_n+1;…,σ_i,…,σ_n+1⟩ ,
where a_i' is parton i corresponding to the ket on the left-hand side, and a_i, a_j are partons i,j corresponding to the ket on the right-hand side of (<ref>). In general a_i' ≠ a_i, as for example in (<ref>).
§ LOW-BURNETT-KROLL THEOREM FOR TREE-LEVEL QCD
The leading and subleading term of the soft expansion, i.e. expansion in λ, of the tree-level amplitude |M_g^(0)({p_i+δ_i},q,σ,c)⟩ are given by the QCD generalisation <cit.> of the Low-Burnett-Kroll (LBK) theorem
<cit.> originally proven for QED[The sign in Eq. (<ref>) is a consequence of our convention for the strong coupling constant: we assume that the quark-gluon interaction term in the Lagrangian is +g^B q̅A^a T^a q.]:
|M_g^(0)({p_i+δ_i},q)⟩ = 𝐒^(0)({p_i},{δ_i},q) |M^(0)({p_i})⟩ + 𝒪(λ) ,
𝐏_g(σ,c) 𝐒^(0)({p_i},{δ_i},q) = - ∑_i 𝐓_i^c ⊗𝐒^(0)_i(p_i,δ_i,q,σ) ,
𝐒^(0)_i = p_i ·ϵ^*/p_i · q + 1/p_i · q[ ( ϵ^* - p_i ·ϵ^*/p_i · q q ) ·δ_i + p_i ·ϵ^* ∑_jδ_j ·∂_j + 1/2 F_μν( J^μν_i - 𝐊^μν_i ) ] ,
with:
⟨ qσ c| A^a_μ(0) |0⟩ = δ^caϵ^*_μ(q,σ) ,
⟨ qσ c| F^a_μν(0) |0⟩ = δ^ca i ( q_μϵ^*_ν(q,σ) - q_νϵ^*_μ(q,σ) ) ≡δ^ca F_μν(q,σ) ,
where A^a_μ(x) and F^a_μν(x) are the gluon field and the respective field-strength tensor, while |qσ c⟩ is a single-gluon state with momentum q, polarisation σ and colour c.
§.§ Derivation and constraints
Most of the terms in Eq. (<ref>) are obtained by extending the eikonal approximation to one order higher in λ. Indeed, consider the diagram of Fig. <ref>. The leading term as well as the first term in the square bracket of Eq. (<ref>) are due to the expansion of the eikonal approximation taken with the original momentum, p_i + δ_i, of the hard-parton, i.e. outgoing quark in Fig. <ref>:
(p_i + δ_i) ·ϵ^*/(p_i + δ_i) · q = p_i ·ϵ^*/p_i · q + 1/p_i · q( ϵ^* - p_i ·ϵ^*/p_i · q q ) ·δ_i + 𝒪(λ) .
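The accuracy of this expansion is straightforward to test numerically. In the sketch below (placeholder momenta; a generic constant vector stands in for ϵ^*, which is sufficient for this purely kinematical check), the residual with respect to the exact ratio drops from 𝒪(1) to 𝒪(λ) once the subleading term is included.

    import numpy as np

    def mdot(a, b):
        """Minkowski product with metric (+,-,-,-)."""
        return a[0] * b[0] - np.dot(a[1:], b[1:])

    rng = np.random.default_rng(2)
    p = np.array([5.0, 3.0, 0.0, 4.0])            # hard massless momentum, order one
    e = np.array([0.0, 0.3, 0.7, -0.2])           # stand-in for the polarisation vector

    for lam in (1e-1, 1e-2, 1e-3):
        q = lam * np.array([1.0, 0.3, -0.5, np.sqrt(0.66)])   # soft, lightlike
        delta = lam * rng.standard_normal(4)                   # generic shift of order lambda
        exact = mdot(p + delta, e) / mdot(p + delta, q)
        lp = mdot(p, e) / mdot(p, q)
        nlp = lp + (mdot(e, delta) - lp * mdot(q, delta)) / mdot(p, q)
        print(lam, exact - lp, exact - nlp)       # residuals of order one and order lambda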
The second term in the square bracket in Eq. (<ref>) is due to the expansion of the reduced scattering amplitude represented by the shaded circle in Fig. <ref> in δ_j, j = 1,…,n. The additional expansion of this amplitude in q is taken into account by the first term on the right-hand side of:
1/2 p_i · q F_μν J_i^μν = p_i ·ϵ^*/p_i · q q ·∂_i - ϵ^* ·∂_i .
The classic LBK argument that generates the second term on the right-hand side of the above equation from the first term on the right-hand side, consists in requiring the soft expansion to fulfil the (QED) Ward identity, i.e. transversality of the amplitude with respect to the soft-gluon momentum. This accounts for emissions from the internal off-shell lines, i.e. diagrams that do not have the structure of Fig. <ref>.
While spin effects can be obtained by explicit calculation of the expression for Fig. <ref> and similarly for anti-quarks and gluons, there is a simpler argument that allows to understand the result. From Fig. <ref>, we conclude that the external wave function, i.e. bi-spinor for quarks and anti-quarks or polarisation vector for gluons, does not depend on q. Hence, the differential operator q ·∂_i in Eq. (<ref>) should not act on it. We thus have to subtract the action of J^μν on the external wave function. The result should, however, still contain a gauge-invariant amplitude with n hard partons. Thus, the subtracted term can be at most a linear combination of amplitudes with different polarisations of the hard parton a_i, which leads to the replacement of J_i^μν by J_i^μν - 𝐊_i^μν in Eq. (<ref>). The latter difference does not contain any derivatives when acting on external wave functions according to Eqs. (<ref>). This argument has the virtue of applying at higher orders as well. In consequence, the one-loop expression for the soft operator in Eq. (<ref>) also only contains the combination J_i^μν - 𝐊_i^μν.
The soft expansion (<ref>) is strongly constrained by Lorentz covariance and gauge invariance (Ward identity) as has been discussed in great detail previously in Ref. <cit.>, albeit only in the case of pure gluon amplitudes. Here, we would like to stress once more that the process-dependent input on the r.h.s. of Eq. (<ref>), i.e. the amplitude |M^(0)({p_i})⟩, is gauge invariant on its own. This is not a trivial fact, since it does not naively apply in high-energy factorization for example, see Ref. <cit.> and references therein. In the present case, the issue of gauge invariance is entangled with the issue of defining momentum derivatives in Eq. (<ref>). Indeed, the amplitude |M^(0)({p_i})⟩ must be on-shell, and it thus only depends on the spatial components of the momentum vectors. The momentum derivatives in Eq. (<ref>), on the other hand, also involve the energy component. Fortunately, Eq. (<ref>), hence also Eq. (<ref>), is consistent with on-shellness since:
( ∑_j δ_j ·∂_j ) p_i^2 = 2 δ_i · p_i = 0 , J_i^μν p_i^2 = 0 ,
where we have used Eq. (<ref>) and neglected terms of higher order in λ. An additional difficulty arises from the fact that Eq. (<ref>) involves derivatives in all of the momenta p_i, whereas the amplitude |M^(0)({p_i})⟩ is only a function of n-1 of them due to momentum conservation. Since extension of |M^(0)({p_i})⟩ away from momentum conservation is not unique, Eq. (<ref>) must be consistent with momentum conservation. This is indeed the case, albeit colour-conservation is required for the proof:
[ 𝐏_g(σ,c) 𝐒^(0)({p_i},{δ_i},q) ]_momentum derivatives |f(P)⟩ = ( ϵ^* ·∂_P ) ∑_i 𝐓_i^c |f(P)⟩ = 0 , P ≡∑_i p_i ,
where |f(P)⟩ is invariant with respect to global gauge transformations and depends on the sum of the momenta only. The importance of this result lies in the fact that the result for the soft expansion in Eq. (<ref>) remains the same even if we eliminate one of the p_i momenta in |M^(0)({p_i})⟩ by momentum conservation. In fact, one can eliminate different p_i's in different diagrams that contribute to |M^(0)({p_i})⟩ without affecting the final result.
§.§ Squared amplitudes
While the focus of this publication lies on amplitudes, we would like to point out the simplifications that occur in the case of squared amplitudes summed over spin and colour. The first simplification is the lack of spin effects already noted in Ref. <cit.>. Indeed, squaring Eq. (<ref>) and keeping only terms up to 𝒪(1/λ) leaves the following contribution containing spin operators:
-i ∑_ijp_i^μ q^ν/p_i · q⟨ M^(0)|𝐓_i ·𝐓_j ⊗( 𝐊_i,μν - 𝐊^†_i,μν) |M^(0)⟩ = 0 .
This contribution vanishes because of the hermiticity, (<ref>), of the spin operators. The second simplification is the possibility <cit.> to include subleading soft effects through momentum shifts as follows:
⟨ M^(0)_g({k_l},q) | M^(0)_g({k_l},q) ⟩ = - ∑_i ≠ j( k_i · k_j/(k_i · q)(k_j · q) - m_i^2 /2 ( k_i · q )^2 - m_j^2/2 ( k_j · q )^2)
×⟨ M^(0)({k_l + δ_ilΔ_i + δ_jlΔ_j}) |𝐓_i ·𝐓_j| M^(0)({k_l + δ_ilΔ_i + δ_jlΔ_j}) ⟩ + 𝒪(λ^0) ,
with:
k_i ≡ p_i + δ_i ,
Δ_i ≡1/N_ij[ ( 1 - m_i^2 ( p_j · q )/( p_j · p_i ) ( p_i · q )) q + p_j · q/p_j · p_i p_i - p_i · q/p_i · p_j p_j ] ,
Δ_j ≡1/N_ij[ ( 1 - m_j^2 ( p_i · q )/( p_i · p_j ) ( p_j · q )) q - p_j · q/p_i · p_j p_i + p_i · q/p_i · p_j p_j ] ,
N_ij ≡ 2 - m_i^2 ( p_j · q )/( p_j · p_i ) ( p_i · q ) - m_j^2 ( p_i · q )/( p_i · p_j ) ( p_j · q ) .
Notice that the momenta in the reduced scattering amplitude in Eq. (<ref>) satisfy momentum conservation and are on-shell up to λ:
∑_l( k_l + δ_ilΔ_i + δ_jlΔ_j) = 0 , ( k_l + δ_ilΔ_i + δ_jlΔ_j )^2 = m_l^2 + 𝒪(λ^2) .
In fact, it is possible to add corrections of 𝒪(λ^2) to these momenta to make them exactly on-shell.
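These statements can be checked explicitly. The following sketch (massless toy kinematics of our own choosing, in the all-outgoing convention) implements the shifts Δ_i, Δ_j and confirms that the shifted momenta conserve momentum exactly, while their off-shellness is of 𝒪(λ^2).

    import numpy as np

    def mdot(a, b):
        """Minkowski product with metric (+,-,-,-)."""
        return a[0] * b[0] - np.dot(a[1:], b[1:])

    def boost(v, P):
        """Boost v from the rest frame of P to the frame in which P is given."""
        M = np.sqrt(mdot(P, P))
        b, g = P[1:] / P[0], P[0] / M
        bv = np.dot(b, v[1:])
        return np.array([g * (v[0] + bv),
                         *(v[1:] + b * (bv * (g - 1.0) / np.dot(b, b) + g * v[0]))])

    def two_body(P, n_hat):
        """Two massless momenta, back to back along n_hat in the rest frame of P."""
        k1 = boost(0.5 * np.sqrt(mdot(P, P)) * np.array([1.0, *n_hat]), P)
        return k1, P - k1

    def lbk_shifts(pi, pj, q, mi2=0.0, mj2=0.0):
        """Momentum shifts Delta_i, Delta_j of the equations above."""
        piq, pjq, pij = mdot(pi, q), mdot(pj, q), mdot(pi, pj)
        N = 2.0 - mi2 * pjq / (pij * piq) - mj2 * piq / (pij * pjq)
        Di = ((1.0 - mi2 * pjq / (pij * piq)) * q + (pjq / pij) * pi - (piq / pij) * pj) / N
        Dj = ((1.0 - mj2 * piq / (pij * pjq)) * q - (pjq / pij) * pi + (piq / pij) * pj) / N
        return Di, Dj

    # Toy event: two incoming (negative-energy) and two outgoing massless partons
    # plus a soft gluon q; all numbers are placeholders.
    lam = 1e-3
    q = lam * np.array([1.0, 0.2, 0.3, np.sqrt(0.87)])
    ka, kb = -np.array([3.0, 0.0, 0.0, 3.0]), -np.array([3.0, 0.0, 0.0, -3.0])
    k1, k2 = two_body(-(ka + kb) - q, np.array([0.2, -0.4, np.sqrt(0.80)]))
    print(ka + kb + k1 + k2 + q)                  # the full event conserves momentum

    Di, Dj = lbk_shifts(k1, k2, q)                # shift the two outgoing momenta
    shifted = [ka, kb, k1 + Di, k2 + Dj]
    print(sum(shifted))                           # Delta_i + Delta_j = q, so this is zero
    print(mdot(shifted[2], shifted[2]), mdot(shifted[3], shifted[3]))   # off-shellness ~ lambda^2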
§ SOFT EXPANSION OF MASSLESS ONE-LOOP QCD AMPLITUDES
§.§ Theorem
The main result of this publication is the following next-to-leading-power-accurate soft expansion of a one-loop massless-QCD amplitude:
| M_g^(1)({p_i+δ_i},q) ⟩ = 𝐒^(0)({p_i},{δ_i},q) |M^(1)({p_i})⟩
+ 𝐒^(1)({p_i},{δ_i},q) |M^(0)({p_i})⟩ + ∫_0^1 x∑_i 𝐉_i^(1)(x,p_i,q) |H^(0)_g,i(x,{p_i},q)⟩
+ ∑_i ≠ j∑_ã_i ≠ a_i
ã_j ≠ a_j𝐒̃^(1)_a_i a_j ← ã_i ã_j, ij(p_i,p_j,q) |M^(0)({p_i}) |a_i → ã_i
a_j → ã_j⟩ + ∫_0^1 x∑_i
a_i = g𝐉̃_i^(1)(x,p_i,q) |H^(0)_q̅,i(x,{p_i},q)⟩
+ 𝒪(λ) .
The soft operator 𝐒^(1)({p_i},{δ_i},q) is an extension of the one-loop soft current, and is given by the expansion through λ^0 of the r.h.s. of:
𝐏_g(σ,c) 𝐒^(1)({p_i},{δ_i},q) + 𝒪(λ) = 2 r_Soft/ϵ^2 ∑_i ≠ j i f^abc𝐓_i^a 𝐓_j^b ⊗(- μ^2 s^(δ)_ij/s^(δ)_iq s^(δ)_jq)^ϵ[ S^(0)_i(p_i,δ_i,q,σ)
+ ϵ/1-2ϵ1/p_i · p_j( p_i^μ p_j^ν - p_j^μ p_i^ν/p_i · q + p_j^μ p_j^ν/p_j · q) F_μρ(q,σ) ( J_i - 𝐊_i )^νρ] ,
with:
s^(δ)_ij≡ 2 (p_i + δ_i) · (p_j + δ_j) + i0^+ , s^(δ)_iq≡ 2 (p_i + δ_i) · q + i0^+ , s^(δ)_jq≡ 2 (p_j + δ_j) · q + i0^+ ,
r_Soft≡Γ^3(1 - ϵ) Γ^2(1 + ϵ)/Γ(1 - 2 ϵ) = 1 + 𝒪(ϵ) .
For convenience, we have not expanded the factor containing s^(δ)_ij, s^(δ)_iq and s^(δ)_jq. A strict expansion depends on:
s_ij≡ 2 p_i · p_j + i0^+ , s_iq≡ 2 p_i · q + i0^+ , s_jq≡ 2 p_j · q + i0^+ ,
and on the scalar products of δ_i and δ_j with p_i, p_j and q. Finally, we notice that contractions of 𝐊_i^μν with other vectors can be conveniently evaluated with the help of Eq. (<ref>).
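As a quick cross-check of the overall normalisation, the ε-expansion of r_Soft can be generated with a computer-algebra system. A minimal sketch (our own, using sympy; the printed forms are indicative):

```python
import sympy as sp

eps = sp.symbols('epsilon')
r_soft = sp.gamma(1 - eps)**3 * sp.gamma(1 + eps)**2 / sp.gamma(1 - 2*eps)

# expansion around eps = 0;
# expected: 1 - EulerGamma*eps + (EulerGamma**2/2 + pi**2/12)*eps**2 + O(eps**3)
print(sp.series(r_soft, eps, 0, 3))

# the EulerGamma terms are what the e^{eps*gamma_E} prefactor used later removes;
# expected: 1 + pi**2/12*eps**2 + O(eps**3)
print(sp.series(sp.exp(eps*sp.EulerGamma) * r_soft, eps, 0, 3))
```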
The flavour-off-diagonal soft operator is given by:
𝐒̃^(1)_a_i a_j ← ã_i ã_j, ij(p_i,p_j,q) |…,c_i',…,c_j',…;…,σ_i',…,σ_j',…⟩
= - r_Soft/ϵ(1-2ϵ)(- μ^2 s_ij/s_iq s_jq)^ϵ∑_σ c∑_σ_ic_i∑_σ_jc_j∑_σ_i”c_i”∑_σ_j”c_j”
T^c_c_j”c_i” v̅(p_j,σ_j”) ϵ^*(q,p_i,σ) u(p_i,σ_i”) for a_i = q or ã_i = q̅
T^c_c_i”c_j” v̅(p_i,σ_i”) ϵ^*(q,p_i,σ) u(p_j,σ_j”) for a_i = q̅ or ã_i = q
×c_i,c_j”;σ_i,σ_j”𝐒𝐩𝐥𝐢𝐭^(0)_a_i ã̃_j ← ã_i(p_i,p_j,p_i)c_i';σ_i'c_j,c_i”;σ_j,σ_i”𝐒𝐩𝐥𝐢𝐭^(0)_a_j ã̃_i ← ã_j(p_j,p_i,p_j)c_j';σ_j'
×|…,c_i,…,c_j,…,c;…,σ_i,…,σ_j,…,σ⟩ ,
where:
ϵ^*_μ(q,p_i,σ) ≡ϵ^*_μ(q,σ) - p_i ·ϵ^*(q,σ)/p_i · q q_μ = i F_μν(q,σ) p_i^ν/p_i · q , ϵ^*(q,p_i,σ) · q = ϵ^*(q,p_i,σ) · p_i = 0 .
The partons ã̃_i and ã̃_j are uniquely determined by flavour conservation in the splitting processes a_i ã̃_j ← ã_i and a_j ã̃_i ← ã_j. The contribution corresponds to the emission of a soft quark-anti-quark pair, which then produces the soft gluon as depicted in Fig. <ref>. Finally, we notice that due to chirality and angular-momentum conservation, there is:
sgn(σ_i) = sgn(σ_i') = sgn(σ_i”) = - sgn(σ_j”) = -sgn(σ_j') = - sgn(σ_j) .
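The vector ϵ^*(q,p_i,σ) is simply the soft-gluon polarisation vector with reference momentum p_i, and its transversality properties are straightforward to verify numerically. A small sketch of ours (the explicit helicity-basis construction and the phase convention are our own choices):

```python
import numpy as np

def mdot(a, b):
    return a[0]*b[0] - np.dot(a[1:], b[1:])

def lightlike(E, theta, phi):
    return E*np.array([1.0, np.sin(theta)*np.cos(phi),
                       np.sin(theta)*np.sin(phi), np.cos(theta)])

q   = lightlike(0.7, 1.2, 0.3)    # soft gluon
p_i = lightlike(40.0, 2.0, 2.2)   # hard reference parton

# a transverse polarisation vector for q (temporal-gauge-like construction)
nq = q[1:]/q[0]
e1 = np.cross(nq, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
e2 = np.cross(nq, e1)
eps_q = np.concatenate(([0.0], (e1 + 1j*e2)/np.sqrt(2.0)))   # satisfies eps_q . q = 0

# gauge-invariant combination with reference momentum p_i
eps_hat = eps_q - mdot(p_i, eps_q)/mdot(p_i, q) * q

print(abs(mdot(eps_hat, q)), abs(mdot(eps_hat, p_i)))   # both vanish up to rounding
```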
The jet operator 𝐉_i^(1)(x,p_i,q) is given by:
𝐏_g(σ,c) 𝐉_i^(1)(x,p_i,q) = Γ(1+ϵ)/1-ϵ( - μ^2/s_iq)^ϵ( x(1-x) )^-ϵ∑_σ'c'ϵ^*_μ(q,p_i,σ) ϵ_ν(p_i,σ') 𝐏_g(σ',c')
×[ ( 𝐓^c_i 𝐓^c'_i + 1/x i f^cdc'𝐓^d_i ) ⊗( (x - 2) g^μν + ( 1 + 2(a_i) ) x i 𝐊_i^μν) ]
= Γ(1+ϵ)/1-ϵ( - μ^2/s_iq)^ϵ( x(1-x) )^-ϵϵ^*(q,p_i,σ) ·ϵ(p_i,-σ) 𝐏_g(σ',c')
×∑_c'[ ( 𝐓^c_i 𝐓^c'_i + 1/x i f^cdc'𝐓^d_i ) ⊗( - 2 + x ( 1 + Σ_g,i) ) ] ,
where dim(a_i) is the mass dimension of the wave function of parton i, dim(q) = dim(q̅) = 1/2 and dim(g) = 0. The second equality follows from Eqs. (<ref>) and (<ref>):
ϵ^*_μ(q,p_i,σ) ϵ_ν(p_i,σ') iK^μν_a_i,σ_iσ_i'(p_i) = - σδ_-σσ' σ_i δ_σ_iσ_i' ϵ^*(q,p_i,σ) ·ϵ(p_i,-σ) ,
because ϵ^*(q,p_i,σ) has helicity σ as a polarisation vector for q and helicity -σ as a polarisation vector for p_i. This can be proven in the rest-frame of q+p_i, where a clockwise rotation around q is equivalent to an anti-clockwise rotation around p_i.
The jet operator of a gluon, a_i = g, is not symmetric w.r.t. the gluons i and n+1. On the other hand, it is given by the same expression as that of the (anti-)quark up to the factor depending on dim(a_i). In fact, because of Eq. (<ref>), the spin-dependent parts of the (anti-)quark and gluon jet operators are numerically identical. This is not a coincidence, but rather a consequence of a hidden supersymmetry. Indeed, if the quark field transformed in the adjoint representation of the gauge group, then it could belong to the same superfield as the gluon, and the diagrams that enter the calculation of the jet operator for a quark and for a gluon would be related by supersymmetry. The missing symmetry of the gluon jet operator is, however, restored in the convolution with the symmetric collinear-gluon amplitude (<ref>).
The flavour-off-diagonal jet operator 𝐉̃_i^(1)(x,p_i,q) is given by:
𝐉̃_i^(1)(x,p_i,q) |…,c_i',…,c';…,σ_i',…,σ'⟩ =
Γ(1+ϵ)/1-ϵ( - μ^2/s_iq)^ϵ( x(1-x) )^-ϵ∑_cc_i( T^c_q T^c_i_q + x i f^cdc_i T^d_q )_c'c_i'∑_σσ_iϵ^*_μ(q,p_i,σ) ϵ^*_ν(p_i,σ_i)
×( ( 1 - 2 x ) g^μν 1 + 2 iK_q^μν(p_i) )_-σ'σ_i'|…,c_i,…,c;…,σ_i,…,σ⟩
= Γ(1+ϵ)/1-ϵ( - μ^2/s_iq)^ϵ( x(1-x) )^-ϵ∑_cc_i( T^c_q T^c_i_q + x i f^cdc_i T^d_q )_c'c_i'δ_-σ'σ_i'∑_σσ_iδ_σσ_iϵ^*(q,p_i,σ) ·ϵ^*(p_i,σ_i)
×( - 2 x + 1 + sgn(σ_iσ') ) |…,c_i,…,c;…,σ_i,…,σ⟩ .
The operator transforms a state with a_i = q, a_n+1 = q̅ into a state with a_i = a_n+1 = g. The sign of the r.h.s. of Eq. (<ref>) is a consequence of our convention:
v(p,σ) = -u(p,-σ) ,
see Eq. (<ref>). We point out that there is a crossing-like relation between 𝐉_i and 𝐉̃_i which becomes apparent by comparing the r.h.s. of (<ref>) with x 𝐉_i(1/x,p_i,q) at vanishing ϵ.
The collinear-gluon amplitude |H^(0)_g,i(x,{p_i},q)⟩ is defined as follows for a_i ∈{q,q̅}:
𝐏_g(σ,c) |H^(0)_g,i(x,{p_i},q)⟩≡
(1-x)^-(a_i)𝐏_g(σ,c) |Δ M^(0)_g(x,{p_i},q)⟩ - 1/xq ·ϵ^*(p_i,σ)/q · p_i𝐓_i^c |M^(0)({p_i})⟩ ,
and as follows for a_i = g:
𝐏_i(σ_i,c_i) 𝐏_n+1(σ_n+1,c_n+1) |H^(0)_g,i(x,{p_i},q)⟩≡
(1-x)^-(a_i)𝐏_i(σ_i,c_i) 𝐏_n+1(σ_n+1,c_n+1) |Δ M^(0)_g(x,{p_i},q)⟩
- 1/xq ·ϵ^*(p_i,σ_n+1)/q · p_i𝐏_i(σ_i,c_i) 𝐓_i^c_n+1|M^(0)({p_i})⟩
- 1/1-xq ·ϵ^*(p_i,σ_i)/q · p_i𝐏_i(σ_n+1,c_n+1) 𝐓_i^c_i|M^(0)({p_i})⟩ ,
where:
|Δ M^(0)_g,i(x,{p_i},q)⟩≡lim_l_⊥→ 0[ |M_g^(0)({k_i}_i=1^n,k_g)⟩ - 𝐒𝐩𝐥𝐢𝐭^(0)_i,n+1 ← i(k_i,k_g,p_i) |M^(0)({p_i})⟩] ,
is the subleading term of the expansion of the tree-level soft-gluon emission amplitude in the limit of the soft gluon collinear to parton i as specified by the following configuration:
k_g ≡ x p_i + l_⊥ - l_⊥^2/2xq/p_i · q , with l_⊥· p_i = l_⊥· q = 0 ,
k_i ≡ (1-x) p_i - l_⊥ - l_⊥^2/2(1-x)q/p_i · q , and k_j ≡ p_j + 𝒪(l_⊥^2) , j ≠ i .
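This parametrisation keeps k_g and k_i exactly on-shell for any transverse l_⊥, while the total momentum is recovered up to 𝒪(l_⊥^2); a short numerical confirmation (our own sketch with illustrative momenta):

```python
import numpy as np

def mdot(a, b):
    return a[0]*b[0] - np.dot(a[1:], b[1:])

p_i = 30.0*np.array([1.0, 0.0, 0.0,  1.0])   # collinear direction, p_i^2 = 0
q   = 25.0*np.array([1.0, 0.0, 0.0, -1.0])   # light-like reference vector, q^2 = 0
l_T = 0.05*np.array([0.0, 1.0, 0.5, 0.0])    # transverse: l_T.p_i = l_T.q = 0
x   = 0.3

k_g = x*p_i + l_T - mdot(l_T, l_T)/(2.0*x)*q/mdot(p_i, q)
k_i = (1.0 - x)*p_i - l_T - mdot(l_T, l_T)/(2.0*(1.0 - x))*q/mdot(p_i, q)

print(mdot(k_g, k_g), mdot(k_i, k_i))   # both exactly zero (up to rounding)
print(k_g + k_i - p_i)                  # deviation from p_i is O(l_T^2) only
```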
For a_i = g, we further require that the gluon polarisation vector in the amplitude for the subtraction term and hence also in the splitting operator in (<ref>) be defined with reference vector q yielding the helicity sum:
∑_σϵ_μ(p_i,σ) ϵ^*_ν(p_i,σ) = - g_μν + (p_i μ q_ν + p_i ν q_μ)/(p_i · q) .
Without this requirement, the collinear-gluon amplitude depends on the additional reference vector. Notice that the subtraction in (<ref>) removes not only the leading collinear-singular asymptotics, but also part of the regular l_⊥^0 term. The additional term in Eq. (<ref>) w.r.t. (<ref>) is necessary in order to retain symmetry w.r.t. the exchange of the gluons i and n+1.
The collinear-quark amplitude |H^(0)_q̅,i(x,{p_i},q)⟩ is given by:
|H^(0)_q̅,i(x,{p_i},q)⟩≡
( x (1-x) )^-1/2lim_l_⊥→ 0[ |M_q̅^(0)({k_i}_i=1^n,k_g) |_a_i → q⟩ - 𝐒𝐩𝐥𝐢𝐭^(0)_i,n+1 ← i(k_i,k_g,p_i) |M^(0)({p_i})⟩] ,
where ⟨c_1,…,c;σ_1,…,σ|M_q̅^(0)({k_i}_i=1^n,k_g)⟩ |_a_i → q is the amplitude for the process:
0 → a_1(k_1, σ_1, c_1) + … + q(k_i, σ_i, c_i) + … + a_n(k_n, σ_n, c_n) + q̅(k_g, σ_n+1, c_n+1) .
If there is more than one massless quark flavour, then the last term in Eq. (<ref>) includes summation over flavours.
The collinear convolutions, i.e. integrals over x, in Eq. (<ref>) are evaluated explicitly in Section <ref>.
§.§ Collinear amplitudes
Although Eq. (<ref>) involves convolutions of jet operators with collinear amplitudes, the x-integrals can be performed analytically, which yields an expression in terms of tree-level amplitudes independent of x. In order to derive the relevant formulae, we first list the properties of the collinear amplitudes.
§.§.§ Gauge invariance and Ward identity
By construction, |Δ M^(0)_g,i(x,{p_i},q)⟩ defined in Eq. (<ref>) is gauge invariant, since it only involves gauge invariant amplitudes. However, it does not satisfy the naive Ward identity w.r.t. the gluon with momentum x p_i. If we denote by s the scalar polarisation, i.e. ϵ^*(p,σ = s) = p, then:
lim_l_⊥→ 0𝐏_g(σ = s, c) [ |M_g^(0)({k_i}_i=1^n,k_g)⟩ - 𝐒𝐩𝐥𝐢𝐭^(0)_i,n+1 ← i(k_i,k_g,p_i) |M^(0)({p_i})⟩] =
(1-x)^(a_i)𝐓_i^c |M^(0)({p_i})⟩ .
The result is entirely due to the second term in the square bracket. It follows that the collinear-gluon amplitudes defined in Eqs. (<ref>) and (<ref>) satisfy the Ward identity:
𝐏_g(σ = s, c) |H^(0)_g,i(x,{p_i},q)⟩ = 0 .
§.§.§ Evaluation for arbitrary x
The limit in the definition (<ref>) can be obtained directly from Feynman diagrams as follows:
𝐏_i(σ_i,c_i) 𝐏_g(σ,c) |Δ M^(0)_g,i(x,{p_i},q)⟩ =
[ 𝐏_i(σ_i,c_i) 𝐏_g(σ,c) |M_g^(0)({p_1,…,(1-x)p_i,…,p_n},xp_i)⟩]_non-singular diagrams
- δ_σ_i,-s_i σ∑_c_i' T^c_a_i,c_ic_i'[
u̅((1-x)p_i,σ_i) ϵ^*(p_i,σ)q/2 p_i · qu̅_i if a_i = q
qϵ^*(p_i,σ)v((1-x)p_i,σ_i)/2 p_i · qv_i if a_i = q̅
(2x-1) q/p_i · q·ϵ^*_i if a_i = g]
𝐏_i(σ_i,c_i') |M^(0)({p_i})⟩ ,
where s_i = 1/2 if either a_i = q or a_i = q̅, and s_i = 1 if a_i = g. The derivatives ∂_ψ_i, ψ_i ∈{u̅_i,v_i,ϵ^*_i}, remove the wave function ψ_i of parton i in the amplitude. The collinear-quark amplitude is obtained similarly:
𝐏_i(σ_i,c_i) 𝐏_g(σ,c) |H^(0)_q̅,i(x,{p_i},q)⟩ =
( x (1-x) )^-1/2 [ 𝐏_i(σ_i,c_i) 𝐏_g(σ,c) |M_q̅^(0)({p_1,…,(1-x)p_i,…,p_n},xp_i)⟩ |_a_i → q]_non-singular diagrams
- δ_σ_i,-σ∑_c_i' T^c_i'_c_ic 2q/p_i · q·ϵ^*_i 𝐏_i(σ_i,c_i') |M^(0)({p_i})⟩ .
§.§.§ Small-x expansion
𝐏_g(σ,c) |H^(0)_g,i(x,{p_i},q)⟩ = - ∑_j ≠ i𝐓_j^c ⊗[ ( 1/x + (a_i) ) ( p_j ·ϵ_i^*/p_j · p_i - q ·ϵ_i^*/q · p_i)
+ F_i μν/2 p_j · p_i( - i ( p_j^μ∂_i^ν - p_j^ν∂_i^μ) + J_j^μν - 𝐊_j^μν) + i q_μϵ_i ν^*/q · p_i 𝐊_i^μν] |M^(0)({ p_i })⟩ + 𝒪(x) ,
where:
ϵ_i^* ≡ϵ^*(p_i,σ) , F_i^μν≡ i ( p_i^μϵ_i^ν * - p_i^νϵ_i^μ *) .
The above result can be obtained similarly to Eq. (<ref>) by extending the eikonal approximation of Eq. (<ref>) for soft-gluon emission from partons j ≠ i with δ_k = -δ_ki x p_i and q = x p_i. Subsequently requiring the Ward identity to be satisfied introduces the term:
- ∑_j ≠ i𝐓_j^c ϵ^*_i · (∂_i - ∂_j) .
Spin effects for partons j ≠ i are restored as discussed in Section <ref>. Finally, contributions due to soft-gluon emission from parton i are given explicitly in Eqs. (<ref>) and (<ref>), while spin effects can be determined from Eq. (<ref>).
§.§.§ Dependence on x
It follows from the definitions Eqs. (<ref>), (<ref>) together with Eq. (<ref>) evaluated in Feynman gauge that the collinear-gluon amplitudes are not only rational in x but can be reduced by partial fractioning to the form:
|H^(0)_g,i(x,{p_i},q)⟩ = ( 1/x + dim(a_i) ) |S^(0)_g,i({p_i},q)⟩ + |C^(0)_g,i({p_i},q)⟩ + x/(1-x) |S̅^(0)_g,i({p_i},q)⟩
+ ∑_I ( 1/(x_I - x) - 1/x_I ) |R^(0)_g,i,I({p_i})⟩ + x |L^(0)_g,i({p_i},q)⟩ ,
where the sum in the second line is taken over subsets:
I ⊂{ 1,…,n }∖{i} , 2 ≤ |I| < n-2 ,
with:
x_I ≡ - P_I^2 + i0^+/2 p_i · P_I , P_I ≡∑_j ∈ I p_j .
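Since the x-dependence is fixed completely by this decomposition, the coefficient amplitudes can be determined from a small number of evaluations of the collinear-gluon amplitude by a linear fit once the x_I are known. The sketch below (our own; a scalar stand-in replaces the amplitude, whereas in practice every coefficient is a vector in colour ⊗ helicity space, and the pole positions are purely illustrative) demonstrates such a closure test:

```python
import numpy as np

x_poles = [-0.7, 2.3]   # illustrative pole positions x_I of internal propagators
dim_ai  = 0.0           # dim(a_i) = 0 for a gluon

def basis(x):           # functions multiplying |S>, |C>, |Sbar>, |R_I>, |L>
    row = [1.0/x + dim_ai, 1.0, x/(1.0 - x)]
    row += [1.0/(xI - x) - 1.0/xI for xI in x_poles]
    row += [x]
    return row

# closure test: generate "samples of H(x)" from known random coefficients ...
rng    = np.random.default_rng(1)
coeffs = rng.normal(size=len(basis(0.5)))
xs     = np.linspace(0.05, 0.95, 12)
H_vals = np.array([np.dot(basis(x), coeffs) for x in xs])

# ... and recover them by a least-squares fit
A = np.array([basis(x) for x in xs])
fit, *_ = np.linalg.lstsq(A, H_vals, rcond=None)
print(np.max(np.abs(fit - coeffs)))   # the x-dependence is reconstructed to high accuracy
```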
The soft-pole and constant contributions, |*⟩S^(0)_g,i({p_i},q) and |*⟩C^(0)_g,i({p_i},q), follow from Eq. (<ref>):
𝐏_g(σ,c) |S^(0)_g,i({p_i},q)⟩ = - ∑_j ≠ i𝐓_j^c ( p_j/p_j · p_i - q/q · p_i) ·ϵ^*(p_i,σ) |M^(0)({ p_i })⟩ ,
𝐏_g(σ,c) |C^(0)_g,i({p_i},q)⟩ =
- ∑_j ≠ i𝐓_j^c ⊗( p_iμϵ_ν^*(p_i,σ)/p_j · p_i( p_j^μ∂_i^ν - p_j^ν∂_i^μ + iJ_j^μν - i𝐊_j^μν) + q_μϵ_ν^*(p_i,σ))/q · p_i i𝐊_i^μν) |M^(0)({ p_i })⟩ .
The residue contributions, |R^(0)_g,i,I({p_i})⟩, correspond to poles of internal propagators that carry momentum P_I + x p_i in the first term on the r.h.s. of Eq. (<ref>), as illustrated in Fig. <ref> (if massive colour-neutral particles, e.g. electroweak gauge bosons, were included in the theory, then the value of x_I would have to be modified to include the mass of the intermediate particle):
c_1,…,c_n+1;σ_1,…,σ_n+1R^(0)_g,i,I({p_i}) =
( 1 - x_I )^-dim(a_i) 1/(2p_i · P_I)∑_σ c M^(0)_I({p_i},{σ_i},{c_i},σ,c) M^(0)_I({p_i},{σ_i},{c_i},σ,c) .
M^(0)_I({p_i},{σ_i},{c_i},σ,c) and M^(0)_I({p_i},{σ_i},{c_i},σ,c) are the tree-level amplitudes for the respective processes:
0 →∑_j ∈ I a_j(p_j,σ_j,c_j) + g(x_I p_i,σ_n+1,c_n+1) + b(- P_I - x_I p_i,σ,c) and
0 →∑_j ∉I
j ≠ i a_j(p_j,σ_j,c_j) + a_i((1-x_I) p_i,σ_i,c_i) + b̅(P_I + x_I p_i,-σ,c) ,
where parton b is determined by flavour conservation, while b̅ is its anti-particle. If the flavour constraint cannot be met, then the contribution for the given I vanishes by definition.
The anti-soft-pole contribution, |S̅^(0)_g,i({p_i},q)⟩, is given by:
|S̅^(0)_g,i({p_i},q)⟩ = 𝐄_i,n+1∑_j ≠ i𝐒𝐩𝐥𝐢𝐭^(0)_j,n+1 ← j(p_j,p_i,p_j) |M^(0)({p_i}) |a_i → g
a_j → ã_j⟩ for a_i ∈{q,q̅} ,
|S^(0)_g,i({p_i},q)⟩ for a_i = g ,
where the splitting operator corresponds to the transition a_ja_i ←ã_j. The result for a_i ∈{q,q̅} is given by (<ref>) in the special case |I| = n-2 where:
lim_x_I → 1…,c_i',c_j';…,σ_i',σ_j'M^(0)_I_j({p_i}) = …,c_i',…,c_j',…;…,σ_i',…,σ_j',…M^(0)({p_i}) |a_i → g
a_j → ã_j ,
lim_x_I → 1( 1 - x_I )^1-(a_i)/( - P_I_j - x_I p_i )^2c_j,c_i,c_j';σ_j,σ_i,σ_j'M^(0)_I_j({p_i}) =
lim_x_I → 1( 1 - x_I )^1-(a_i)c_j,c_i;σ_j,σ_i𝐒𝐩𝐥𝐢𝐭^(0)_a_j a_i ← ã_j(p_j,(1-x_I) p_i,p_j)c_j',σ_j' =
c_j,c_i;σ_j,σ_i𝐒𝐩𝐥𝐢𝐭^(0)_a_j a_i ← ã_j(p_j,p_i,p_j)c_j',σ_j' ,
with:
I_j ≡{ 1,…,n }∖{i,j} , P_I_j = -p_i-p_j , ã_j ≡ b .
In principle, the result for a_i = g can be obtained with the above method as well. However, the second equality in (<ref>) does not apply for a_j = ã_j = g:
lim_x_I → 1( 1 - x_I ) c_j,c_i;σ_j,σ_i𝐒𝐩𝐥𝐢𝐭^(0)_gg ← g(p_j,(1-x_I) p_i,p_j)c_j',σ_j'≠
c_j,c_i;σ_j,σ_i𝐒𝐩𝐥𝐢𝐭^(0)_gg ← g(p_j,p_i,p_j)c_j',σ_j' .
Instead, the three splitting operators (<ref>), (<ref>) and (<ref>) yield eikonal factors. Moreover, in order to obtain the complete anti-soft pole contribution, it is still necessary to include the contribution of the last term in Eq. (<ref>). These difficulties may be overcome by using the symmetry of the collinear-gluon amplitude w.r.t. the exchange of the gluons i and n+1, which straightforwardly yields (<ref>).
Finally, the linear contribution, |L^(0)_g,i({p_i},q)⟩, vanishes for a_i ∈{ q, q̅}, while for a_i = g it is again determined by the symmetry of the collinear-gluon amplitude w.r.t. the exchange of the gluons i and n+1:
|L^(0)_g,i({p_i},q)⟩ = |S̅^(0)_g,i({p_i},q)⟩ - |S^(0)_g,i({p_i},q)⟩ + |C̅^(0)_g,i({p_i},q)⟩ - |C^(0)_g,i({p_i},q)⟩
+ 1/2∑_I ( 1/x_I + 1/1 - x_I) ( |R^(0)_g,i,I({p_i})⟩ - |R̅^(0)_g,i,I({p_i})⟩) ,
where:
|C̅^(0)_g,i({p_i},q)⟩ = 𝐄_i,n+1 |C^(0)_g,i({p_i},q)⟩ , |R̅^(0)_g,i,I({p_i},q)⟩ = 𝐄_i,n+1 |R^(0)_g,i,I({p_i},q)⟩ .
The x-dependence of the collinear-quark amplitude is given by:
|H^(0)_q̅,i(x,{p_i},q)⟩ = 1/x |S^(0)_q̅,i({p_i})⟩ + |C^(0)_q̅,i({p_i},q)⟩ + x/(1-x) |S̅^(0)_q̅,i({p_i})⟩
+ ∑_I ( 1/(x_I - x) - 1/x_I ) |R^(0)_q̅,i,I({p_i})⟩ .
The soft-pole and anti-soft pole contributions are given by a similar expression to (<ref>) for the case a_i ∈{q,q̅}:
|S^(0)_q̅,i({p_i})⟩ = ∑_j ≠ i𝐒𝐩𝐥𝐢𝐭^(0)_j,n+1 ← j(p_j,p_i,p_j) |M^(0)({p_i}) |a_i → q
a_j → ã_j⟩ ,
|S̅^(0)_q̅,i({p_i})⟩ = 𝐄_i,n+1∑_j ≠ i𝐒𝐩𝐥𝐢𝐭^(0)_j,n+1 ← j(p_j,p_i,p_j) |M^(0)({p_i}) |a_i → q̅
a_j → ã_j⟩ .
The splitting operator in Eq. (<ref>) corresponds to the transition a_jq̅←ã_j, while that in Eq. (<ref>) to a_jq ←ã_j. The constant contribution, |C^(0)_q̅,i({p_i},q)⟩, corresponds to the subleading term of the soft-anti-quark expansion of the collinear-quark amplitude. An expression for this term analogous to the LBK theorem is not yet known. Hence, it has to be evaluated by using the direct expression Eq. (<ref>) at a single convenient point. The residue contributions are obtained in analogy to Eq. (<ref>):
c_1,…,c_n+1;σ_1,…,σ_n+1R^(0)_q̅,i,I({p_i}) =
( x_I ( 1 - x_I ) )^-1/2 1/(2p_i · P_I)∑_σ c M^(0)_I({p_i},{σ_i},{c_i},σ,c) M^(0)_I({p_i},{σ_i},{c_i},σ,c) .
M^(0)_I({p_i},{σ_i},{c_i},σ,c) and M^(0)_I({p_i},{σ_i},{c_i},σ,c) are now the tree-level amplitudes for the respective processes:
0 →∑_j ∈ I a_j(p_j,σ_j,c_j) + q̅(x_I p_i,σ_n+1,c_n+1) + b(- P_I - x_I p_i,σ,c) and
0 →∑_j ∉I
j ≠ i a_j(p_j,σ_j,c_j) + q((1-x_I) p_i,σ_i,c_i) + b̅(P_I + x_I p_i,-σ,c) .
§.§ Collinear convolutions
The convolution of the jet operator with the collinear-gluon amplitude can be evaluated explicitly using Eqs. (<ref>), (<ref>) and (<ref>):
𝐏_g(σ, c) ∫_0^1 dx 𝐉_i^(1)(x, p_i, q) |H_g,i^(0) (x, { p_i } , q)⟩
= r_Γ/ϵ (1 - ϵ) (1 - 2 ϵ)(- μ^2/s_iq)^ϵϵ^* (q, p_i, σ ) ·ϵ (p_i, -σ) ∑_ c^'𝐏_g(-σ, c^')
{𝐓_i^c^'𝐓_i^c [ -1 - 2 ϵ/1 + ϵ(1 - 3 ϵ + (1 + ϵ) Σ_g,i) |S_g,i^(0)⟩ + (1 - 3 ϵ - (1 - ϵ) Σ_g,i ) |S̅_g,i^(0)⟩
+ (2 - 3 ϵ + ϵΣ_g,i) ( |C_g,i^(0)⟩ + dim(a_i) |S_g,i^(0)⟩) - ϵ/2(3 - Σ_g,i) |L_g,i^(0)⟩
+ ∑_I ϵ/2 x_I^2 (1 - x_I) ( 2x_I -2 x_I Σ_g,i - (2 - x_I - x_I Σ_g,i) _2F_1(1, 1 - ϵ, 3 - 2 ϵ, 1/x_I) ) |R_g,i,I^(0)⟩ ]
+ 𝐓_i^c 𝐓_i^c^' [ 1 - ϵ/1 + ϵ( 3 - 3 ϵ + (1 + ϵ) Σ_g,i) |S_g,i^(0)⟩ + ϵ/2 (3 - Σ_g,i) |S̅_g,i^(0)⟩
- 1/2(4 - 3 ϵ + ϵ Σ_g,i) ( |C_g,i^(0)⟩ + dim(a_i) |S_g,i^(0)⟩) + ϵ/2 (3 - 2 ϵ) (5 - 3 ϵ - (1 - ϵ) Σ_g,i) |L_g,i^(0)⟩
+ ∑_I ϵ/2 x_I^2 ( x_I + x_I Σ_g,i + (2 - x_I - x_I Σ_g,i) _2F_1(1, 1 - ϵ, 3 - 2 ϵ, 1/x_I) ) |R_g,i,I^(0)⟩ ] } ,
where:
r_Γ = Γ^2 (1 - ϵ)Γ (1 + ϵ)/Γ (1 - 2 ϵ) .
The coefficient of the ϵ-pole in Eq. (<ref>) for (anti-)quarks and gluons is provided in Eqs. (<ref>), (<ref>) and (<ref>) in Section <ref>. In order to approximate a finite remainder of a one-loop amplitude in the 't Hooft-Veltman scheme with Eq. (<ref>), it is sufficient to know the ϵ^0 term of the Laurent expansion of Eq. (<ref>):
[𝐏_g(σ, c) e^ϵγ_E∫_0^1 dx 𝐉_i^(1)(x, p_i, q) |H_g,i^(0) (x, { p_i } , q)⟩]_𝒪(ϵ^0)
= ϵ^* (q, p_i, σ ) ·ϵ (p_i, -σ) ∑_ c^'𝐏_g(-σ, c^') {𝐓_i^c^'𝐓_i^c [ (3 - Σ_g,i - (1 + Σ_g,i) ln(- μ^2/s_iq)) |S_g,i^(0)⟩
+ ( -2 Σ_g,i + (1 - Σ_g,i ) ln(- μ^2/s_iq)) |S̅_g,i^(0)⟩
+ (3 + Σ_g,i + 2 ln(-μ^2/s_iq)) ( |C_g,i^(0)⟩ + dim(a_i) |S_g,i^(0)⟩) - 1/2 (3 - Σ_g,i) |L_g,i^(0)⟩
- ∑_I 1/x_I(1 + Σ_g,i - (2 - x_I - x_I Σ_g,i) ln(1 - 1/x_I)) |R_g,i,I^(0)⟩ ]
+ 𝐓_i^c 𝐓_i^c^' [ (2 Σ_g,i + (3 + Σ_g,i) ln(- μ^2/s_iq)) |S_g,i^(0)⟩ + 1/2 (3 - Σ_g,i) |S̅_g,i^(0)⟩
- 1/2(9 + Σ_g,i + 4 ln(-μ^2/s_iq)) (|C_g,i^(0)⟩ + dim(a_i) |S_g,i^(0)⟩) + 1/6 (5 - Σ_g,i) |L_g,i^(0)⟩
+ ∑_I 1/2 x_I( 5 - 2 x_I + (1 - 2 x_I) Σ_g,i - 2 (1 - x_I) (2 - x_I - x_I Σ_g,i) ln(1 - 1/x_I)) |R_g,i,I^(0)⟩ ] } ,
where we have removed the Euler-Mascheroni constant γ_E as would be done in the M̅S̅ scheme.
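All of the above x-integrals reduce to Euler Beta functions and Gauss hypergeometric functions of argument 1/x_I. The residue-type master integrals can be checked numerically as in the following sketch (ours; scipy provides ₂F₁ and the quadrature, and x_I is placed outside [0,1] so that the integrand is regular):

```python
import numpy as np
from scipy import integrate, special

eps, xI = 0.3, 2.7   # finite epsilon and a pole location outside [0, 1]

# master integral behind the residue terms:
# int_0^1 dx (x(1-x))^(-eps)/(xI - x) = B(1-eps,1-eps) 2F1(1,1-eps;2-2eps;1/xI)/xI
lhs, _ = integrate.quad(lambda x: (x*(1.0 - x))**(-eps)/(xI - x), 0.0, 1.0)
rhs = special.beta(1.0 - eps, 1.0 - eps)*special.hyp2f1(1.0, 1.0 - eps, 2.0 - 2.0*eps, 1.0/xI)/xI
print(lhs, rhs)   # agree to quadrature accuracy

# variant with an extra factor (1-x), which produces the 2F1(1,1-eps;3-2eps;1/xI) seen above
lhs2, _ = integrate.quad(lambda x: (1.0 - x)*(x*(1.0 - x))**(-eps)/(xI - x), 0.0, 1.0)
rhs2 = special.beta(1.0 - eps, 2.0 - eps)*special.hyp2f1(1.0, 1.0 - eps, 3.0 - 2.0*eps, 1.0/xI)/xI
print(lhs2, rhs2)
```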
The convolution of the flavour-off-diagonal jet operator with the collinear-quark amplitude can be evaluated explicitly using Eqs. (<ref>) and (<ref>):
𝐏_i (σ_i, c_i) 𝐏_g(σ, c) ∫_0^1 dx 𝐉̃_i^(1) (x, p_i, q) |H_q̅, i^(0)(x, {p_i }, q) ⟩
= r_Γ/(1 - ϵ) (1 - 2 ϵ)(- μ^2/s_iq)^ϵϵ^*(q, p_i, σ) ·ϵ^*(p_i, σ_i) ∑_σ^' c^'∑_c_i^'𝐏_i (-σ^', c_i^') 𝐏_n + 1 (σ^', c^')
{(T_q^c_i T_q^c)_c^' c_i^' [ 2σ_iσ'|S_q̅,i^(0)⟩ + (1-(2-ϵ)σ_iσ'/ϵ+1/2(3-2ϵ))|S̅_q̅,i^(0)⟩ + (σ_iσ'-1/2(3-2ϵ))|C_q̅,i^(0)⟩
+ ∑_I 1/x_I(2x_I^2 - (1+2x_I)σ_iσ'+1/2(3-2ϵ)+x_I(1-2x_I+2σ_iσ') _2F_1 (1, 1 - ϵ, 2 - 2 ϵ, 1/x_I) )|R_q̅,i,I^(0)⟩]
+(T_q^c T_q^c_i)_c^' c_i^' [(2σ_iσ' - 1+2σ_iσ'/ϵ)|S_q̅,i^(0)⟩
+(σ_iσ'-1/2(3-2ϵ))|S̅_q̅,i^(0)⟩ + (σ_iσ'+1/2(3-2ϵ))|C_q̅,i^(0)⟩
+∑_I1/x_I(2x_I-2x_I^2-(1-2x_I)σ_iσ'-1/2(3-2ϵ)
+(1-x_I)(1-2x_I+2σ_iσ')_2F_1 (1, 1 - ϵ, 2 - 2 ϵ, 1/x_I)) |R_q̅,i,I^(0)⟩]} .
The 𝒪(ϵ^0) term of the Laurent expansion is given by:
[𝐏_i (σ_i, c_i) 𝐏_g(σ, c) e^ϵγ_E∫_0^1 dx 𝐉̃_i^(1) (x, p_i, q) |H_q̅, i^(0)(x, {p_i }, q) ⟩]_𝒪(ϵ^0)
= ϵ^*(q, p_i, σ) ·ϵ^*(p_i, σ_i) ∑_σ^' c^'∑_c_i^'𝐏_i (-σ^', c_i^') 𝐏_n + 1 (σ^', c^')
{(T_q^c_i T_q^c)_c^' c_i^' [2 σ_i σ^'|S_q̅,i^(0)⟩ + (19/6 - 5 σ_i σ^' + (1 - 2 σ_i σ^') ln(-μ^2/s_iq)) |S̅_q̅,i^(0)⟩ - (1/6 - σ_i σ^') |C_q̅,i^(0)⟩
+ ∑_I ( 1/6 x_I( 1 + 12 x_I^2 - 6 (1 + 2 x_I) σ_i σ^') - x_I (1 - 2 x_I + 2 σ_i σ^') ln(1 - 1/x_I)) |R_q̅,i,I^(0)⟩ ]
+ (T_q^c T_q^c_i)_c^' c_i^' [ - ( 3 + 4 σ_i σ^' + (1 + 2 σ_i σ^') ln( - μ^2/s_iq) ) |S_q̅,i^(0)⟩ - (1/6 - σ_i σ^') |S̅_q̅,i^(0)⟩
+ ( 1/6 + σ_i σ^') |C_q̅,i^(0)⟩ - ∑_I ( 1/6 x_I(1 - 12 x_I + 12 x_I^2 + 6 (1- 2 x_I) σ_i σ^')
+(1 - x_I) (1 - 2 x_I + 2σ_i σ^') ln(1 - 1/x_I) ) |R_q̅,i,I^(0)⟩ ] } .
§.§ Proof based on the expansion-by-regions method
Theorem <ref> has been obtained by applying the expansion-by-regions method <cit.> (see also Refs. <cit.>). The method is anchored in dimensional regularisation, and can be used to expand Feynman diagrams in any parameter. There are three difficulties: 1) identification of contributing regions, 2) appearance of unregulated integrals, 3) application to a large number of diagrams. Problem 1) has been solved for several standard expansions. The soft expansion has been analysed most recently in Refs. <cit.> albeit for soft-photon emissions. The most important observation is the appearance of a collinear region besides the expected hard and soft regions. Although the collinear region has been anticipated already in Ref. <cit.>, the latter analysis has been shown to be incomplete. Irrespective of the listed publications, the identification of contributing regions can nowadays be performed automatically with dedicated tools <cit.>. As far as problem 2) is concerned, it turns out that no unregulated integrals appear in the soft expansion considered here. Finally, problem 3) is alleviated by organising the contributions according to physical intuition.
The three contributing regions, hard, soft and collinear, are rather classes of regions defined by a scaling of the loop momentum w.r.t. the expansion parameter. In each class, an actual region is defined by a loop-momentum routing.
Momentum routing is, in fact, relevant in all but the hard region. The latter is defined by assuming that each component of the loop momentum is large compared to the expansion parameter. This region is the easiest to analyse: the respective Feynman integrands are obtained by Taylor expansion in the momentum shifts δ_i and the soft-gluon momentum q. It follows immediately that the hard-region contribution is given by the first term in Eq. (<ref>). This corresponds to Eq. (<ref>) upon replacement of tree-level amplitudes by their one-loop counterparts.
The soft and collinear regions present more subtleties and are analysed below. One important property should already be stressed at this point. Each region has a different d-dimensional scaling w.r.t. the expansion parameter. Hence, each region is gauge-invariant on its own. We will exploit this property to make the calculations as simple as possible. The only subtle point is that some gauges, e.g. the lightcone gauge, may generate additional singularities and hence additional regions. These unphysical regions must cancel entirely upon summation of the contributions in a given class due to the gauge invariance of the original amplitude. With the choices made below, no unphysical regions appear in the first place.
§.§.§ Soft regions
In any soft region, the loop momentum, l, is assumed to be of the order of the soft-gluon momentum, l^μ = 𝒪(λ). A particular soft region is defined by selecting a pair of external partons i,j. We differentiate between flavour-diagonal, Fig. <ref>, and flavour-off-diagonal contributions, Fig. <ref>. In principle, the soft gluon may attach anywhere else on the visible lines in Figs. <ref> and <ref>. However, a scaling argument demonstrates that the shown topologies are the only ones that yield non-vanishing integrals after expansion, since alternative topologies result in scaleless integrals.
The momentum routing in the (i,j)-soft region is specified in Fig. <ref>. The calculation is conveniently performed in the Feynman gauge. The matrix element represented by the shaded circle is expanded in δ_l, l and q just as in Section <ref>. In the case of flavour-off-diagonal diagrams, the expansion is trivial and amounts to setting these parameters to zero. Tensor integrals are reduced to scalar integrals with Passarino-Veltman reduction <cit.>. The diagrams are expressed in terms of a single non-vanishing integral:
I^soft = μ^2ϵ∫d^d l/i π^d/2(p_i + δ_i) · (p_j + δ_j)/[l^2 +i0^+ ] [(l + q)^2 + i0^+] [(p_i + δ_i) · (l + q) + i0^+] [-(p_j + δ_j) · l + i0^+]
= r_Soft/ϵ^24s^(δ)_ij/s^(δ)_iq s^(δ)_jq(- μ^2 s^(δ)_ij/s^(δ)_iq s^(δ)_jq)^ϵ ,
where we have not yet expanded in δ_i, δ_j. r_Soft has been defined in (<ref>) while the invariants s^(δ)_… in (<ref>). The results are summarised in Eqs. (<ref>) and (<ref>). They have all the desired properties: they satisfy the Ward identity w.r.t. the soft-gluon momentum, they are expressed through gauge-invariant reduced scattering amplitudes, and the occurring differential operators are consistent with on-shellness and momentum conservation. As expected, each of these properties applies in a single (i,j)-soft region. Notice, however, that momentum conservation requires symmetrisation w.r.t. i and j due to the fact that Eq. (<ref>) is written in a non-symmetric form.
§.§.§ Collinear regions
A particular collinear region is defined by selecting a parton i whose momentum specifies the collinear direction n with n ∝ p_i. An anti-collinear direction n̅, with n̅^2 = 0 and n̅ not proportional to n, must also be specified. In principle, the only natural choice is n̅ ∝ q. In the following, we will nevertheless keep n̅ generic, albeit normalised to conveniently satisfy n ·n̅ = 1/2. An arbitrary vector k can now be decomposed as follows:
k = k_+ n + k_- n̅ + k_⊥ , k_±∈ℝ , k_⊥· n = k_⊥·n̅ = 0 , k_⊥^2 ≤ 0 , k^2 = k_+ k_- + k_⊥^2 .
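For concreteness, the decomposition can be implemented as follows (a small sketch of ours; k_± are obtained by projecting on the opposite lightcone vector because n · n̅ = 1/2):

```python
import numpy as np

def mdot(a, b):
    return a[0]*b[0] - np.dot(a[1:], b[1:])

n    = 0.5*np.array([1.0, 0.0, 0.0,  1.0])   # n^2 = 0
nbar = 0.5*np.array([1.0, 0.0, 0.0, -1.0])   # nbar^2 = 0, n.nbar = 1/2

def lc_decompose(k):
    k_plus  = 2.0*mdot(k, nbar)              # coefficient of n
    k_minus = 2.0*mdot(k, n)                 # coefficient of nbar
    k_perp  = k - k_plus*n - k_minus*nbar
    return k_plus, k_minus, k_perp

k = np.array([5.0, 1.2, -0.7, 3.9])
k_plus, k_minus, k_perp = lc_decompose(k)

print(mdot(k, k), k_plus*k_minus + mdot(k_perp, k_perp))   # identical
print(mdot(k_perp, n), mdot(k_perp, nbar))                 # both vanish
```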
The expanded amplitude will be calculated in the lightcone gauge with gauge vector n̅. The use of a physical gauge simplifies the analysis of the singularity structure of diagrams and is particularly important in the study of collinear radiation. In particular, our gauge choice yields results that do not necessitate derivatives of process-dependent scattering amplitudes. This is at variance with Ref. <cit.>, where tests of a factorisation formula for soft-photon radiation were performed in the Feynman gauge, which led to the appearance of different jet operators than ours. Finally, the disappearance of n̅ from the final expressions will serve as a test of independence from the particular physical gauge chosen.
The routing of the loop momentum l is specified in Fig. <ref> for the three topologies characteristic of the i-collinear region. The integration measure is given by:
d^d l = 1/2 dl_+ dl_- d^(d-2)l_⊥ .
Expansion in λ is performed according to:
l_+ = 𝒪(1) , l_⊥ = 𝒪(λ^1/2) , l_- = 𝒪(λ) .
Propagator denominators are, therefore, approximated as follows:
(l+q)^2 + i0^+ ≈ l_+ (l_- + q_-) + l_⊥^2 + i0^+ ,
(l-p_i)^2 + i0^+ ≈ (l_+ - p_i+) l_- + l_⊥^2 + i0^+ ,
(l-p_i+q)^2 + i0^+ ≈ (l_+ - p_i+) (l_- + q_-) + l_⊥^2 + i0^+ .
Expansion of the actual propagators generates, of course, further terms polynomial in q_-, l_- and l_⊥ accompanied by higher powers of the propagator denominators. The part of the integrand represented by the shaded circle in Fig. <ref> must also be expanded according to (<ref>). Hence, this part depends non-trivially on l_+, while any dependence on l_- and l_⊥ is introduced through differential operators ( l_- ∂_l_- )^k_1 ( l_⊥·∂_l_⊥ )^k_2 with the derivatives evaluated at vanishing l_- and l_⊥. One can factor out p_i+ and q_- from the integrand term-by-term. This is achieved by the change of variable:
l_+ ≡ x p_i+ ,
and the rescalings l_- → q_- l_-, l_⊥^2 → p_i+ q_- l_⊥^2. In consequence, expanded integrals are proportional to ( p_i+ q_- )^-ϵ = ( p_i · q )^-ϵ. Furthermore, both p_i+ and q_- must be present in the propagator denominators without possibility to remove them by loop-momentum shifts, or otherwise a given integral is scaleless.
After expansion, integration over l_- can be performed by closing the integration contour in the upper complex half-plane, and taking residues at:
l_⊥^2 + i0^+/-l_+ , - q_- + l_⊥^2 + i0^+/-l_+ , l_⊥^2 + i0^+/p_+ - l_+ , - q_- + l_⊥^2 + i0^+/p_+ - l_+ .
The first two of the residues contribute only for l_+ < 0, while the second two only for l_+ < p_+. The final integration over l_⊥ effectively only involves (d-2)-dimensional massive vacuum integrals. For this reason, any contribution odd in l_⊥ vanishes.
In the case of collinear-region contributions depicted in Figs. <ref> and <ref> the loop-momentum integration can be performed explicitly. In particular, denoting by σ,c and σ_i,c_i the helicity and colour of the soft-gluon and parton i respectively, there is:
Fig. <ref> = r_Γ( -μ^2/s_iq)^ϵ𝐏_i(σ_i,c_i) 𝐓_i^c ϵ^*_μ(q,p_i,σ) u̅(p_i,σ_i) [ C_F-C_A/1-2ϵγ^μq/2p_i· q
- 1/1-ϵ( 2C_F/1-2ϵ+C_A/ϵ) γ^μn̅/2p_i·n̅ - 2/1-ϵ(C_F/ϵ-C_A/1+ϵ) n̅^μ/p_i·n̅] u̅(p_i,σ_i)|M^(0)⟩ ,
Fig. <ref> = r_Γ( -μ^2/s_iq)^ϵ𝐏_i(σ_i,c_i) 𝐓_i^c ϵ^*_μ(q,p_i,σ) ϵ_β^*(p_i,σ_i)
×{ - C_A [ 1/1-2ϵ( 1/3-2ϵg^μβ q^α/p_i· q + 1/(1-ϵ)ϵg^μβn̅^α/p_i·n̅)
+ 2/(1-ϵ)(1+ϵ)ϵn̅^μ g^βα/p_i·n̅]
+ T_F n_l 2/(1-ϵ)(1-2ϵ)(3-2ϵ)g^μβ q^α/p_i· q}ϵ^*_α(p_i,σ_i)|M^(0)⟩ ,
with r_Γ defined in (<ref>).
The remaining collinear-region contributions require the knowledge of the x-dependence of the part of the integrand represented by the shaded circle in Fig. <ref>. It turns out that no derivatives in l_-, l_⊥ are needed at 𝒪(λ^0), since contributions containing differential operators ( l_- ∂_l_- )^k_1 ( l_⊥·∂_l_⊥ )^k_2, 2k_1+k_2 ≤ 2 cancel. Hence, integration over l_-, l_⊥ only involves the subdiagrams depicted in Figs. <ref>, <ref> and <ref>. The results are as follows:
Fig. <ref>≡𝐉^ν,c'_q = Γ(1+ϵ)/1-ϵ(- μ^2/s_iq)^ϵ (x(1-x))^-ϵ 𝐏_i(σ_i,c_i) (𝐓_i^c𝐓_i^c' + 1/x i f^cdc'𝐓_i^d )
×ϵ^*_μ(q,p_i,σ) u̅(p_i,σ_i) [ 2 g^μν - 2n̅^μ p_i^ν/n̅· p_i + x ( - γ^μγ^ν + γ^μn̅ p_i^ν/n̅· p_i) ] ,
Fig. <ref>≡𝐉^αν,c'_g = Γ(1+ϵ)/1-ϵ(- μ^2/s_iq)^ϵ(x(1-x))^-ϵ 𝐏_i(σ_i,c_i) (𝐓_i^c𝐓_i^c' + 1/x i f^cdc'𝐓_i^d )
×ϵ^*_μ(q,p_i,σ) ϵ^*_β(p_i,σ_i) [ ( δ_ρ^μ - p_iρn̅^μ/p_i·n̅) (
δ^β_σ-p_iσn̅^β/p_i·n̅) ( g^ρα g^σν - g^ρν g^σα )
+ x g^μβ(g^να-p_i^νn̅^α + p_i^αn̅^ν/p_i·n̅) - 1/1-x(g^μα - p_i^αn̅^μ/p_i·n̅)(g^νβ-p_i^νn̅^β/p_i·n̅)] ,
Fig. <ref>≡J̃_c'c_i' = - Γ(1 + ϵ)/1 - ϵ(- μ^2/s_iq)^ϵ(x(1-x))^-ϵ(T^c T^c_i + i x f^cdc_i T^d )_c' c_i^'
×ϵ^*_μ(q,p_i,σ) ϵ^*_β(p_i,σ_i) p_i ( γ^μγ^β - 2 x g^μβ) .
The contributions of the residues in l_- at the points listed in (<ref>) conspire to cancel unless:
x ∈ [0,1] .
Since Figs. <ref> and <ref> have the structure of Fig. <ref>, one might expect that the results presented in Eqs. (<ref>) and (<ref>) can be obtained by integrating 𝐉^ν,c'_q, 𝐉^αν,c'_g and J̃_c'c_i' with appropriate functions of x. This is indeed the case:
Fig. <ref> = ∫_0^1 x𝐉^ν,c'_q 𝐓_i^c'1/p_i · q( - 1/2γ_νq - 1/x q_ν) u̅(p_i,σ_i)|M^(0)⟩ ,
Fig. <ref> = ∫_0^1 x𝐉^αν,c'_g 𝐓_i^c'1/p_i · q( -(1-2x) g_αν q_β - q_ν g_αβ/x + q_α g_νβ/1-x) ϵ_β^*(p_i,σ_i)|M^(0)⟩
+ n_l ∫_0^1 x[ J̃_c'c_i'q/p_i · q ] T^c_i”_c_i'c'q/p_i · q·ϵ^*(p_i,σ_i”)𝐏_i(σ_i”,c_i”) |M^(0)⟩ .
The choice of the helicity σ_i” in the contribution proportional to n_l in Eq. (<ref>) does not affect the result.
The relevance of Eqs. (<ref>) and (<ref>) becomes apparent upon consulting the expressions for the collinear-gluon and collinear-quark amplitudes, (<ref>), (<ref>), (<ref>) and (<ref>). Clearly, soft-gluon emissions from external lines are correctly accounted for by the convolutions of either 𝐉^ν,c'_q with |H^(0)_g,i⟩, or of 𝐉^αν,c'_g with |H^(0)_g,i⟩ and J̃_c'c_i' with |H^(0)_q̅,i⟩. In both cases, it is still necessary to remove the external wave functions of partons i and n+1 from the collinear amplitudes. The convolutions thus provide the entirety of the contribution of the i-collinear region.
At this point we recall what has been proven in Section <ref>, namely that |H^(0)_g,i⟩ satisfies the Ward identity w.r.t. any gluon. Hence, terms proportional to p_i^ν in Eqs. (<ref>), (<ref>) and additionally to p_i^α in Eq. (<ref>) vanish after contraction with the collinear-gluon amplitude. Equivalently, removing n̅-dependent terms from 𝐉^ν,c'_q and 𝐉^αν,c'_g does not affect the i-collinear-region contribution. In consequence, our results do not depend on the anti-collinear direction and the particular physical gauge used to derive them.
The result for the jet operator (<ref>) for a_i = q now directly follows from Eqs. (<ref>) and (<ref>). In order to obtain (<ref>) for a_i = g, it is necessary to first transform Eq. (<ref>) by exploiting the symmetry of the collinear-gluon amplitude w.r.t. gluons i and n+1 together with the Jacobi identity in the form:
( T^c_g T^c'_g + 1/x if^cdc' T_g^d )_c_ic_i' = ( 1-x/x T^c_g T^c_i'_g + 1/x if^cdc_i' T_g^d )_c_ic' .
Finally, Eq. (<ref>) is obtained from Eq. (<ref>) with the help of the replacement:
p_i = - ∑_σ_i v(p_i,-σ_i) u̅(p_i,σ_i) .
§.§.§ Spurious-pole cancellation
Eq. (<ref>) has been obtained with the expansion-by-regions method. Each region, i.e. hard, (i,j)-soft and i-collinear, contributes spurious poles in ϵ due to the unrestricted loop-momentum integration domain. The proof of Eq. (<ref>) is therefore complete when it is shown that all spurious poles cancel. To this end, it is necessary to independently derive an expression for the singularities of the soft-gluon-emission amplitude, expand this result in the soft-gluon momentum and verify agreement with the first two terms of the Laurent expansion of Eq. (<ref>).
The coefficients of the singular ϵ-expansion terms of an n-parton one-loop amplitude |M_n^(1)({k_i})⟩ are contained in the 𝐈^(1)_n-operator <cit.>:
|M_n^(1)({k_i})⟩ = 𝐈^(1)_n({k_i}) |M_n^(0)({k_i})⟩ + ϵ^0 .
In the purely massless case, there is:
𝐈^(1)_n({k_i}) = - 1/ϵ^2∑_i C_i + 1/ϵ∑_i ≠ j𝐓_i ·𝐓_j ln(- μ^2/2 k_i · k_j + i0^+) + 1/2 ϵ∑_i γ_0^i + n-2/2β_0/ϵ .
The last term proportional to the β-function coefficient β_0 is of ultraviolet origin, while the remaining terms are due to soft and collinear singularities. C_i is either the quadratic Casimir operator of the fundamental representation, C_F = T_F (N_c^2-1) / N_c, N_c = 3, if i is a (anti)-quark, or of the adjoint representation, C_A = 2 T_F N_c, if i is a gluon. The anomalous dimensions are given by:
γ_0^q = -3 C_F , γ_0^g = - β_0 = - 11/3 C_A + 4/3 T_F n_l .
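For reference, with the normalisation T_F = 1/2 these constants take their familiar numerical values; a trivial evaluation (our own snippet, with n_l chosen arbitrarily) for N_c = 3 reads:

```python
# Casimirs and anomalous dimensions entering the I-operator (normalisation T_F = 1/2)
N_c, T_F, n_l = 3, 0.5, 5

C_F = T_F*(N_c**2 - 1)/N_c            # = 4/3
C_A = 2*T_F*N_c                       # = 3
beta_0  = 11.0/3.0*C_A - 4.0/3.0*T_F*n_l
gamma_q = -3.0*C_F                    # = -4
gamma_g = -beta_0

print(C_F, C_A, beta_0, gamma_q, gamma_g)
```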
For the setup relevant to the present publication, there is:
|M_g^(1)({p_i+δ_i},q)⟩ = 𝐈^(1)_n+1({p_i+δ_i},q) |M_g^(0)({p_i+δ_i},q)⟩ + ϵ^0
= 𝐈^(1)_n+1({p_i+δ_i},q) ( 𝐒^(0)({p_i},{δ_i},q) |M^(0)({p_i})⟩ + λ) + ϵ^0 ,
with:
𝐏_g(σ,c) 𝐈^(1)_n+1({p_i+δ_i},q) 𝐒^(0)|M^(0)⟩ = 𝐏_g(σ,c) 𝐒^(0) 𝐈_n^(1)({p_i}) |M^(0)⟩
+ ∑_j ( 𝐓_j^c ⊗𝐒^(0)_j 𝐈_n^(1)({p_i}) - 𝐈_n^(1)({p_i+δ_i}) 𝐓_j^c ⊗𝐒^(0)_j
+ ( 1/ϵ^2 C_A δ^cb - 2/ϵ∑_i i f^abc𝐓_i^a ln(- μ^2/s^(δ)_iq) ) 𝐓_j^b ⊗𝐒^(0)_j ) |M^(0)⟩ .
The r.h.s. has already been arranged to exhibit the singularities of the first term in Eq. (<ref>):
𝐒^(0) |M^(1)⟩ = 𝐒^(0) 𝐈_n^(1)|M^(0)⟩ + ϵ^0 .
Moreover, we have only made explicit those arguments of the occurring operators that require careful consideration. Further manipulation yields:
𝐏_g(σ,c) 𝐈^(1)_n+1 𝐒^(0)|M^(0)⟩ = 𝐏_g(σ,c) 𝐒^(0) 𝐈_n^(1)|M^(0)⟩
+ 2/ϵ^2∑_i ≠ j i f^abc𝐓_i^a 𝐓_j^b ⊗( 1 + ϵln( - μ^2 s_ij^(δ)/s_iq^(δ) s_jq^(δ) ) ) 𝐒^(0)_i |M^(0)⟩
- 2/ϵ∑_i ≠ j𝐓_i^c 𝐓_i ·𝐓_j p_i^μ p_j^ν/p_i · p_jiF_μν/p_i · q|M^(0)⟩ + λ .
Contrary to Eq. (<ref>), Eq. (<ref>) does not contain flavour-off-diagonal contributions. Hence, their poles are entirely spurious. We begin the verification of spurious-pole cancellation with the flavour-diagonal contributions.
Expansion of the soft operator (<ref>) acting on the hard matrix element yields:
𝐏_g(σ,c) 𝐒^(1) |M^(0)⟩ = 2/ϵ^2∑_i ≠ j i f^abc𝐓_i^a 𝐓_j^b ⊗( 1 + ϵln( - μ^2 s_ij^(δ)/s_iq^(δ) s_jq^(δ) ) ) 𝐒^(0)_i |M^(0)⟩
+ 2/ϵ ∑_i ≠ j i f^abc𝐓_i^a 𝐓_j^b ⊗1/p_i · p_j( p_i^μ p_j^ν - p_j^μ p_i^ν/p_i · q + p_j^μ p_j^ν/p_j · q) F_μρ ( J_i - 𝐊_i )^νρ |M^(0)⟩ + ϵ^0 .
Part of the flavour-diagonal pole contributions generated by the convolution of the jet operator (<ref>) with the collinear-gluon amplitude (<ref>) is obtained using Eqs. (<ref>) and (<ref>):
𝐏_g(σ,c) ∫_0^1 x∑_i 𝐉_i^(1)( ( 1/x + (a_i) ) |S^(0)_g,i⟩ + |C^(0)_g,i⟩) = - 2/ϵ∑_i ≠ j𝐓_i^c 𝐓_i ·𝐓_j p_i^μ p_j^ν/p_i · p_jiF_μν/p_i · q|M^(0)⟩
- 2/ϵ ∑_i ≠ j i f^abc𝐓_i^a 𝐓_j^b ⊗1/p_i · p_j( p_i^μ p_j^ν - p_j^μ p_i^ν/p_i · q + p_j^μ p_j^ν/p_j · q) F_μρ ( J_i - 𝐊_i )^νρ |M^(0)⟩
+ 1/ϵ∑_i ≠ j( 1 - 2(a_i) ) i f^abc𝐓_i^a 𝐓_j^b ⊗p_i^ρ iF_ρμ/p_i · q( p_j^μ/p_j · p_i + ( p_j/p_j · p_i - q/q · p_i)_σ i 𝐊_i^σμ) |M^(0)⟩
+ ϵ^0 .
If parton i is a gluon, then the soft singularity of the collinear-gluon amplitude at x = 1 yields the remaining flavour-diagonal pole contributions. The result is conveniently obtained by rewriting Eq. (<ref>) in an equivalent form using the Jacobi identity to transform the colour operators and the last of Eqs. (<ref>) to transform the spin operator:
𝐏_g(σ,c) 𝐉_i^(1)(x,p_i,q) = Γ(1+ϵ)/1-ϵ( -μ^2/s_iq)^ϵ( x(1-x) )^-ϵ∑_σ'c'ϵ^*_μ(q,p_i,σ) ϵ_ν(p_i,σ')
×[ i f^cdc'𝐓^d_i ⊗( - g^μν + i 𝐊_i^μν) ] 𝐏_g(σ',c') 𝐄_i,n+1 + terms proportional to (1-x) .
Convolution using Eq. (<ref>) yields:
𝐏_g(σ,c) ∫_0^1 x∑_i ( 1 - 2(a_i) ) 𝐉_i^(1) x/1-x|S̅^(0)_g,i⟩ = - 1/ϵ∑_i ≠ j( 1 - 2(a_i) )
× i f^abc𝐓_i^a 𝐓_j^b ⊗p_i^ρ iF_ρμ/p_i · q( p_j^μ/p_j · p_i + ( p_j/p_j · p_i - q/q · p_i)_σ i 𝐊_i^σμ) |M^(0)⟩ + ϵ^0 .
Clearly, the sum of the r.h.s. of Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) is equal to the r.h.s. of Eq. (<ref>) up to terms of 𝒪(λ) and 𝒪(ϵ^0). This completes the proof of Eq. (<ref>) for the flavour-diagonal contributions.
Let us turn to the poles of flavour-off-diagonal contributions in Eq. (<ref>), and prove that the poles generated by the flavour-off-diagonal soft operator (<ref>) are cancelled by the poles generated by the convolution of the jet operator (<ref>) with the anti-soft-pole contribution (<ref>) for a_i ∈{q,q̅} and by the convolutions of the flavour-off-diagonal jet operator (<ref>) with the soft-pole and anti-soft-pole contributions (<ref>) and (<ref>). These three convolutions are given by:
∫_0^1 x x/1-x …,c_i,…,c;…,σ_i,…,σ𝐉_i^(1)S̅^(0)_g,i = 1/ϵ ϵ^*_μ(q,p_i,σ) ∑_j ≠ i∑_σ_i'c_i'∑_σ_j'c_j'∑_σ'c'
ϵ_ν(p_i,σ_i') ( T_a_i^c_i' T_a_i^c )_c_i c'( g^μν1 - 2iK_a_i^μν(p_i) )_σ_iσ'c_j,c';σ_j,σ'𝐒𝐩𝐥𝐢𝐭^(0)_a_ja_i ←ã_j(p_j,p_i,p_j)c_j';σ_j'
×…,c_i',…,c_j',…;…,σ_i',…,σ_j',…M^(0)({p_i}) |a_i → g
a_j → ã_j + ϵ^0
≡1/ϵ∑_j ≠ i∑_σ_i'c_i'∑_σ_j'c_j' J^(1,-1)_a_i a_j ← g ã_j…,c_i',…,c_j',…;…,σ_i',…,σ_j',…M^(0)({p_i}) |a_i → g
a_j → ã_j + ϵ^0 ,
∫_0^1 x 1/x …,c_i,…,c;…,σ_i,…,σ𝐉̃_i^(1)S^(0)_q̅,i = 1/ϵ ϵ^*_μ(q,p_i,σ) ϵ^*_ν(p_i,σ_i) ∑_j ≠ i∑_σ_i'c_i'∑_σ_j'c_j'∑_σ'c'
( T^c T^c_i)_c'c_i'( - g^μν 1 - 2 iK_q^μν(p_i) )_-σ'σ_i'c_j,c';σ_j,σ'𝐒𝐩𝐥𝐢𝐭^(0)_a_jq̅ ← ã_j(p_j,p_i,p_j)c_j';σ_j'
×…,c_i',…,c_j',…;…,σ_i',…,σ_j',…M^(0)({p_i}) |a_i → q
a_j → ã_j + ϵ^0
≡1/ϵ∑_j ≠ i∑_σ_i'c_i'∑_σ_j'c_j'J̃^(1,-1)_a_i a_j ← q ã_j…,c_i',…,c_j',…;…,σ_i',…,σ_j',…M^(0)({p_i}) |a_i → q
a_j → ã_j + ϵ^0 ,
∫_0^1 x x/1-x …,c_i,…,c;…,σ_i,…,σ𝐉̃_i^(1)S̅^(0)_q̅,i = 1/ϵ ϵ^*_μ(q,p_i,σ) ϵ^*_ν(p_i,σ_i) ∑_j ≠ i∑_σ_i'c_i'∑_σ_j'c_j'∑_σ'c'
( T^c_i T^c )_c_i'c'( g^μν 1 - 2 iK_q^μν(p_i) )_-σ_i'σ'c_j,c';σ_j,σ'𝐒𝐩𝐥𝐢𝐭^(0)_a_jq ← ã_j(p_j,p_i,p_j)c_j';σ_j'
×…,c_i',…,c_j',…;…,σ_i',…,σ_j',…M^(0)({p_i}) |a_i → q̅
a_j → ã_j + ϵ^0
≡1/ϵ∑_j ≠ i∑_σ_i'c_i'∑_σ_j'c_j'J̃^(1,-1)_a_i a_j ← q̅ã_j…,c_i',…,c_j',…;…,σ_i',…,σ_j',…M^(0)({p_i}) |a_i → q̅
a_j → ã_j + ϵ^0 .
Substitution of the splitting operators listed in Section <ref> and application of the definitions (<ref>) of the spin operators yields:
J^(1,-1)_q q̅ ← g g = -1/2 p_i · p_j( T^c_i' T^c T^c_j')_c_i c_j u̅(p_i,σ_i) ϵ(p_i,σ_i') ϵ^*(q,p_i,σ) ϵ(p_j,σ_j') v(p_j,σ_j) ,
J^(1,-1)_q g ← g q = -1/2 p_i · p_j( T^c_i' T^c T^c_j)_c_i c_j' u̅(p_i,σ_i) ϵ(p_i,σ_i') ϵ^*(q,p_i,σ) ϵ^*(p_j,σ_j) u(p_j,σ_j') ,
J^(1,-1)_q̅ g ← g q̅ = + 1/2 p_i · p_j( T^c_j T^c T^c_i')_c_j' c_i v̅(p_j,σ_j') ϵ^*(p_j,σ_j) ϵ^*(q,p_i,σ) ϵ(p_i,σ_i') v(p_i,σ_i) ,
J̃^(1,-1)_g q ← q g = + 1/2 p_i · p_j( T^c_j' T^c T^c_i)_c_j c_i' u̅(p_j,σ_j) ϵ(p_j,σ_j') ϵ^*(q,p_i,σ) ϵ^*(p_i,σ_i) v(p_i,-σ_i') ,
J̃^(1,-1)_g g ← q q̅ = -1/2 p_i · p_j( T^c_j T^c T^c_i)_c_j' c_i' v̅(p_j,σ_j') ϵ^*(p_j,σ_j) ϵ^*(q,p_i,σ) ϵ^*(p_i,σ_i) v(p_i,-σ_i') ,
J̃^(1,-1)_g g ← q̅ q = -1/2 p_i · p_j( T^c_i T^c T^c_j)_c_i' c_j' u̅(p_i,-σ_i')ϵ^*(p_i,σ_i) ϵ^*(q,p_i,σ) ϵ^*(p_j,σ_j) u(p_j,σ_j') ,
J̃^(1,-1)_g q̅ ← q̅ g = -1/2 p_i · p_j( T^c_i T^c T^c_j')_c_i' c_j u̅(p_i,-σ_i') ϵ^*(p_i,σ_i) ϵ^*(q,p_i,σ) ϵ(p_j,σ_j') v(p_j,σ_j) .
Bi-spinors depending on -σ_i' are subsequently replaced by bi-spinors depending on +σ_i' according to Eq. (<ref>). The resulting expressions can be further simplified using:
…ϵ^*(q,p_i,σ) … = - 1/2 p_i · p_j…p_j ϵ^*(q,p_i,σ) p_i … or
…ϵ^*(q,p_i,σ) … = - 1/2 p_i · p_j…p_i ϵ^*(q,p_i,σ) p_j … ,
where the dots stand for the factors occurring in Eqs. (<ref>), and the first equality applies if the left factor depends on p_i, while the second equality applies if the left factor depends on p_j. It can now be easily verified using:
∑_σ_i” v(p_i,σ_i”) v̅(p_i,σ_i”) = p_i , ∑_σ_i” u(p_i,σ_i”) u̅(p_i,σ_i”) = p_i ,
∑_σ_j” v(p_j,σ_j”) v̅(p_j,σ_j”) = p_j , ∑_σ_j” u(p_j,σ_j”) u̅(p_j,σ_j”) = p_j ,
that each pole coefficient listed in (<ref>) cancels a respective pole coefficient in Eq. (<ref>). This completes the proof of Eq. (<ref>) for the flavour-off-diagonal contributions.
§.§ Numerical tests
Although theorem (<ref>) has been strictly proven in Section <ref>, it is still a useful and instructive exercise to verify the formulae of Sections <ref>, <ref> and <ref> on actual amplitudes. In this section, we numerically evaluate the ϵ^0 coefficient of the Laurent expansion of |M_g^(1)⟩ for several processes and compare it to the result of the soft expansion. For a stringent test, we consider processes that involve up to six hard partons, both incoming and outgoing, multiple quark flavours and colour-neutral particles. The list can be read off of Figs. <ref> and <ref>.
Let us define the difference between the exact and the approximate amplitude:
Δ_LP/NLP≡1/N∑_singular colour flows {c}, helicities {σ}|[⟨{c,σ}|M_g^(1)⟩ - ⟨{c,σ}|M^(1)_g⟩_LP/NLP]_𝒪(ϵ^0)/[⟨{c,σ}|M_g^(1)⟩]_𝒪(ϵ^0)| ,
where LP (leading power) stands for soft expansion up to 1/λ, while NLP (next-to-leading power) up to λ^0. The sum runs over all colour-flow and helicity configurations for which the amplitude has a soft singularity. The number of such configurations is denoted by N. The one-loop n-particle amplitudes |M^(1)⟩ as well as their derivatives are calculated with Recola <cit.> linked to Collier <cit.> for the evaluation of tensor and scalar one-loop integrals. For the evaluation of the one-loop (n+1)-particle amplitudes, |M_g^(1)⟩, we instead link Recola to CutTools <cit.> for tensor reduction and OneLOop <cit.> for the evaluation of scalar integrals at quadruple precision. Finally, for the evaluation of the collinear amplitudes, we use Eqs. (<ref>) and (<ref>) implemented by calling AvH <cit.> with replaced spinors and polarisation vectors of the external particles as appropriate. The x-dependence of the collinear amplitudes is obtained at first by rational-function fitting. Subsequently, we verify that the results agree with those obtained by direct evaluation with the formulae from the last paragraph of Section <ref>. A subtlety arises from the fact that amplitudes for different processes are involved in the computation of (<ref>). Indeed, the global sign of the amplitudes depends on the external fermion ordering and the algorithm used. Therefore, for the flavour-off-diagonal contributions, we have to compensate the differences between the software tools by including appropriate signs to obtain the correct result.
Δ_LP/NLP is expected to have the following behaviour:
Δ_LP = (c_0+c_1logλ+c_2log^2λ)λ + 𝒪(λ^2) ,
Δ_NLP = (d_0+d_1logλ+d_2log^2λ)λ^2 + 𝒪(λ^3) .
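In practice, the scaling above can be verified by fitting the measured differences in the basis {λ, λ ln λ, λ ln^2 λ} (and the analogous λ^2 basis at NLP). A schematic fit of this kind (our own sketch; the data are synthetic stand-ins for measured Δ_LP values):

```python
import numpy as np

# synthetic stand-in for measured Delta_LP values, following the expected LP scaling
# plus an O(lambda^2) contamination
lams = np.logspace(-5.0, -2.0, 16)
c0, c1, c2 = 0.8, -1.9, 0.45
delta = (c0 + c1*np.log(lams) + c2*np.log(lams)**2)*lams + 3.0*lams**2

# least-squares fit in the basis {lambda, lambda*log, lambda*log^2}
A = np.stack([lams, lams*np.log(lams), lams*np.log(lams)**2], axis=1)
fit, *_ = np.linalg.lstsq(A, delta, rcond=None)
print(fit)   # close to (c0, c1, c2); residuals are dominated by the lambda^2 term
```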
This behaviour is reproduced for the three example processes in Fig. <ref> as much as numerical precision permits. Fig. <ref> shows, split by helicity configuration, the results for the process:
q(σ_1)+q̅(σ_2)→ g(σ_3)+g(σ_4)+g(q,σ_5),
where q is the soft momentum, and hard-momentum and colour dependence are suppressed for brevity. For most configurations, the test results show a strong improvement between LP and NLP in line with Fig. <ref>. However, in the case σ_3=σ_4≠σ_5, the improvement is less pronounced while still remaining consistent with (<ref>). This spin configuration is distinguished by the fact that it does not produce any logarithms of the soft momentum through next-to-leading power. For example, the flavour-diagonal soft-region contribution is proportional to the tree-level amplitude of the process:
q(σ_1)+q̅(σ_2)→ g(σ_3)+g(σ_4),
which vanishes if σ_3=σ_4 due to helicity conservation. It is not hard to convince oneself that all flavour-off-diagonal soft-region contributions vanish in full analogy. The flavour-diagonal collinear region does not contribute because the collinear hard function is derived from the subleading collinear behaviour of the process:
q(σ_1)+q̅(σ_2)→ g(σ_3)+g(σ_4)+g(-σ_5),
which follows from the full process definition (<ref>) and the properties of the jet operator. In particular, the occurrence of -σ_5 can be conveniently read off Eq. (<ref>). Again, this process vanishes at tree level for σ_3=σ_4≠σ_5 due to helicity conservation. Finally, the flavour-off-diagonal jet operator is only non-zero if σ_i=σ_5 for a_i=g, i.e. i∈{3,4}, which is not fulfilled for the considered helicity configuration. Altogether, only the hard-region contribution to Eq. (<ref>), 𝐒^(0)|M^(1)⟩, is non-zero for the considered spin configuration. While next-to-next-to-leading-power contributions to the soft expansion are not discussed in the present publication, the behaviour observed in Fig. <ref> shows that one can expect soft logarithms starting to appear there, implying a less-constrained helicity structure. The poorer numerical behaviour is not expected to pose a problem in practical applications because for squared amplitudes summed over colour and helicity, the helicity configurations which contain soft logarithms already at leading power dominate numerically in the soft momentum region.
§ NEXT-TO-LEADING-POWER DOUBLE-COLLINEAR ASYMPTOTICS AT TREE-LEVEL
The collinear-gluon and collinear-quark amplitudes constructed in Section <ref> may be used to derive a result for the double-collinear asymptotics of massless tree-level QCD amplitudes that correctly accounts for subleading effects. We consider the collinear limit for partons i and n+1:
k_n+1≡ x p_i + l_⊥ - l_⊥^2/2xq/p_i · q , with l_⊥· p_i = l_⊥· q = 0 ,
k_i ≡ (1-x) p_i - l_⊥ - l_⊥^2/2(1-x)q/p_i · q , and k_j ≡ p_j + 𝒪(l_⊥^2) , j ≠ i .
For a_i = a_n+1 = g there is:
𝐏_i(σ_i,c_i) 𝐏_n+1(σ_n+1,c_n+1) |M^(0)({k_i}_i=1^n+1)⟩ =
𝐏_i(σ_i,c_i) 𝐏_n+1(σ_n+1,c_n+1) [
𝐒𝐩𝐥𝐢𝐭^(0)_i,n+1 ← i(k_i,k_n+1,p_i) |M^(0)({p_i})⟩
+ ( (1-x^2)/x + (1-(1-x)^2)/(1-x) 𝐄_i,n+1) |S^(0)_g,i({p_i},q)⟩ + ( (1-x) + x 𝐄_i,n+1) |C^(0)_g,i({p_i},q)⟩
+ 1/2∑_I x(1-x)/(x_I(1-x_I)) ( 1/(x_I - x) + 1/(x_I - (1-x)) 𝐄_i,n+1) |R^(0)_g,i,I({p_i})⟩]
+ [ 1/x q ·ϵ^*(p_i,σ_n+1)/(q · p_i) 𝐏_i(σ_i,c_i) 𝐓_i^c_n+1 + 1/(1-x) q ·ϵ^*(p_i,σ_i)/(q · p_i) 𝐏_i(σ_n+1,c_n+1) 𝐓_i^c_i] |M^(0)({p_i})⟩
+ 𝒪(l_⊥) ,
with |S^(0)_g,i({p_i},q)⟩, |C^(0)_g,i({p_i},q)⟩ and |R^(0)_g,i,I({p_i})⟩ defined in Eqs. (<ref>), (<ref>) and (<ref>) respectively. The splitting function acting on |M^(0)({p_i})⟩ introduces a helicity sum for the intermediate gluon. This sum must be consistent with Eq. (<ref>). We note that the subleading collinear asymptotics requires the subleading soft asymptotics contained in |C^(0)_g,i({p_i},q)⟩.
For a_i ∈{q,q̅}, a_n+1 = g, there is:
𝐏_n+1 (σ_n+1,c_n+1) |M^(0)({k_i}_i=1^n+1)⟩ =
𝐏_n+1(σ_n+1,c_n+1) [
𝐒𝐩𝐥𝐢𝐭^(0)_i,n+1 ← i(k_i,k_n+1,p_i) |M^(0)({p_i})⟩
+ √(1-x)( ( 1/x + 1/2) |S^(0)_g,i({p_i},q)⟩ + |C^(0)_g,i({p_i},q)⟩ + x/(1-x) |S̅^(0)_g,i({p_i},q)⟩
+ ∑_I ( 1/(x_I - x) - 1/x_I ) |R^(0)_g,i,I({p_i})⟩) ]
+ √(1-x)/x q ·ϵ^*(p_i,σ_n+1)/(q · p_i) 𝐓_i^c_n+1|M^(0)({p_i})⟩
+ 𝒪(l_⊥) .
Finally, for a_i =q, a_n+1 = q̅, there is:
|M^(0)({k_i}_i=1^n+1)⟩ =
𝐒𝐩𝐥𝐢𝐭^(0)_i,n+1 ← i(k_i,k_n+1,p_i) |M^(0)({p_i})⟩
+ √(x(1-x))( 1/x |S^(0)_q̅,i({p_i})⟩ + |C^(0)_q̅,i({p_i},q)⟩ + x/(1-x) |S̅^(0)_q̅,i({p_i})⟩
+ ∑_I ( 1/(x_I - x) - 1/x_I ) |R^(0)_q̅,i,I({p_i})⟩) + 𝒪(l_⊥) .
Since the splitting proceeds via an intermediate gluon, the occurring helicity sum must be consistent with Eq. (<ref>). The contributions |S^(0)_q̅,i({p_i})⟩, |S̅^(0)_q̅,i({p_i})⟩ and |R^(0)_q̅,i,I({p_i})⟩ are defined in Eqs. (<ref>), (<ref>) and (<ref>) respectively. As remarked at the end of Section <ref>, the contribution |C^(0)_q̅,i({p_i},q)⟩ corresponds to the subleading term of the soft-anti-quark expansion of the collinear-quark amplitude. As we do not provide an explicit expression in terms of |M^(0)({p_i})⟩ for this contribution, it must be evaluated by using Eq. (<ref>) at a convenient point in x.
§ SUMMARY AND OUTLOOK
This publication contains two novel results. The first is the general formula for the approximation of a one-loop soft-gluon emission amplitude at next-to-leading power presented in Section <ref>. The second is the set of general formulae for the approximation of tree-level amplitudes in the collinear limit at next-to-leading power presented in Section <ref>. Both results are limited to massless partons, but allow for the inclusion of arbitrary colour-neutral particles. They are expressed through universal factors and process-dependent gauge-invariant amplitudes. As such, they cannot be further simplified.
It is interesting to note that the tree-level collinear approximations require the knowledge of the tree-level soft approximations, while the one-loop soft approximation requires the knowledge of both the tree-level collinear and soft approximations. We expect this pattern to extend to higher orders, i.e. higher-order soft approximations should depend on lower-order collinear approximations. In any case, extension of the results to higher orders is one natural direction for future research.
We must point out once more that the provided next-to-leading power approximation for a collinear quark-anti-quark pair requires the subleading soft term of the soft-anti-quark expansion of the collinear amplitude, for which no general formula is known at present. In practice, one can obtain the necessary result by a single evaluation of a suitably prepared amplitude at fixed kinematics. Nevertheless, it would be much more elegant to have an expression similar to the LBK theorem. We leave this problem to future work.
Our results should be extended to massive partons as a next step. On the one hand, this extension should be simpler, since massive partons do not give rise to collinear regions or flavour-off-diagonal contributions. On the other hand, the difference between the leading soft asymptotics for massless <cit.> and massive partons <cit.> suggests that the expression for the soft operator will be much more complex in the massive case.
We would like to thank Daniel Stremmer for help with linking Recola to CutTools and OneLOop. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant 396021762 - TRR 257: Particle Physics Phenomenology after the Higgs Discovery, and grant 400140256 - GRK 2497: The Physics of the Heaviest Particles at the LHC. Diagrams were drawn using JaxoDraw <cit.>.
|
http://arxiv.org/abs/2307.00628v1
|
20230702175301
|
A Novel Probe of Supersymmetry in Light of Nanohertz Gravitational Waves
|
[
"Kai Murai",
"Wen Yin"
] |
hep-ph
|
[
"hep-ph",
"astro-ph.CO"
] |
=1
|
http://arxiv.org/abs/2307.02162v1
|
20230705100551
|
The power of photons: Cavity-mediated energy transfer between quantum devices
|
[
"Alba Crescente"
] |
quant-ph
|
[
"quant-ph"
] |
The coherent energy transfer between a quantum charger and a quantum battery is analyzed. In particular, we study how to improve the direct energy transfer by adding a photonic cavity as a mediator.
We show that the additional degree of freedom provided by the photons consistently improves the transfer performance, especially in the off-resonant case, where there is a mismatch in the energy levels.
An experimentally feasible way to switch on and off the interaction between the different parts of the system and the possibility of changing the energy-level mismatch will be described, in view of finding the best working setup.
§ INTRODUCTION
In the last decades quantum technologies have assumed a central role in scientific research worldwide <cit.>. In this framework, particular interest has been devoted to the new and fast-growing field of quantum batteries (QBs) <cit.>.
Most of the research on QBs has been focussed on finding efficient ways to store energy in a quantum system and release it on demand <cit.>, in order to locally supply energy to miniaturized devices. However, only a few works have addressed the interesting and still largely unexplored problem of coherent energy transfer between distant quantum systems <cit.>. In this direction, the realization of energy transfer processes in the quantum domain could represent a crucial step towards the creation of a capillary energy network able to connect distant parts of a fully quantum device with improved performances <cit.>.
The aim of this work is to characterize the coherent energy transfer between two quantum systems, focussing on the simple, but experimentally relevant <cit.>, situation of two two-level systems (TLSs) <cit.>, the first playing the role of a quantum charger and the second being the QB.
In the following, we compare the already well known direct energy transfer process with a cavity-mediated one, where photons act as a quantum bus for the energy transfer.
We investigate the stored energy in the different parts of the total system both on-resonance, i.e. when the level spacing of the two TLSs and the frequency of the photons in the cavity are the same, and off-resonance, namely when there is a mismatch between the TLS level spacings and the photon frequency. This latter analysis is justified by the fact that experimentally it is difficult to realize absolutely identical TLSs <cit.>. The main result of this work is the demonstration that the presence of the cavity, as a mediator, allows a faster energy transfer between the charger and the QB, compared to the direct model. In addition, the photons in the cavity allow improving the energy transfer when the system is off-resonance.
An experimentally feasible way to switch on and off the interaction and the possibility of changing the mismatch in the TLS level spacing will be considered in order to find the best performance of the device.
This paper is organized as follows. In Section <ref> we introduce the cavity-mediated model and we briefly recall the direct coupling scenario. In Section <ref> the usual figures of merit are considered, with particular emphasis on the transferred energy and the work done in switching on and off the interaction. Section <ref> presents the results obtained in the cavity-mediated model, showing its better performance compared to the direct one. In Section <ref> it is shown how it is possible to control the mismatch between the TLSs, to emulate a process in which the interaction is switched on and off. Finally, Section <ref> is devoted to the conclusions.
§ CAVITY-MEDIATED ENERGY TRANSFER MODEL
The setup for the energy transfer between a quantum charger (C) and a QB (B) in the presence of a cavity (M) acting as a mediator is shown in Figure <ref>. In order to keep the analysis relevant for possible experimental implementations, both the charger and the QB are modeled as TLSs with ground state |0_ C,B⟩ and excited state |1_ C,B⟩ respectively. The free Hamiltonian of the system can then be written as
H_0=H_ C+H_ B+H_ M=ω_ C/2σ_z^ C+ω_ B/2σ_z^ B+ω_ Ma^† a,
where ω_ C and ω_ B are the energy separations of the charger and QB respectively and ω_ M is the frequency of the photons in the cavity. Here, σ_z^i is the Pauli matrix along the ẑ direction acting on the i=C,B space and a (a^†) is the annihilation (creation) operator of the photons.
The interaction Hamiltonian of the cavity-mediated model is realized experimentally by two superconducting qubits (C and B) interacting with an LC resonator (M) <cit.> and can be written as
H_ int,m^(t)= gf(t)[a^†(σ_-^ C+σ_-^ B)+a(σ_+^ C+σ_+^ B)],
where, to simplify the problem, we have assumed the same coupling constant g both between the mediator and the charger and between the mediator and the QB. Here, the superscript (t) indicates the parametric dependence of the Hamiltonian on time and σ_±=(σ_x± iσ_y)/2 are the spin ladder operators, with σ_x,y the Pauli matrices along the x̂, ŷ directions. Moreover, f(t) is a dimensionless time-dependent function which has been introduced in order to take into account the switching on and off of the interaction. Its precise shape will be specified later.
In order to simplify the solution of the dynamics, we have considered the Hamiltonian in Eq. (<ref>) in the rotating-wave approximation (RWA) <cit.>, where a constraint on admissible values for the coupling constant g ≲ 0.1ω_ C, B is imposed.
Notice that this approximation does not represent a major limitation, since most of the experimental realizations of such quantum systems fit well into this regime <cit.>.
The complete Hamiltonian for the cavity-mediated model is then given by
H_ m^(t)=H_ C+H_ B+H_ M+H_ int,m^(t).
In the following we will investigate both the resonant regime ω_ C=ω_ M=ω_ B and the off-resonant regime ω_ C=ω_ M=αω_ B, with α a positive real parameter.
Notice that it is important to consider off-resonance conditions since experimentally it is difficult to realize identical TLSs in solid-state platforms <cit.>.
§.§ A reference model: the direct coupling
Here we briefly recall the direct coupling model between the charger and the QB, already analyzed in Ref. <cit.>, which will be considered in the following as a comparison with the cavity-mediated case. Under the assumption of a local (short-range) and direct capacitive coupling between the TLSs, the interaction Hamiltonian in the RWA assumes the following form <cit.>
H_ int,d^(t)= gf(t)(σ_-^ Cσ_+^ B+σ_+^ Cσ_-^ B),
where g is the coupling constant between the TLSs and f(t) is the same time-dependent function introduced before. Consequently, the complete Hamiltonian for the direct energy transfer is
H_ d^(t)= H_ C+H_ B+H_ int,d^(t).
§ FIGURES OF MERIT
The main task of this Section is to characterize the energy transfer process between the charger and the QB, taking also into account the switching on and off of the interactions.
§.§ Stored energy and transfer time
The stored energy at time t in i=C, B, M is given by
E_ i(t)≡ Tr{ρ(t) H_i}- Tr{ρ(0) H_i},
where Tr{…} represents the conventional trace operation, ρ(0)=|ψ(0)⟩⟨ψ(0)| is the total density matrix of the system at the initial time t=0 and ρ(t) is the time-evolved density matrix according to the Hamiltonian in Eq. (<ref>). In the following we are interested in studying the energy transfer between a full charger, whose initial state is |1_ C⟩, and an empty QB, with initial state |0_ B⟩. For the cavity we will consider Fock states with n photons, namely |n⟩, as the initial state. Consequently the total initial state assumes the form
|ψ(0)⟩=|1_ C,0_ B,n⟩.
In analogy with Eq. (<ref>), it is also useful to consider the energy associated to the interaction term, defined as
E_ int(t)≡ Tr{ρ(t) H_ int, m^(t)}- Tr{ρ(0) H_ int, m^(0)}.
Notice that, due to the chosen initial state in Eq. (<ref>) and the form of H_ int,m^(t) in Eq. (<ref>), the condition
Tr{ρ(0) H_ int, m^(0)}=0 is always satisfied,
which further simplifies Eq. (<ref>).
Moreover, we define
E_ B, max≡ E_ B(t_ B, max),
namely the first local maximum of the energy stored in the QB, which occurs at the shortest charging time t_ B, max, and with
E̅_ C≡ E_ C(t_ B, max),
the value of the energy in the charger at the same time. Indeed, as we will show below, while at resonance all the maxima occur at the same times, out of resonance this may not be the case.
Similar considerations on the stored energy and initial state of the system can be done for the direct coupling case, see Refs. <cit.>.
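For concreteness, a minimal Python sketch of how the stored energies of Eq. (<ref>) and the quantities E_ B, max, t_ B, max can be extracted from a pure-state trajectory is given below. The array layout and function names are illustrative assumptions; the trajectory psi_t would be obtained by solving the Schrödinger equation with the Hamiltonian sketched above.

```python
import numpy as np

def stored_energy(psi_t, H_i, psi_0):
    """E_i(t) = <psi(t)|H_i|psi(t)> - <psi(0)|H_i|psi(0)> for a pure-state trajectory.

    psi_t : array of shape (n_times, dim) holding |psi(t)> at each time,
    H_i   : Hamiltonian of subsystem i (C, B or M) embedded in the full space,
    psi_0 : initial state |1_C, 0_B, n>.
    """
    e0 = np.real(psi_0.conj() @ H_i @ psi_0)
    return np.real(np.einsum('ti,ij,tj->t', psi_t.conj(), H_i, psi_t)) - e0

def first_local_maximum(times, E_B):
    """Return (t_Bmax, E_Bmax): the first local maximum of the QB stored energy."""
    for k in range(1, len(E_B) - 1):
        if E_B[k] >= E_B[k - 1] and E_B[k] > E_B[k + 1]:
            return times[k], E_B[k]
    k = int(np.argmax(E_B))          # fallback: global maximum
    return times[k], E_B[k]
```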
§.§ Average work done to switch on and off the interaction
To fully characterize energy transfer processes it is also necessary to consider the power employed to switch the interaction on and off.
Here, we recall its formal definition
P(t)≡d/dt[ Tr{ρ(t)H^(t)_i}]= Tr{ρ(t) ∂ H_int,i^(t)/∂ t},
where i=m, d for the cavity-mediated model and for the direct one, respectively.
The corresponding average work W(t) at a given time t is then given by
W(t)=∫_0^t dt'P(t').
Specifying to the cases of cavity-mediated and direct coupling the powers can be written as
P_ m(t) = dE_ C(t)/dt+d E_ B(t)/dt+d E_ M(t)/dt+d E_ int,m(t)/dt
P_ d(t) = dE_ C(t)/dt+d E_ B(t)/dt+d E_ int,d(t)/dt.
The corresponding works obtained from Eq. (<ref>) fulfill the following energy conservation relations
W_ m(t) = E_ C(t)+E_ B(t)+E_ M(t)+E_ int,m(t)
W_ d(t) = E_ C(t)+E_ B(t)+E_ int,d(t).
In both cases, reintroducing the off-resonance parameter α, it is possible to obtain simplified forms for the work <cit.>, leading to a common equation for both models
W(t)=(1-α)E_ B(t)+E_ int,i(t),
with i=m, d.
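The following short Python sketch (illustrative only, with hypothetical array names) evaluates the switching work either from the simplified relation W(t)=(1-α)E_B(t)+E_int(t) or, as a cross-check, by numerically integrating the power P(t) with the cumulative trapezoidal rule.

```python
import numpy as np

def switching_work(E_B, E_int, alpha):
    """W(t) = (1 - alpha) * E_B(t) + E_int(t), common form for both models."""
    return (1.0 - alpha) * E_B + E_int

def switching_work_from_power(times, P):
    """Cross-check: W(t) = int_0^t P(t') dt' via the cumulative trapezoidal rule."""
    W = np.zeros_like(P)
    W[1:] = np.cumsum(0.5 * (P[1:] + P[:-1]) * np.diff(times))
    return W
```

On resonance (α=1) the first routine reduces to W(t)=E_int(t), consistent with the vanishing work reported below.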
§.§ Functional form of the switch on and off function
Here, the form of the switching on and off function f(t) in Eq. (<ref>) is specified. From now on, the following functional form is considered (see Figure <ref>)
f(t)= [erf((t-τ)/t_0)-erf((t-2τ)/t_0)] / [2 erf(τ/(2t_0))],
which describes a smooth switching on and off of the interaction between the charger and the mediator and the mediator and the QB in the cavity-mediated model or between the two quantum systems, charger and QB, in the direct one.
From Figure <ref> it can be seen that the parameter τ controls the time window where the interaction is active, while t_0 is the width of the switching ramp.
By controlling the parameters τ and t_0 it is then possible to turn off the interaction when the first maximum of the energy stored in B is achieved, meaning that at time t_ B, max the interaction Hamiltonian H_ int^(t) is switched off.
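As an illustration of the switching profile, the sketch below implements the reading of Eq. (<ref>) adopted above, with scipy providing the error function. The window length tau and ramp width t0 used in the example are arbitrary illustrative values, not the ones used in the figures.

```python
import numpy as np
from scipy.special import erf

def f_switch(t, tau, t0):
    """Smooth switch-on/off profile: ~1 for tau < t < 2*tau, ~0 outside."""
    return (erf((t - tau) / t0) - erf((t - 2.0 * tau) / t0)) / (2.0 * erf(tau / (2.0 * t0)))

# Example: interaction window tau = 50 (in units of 1/omega_B), ramp width t0 = 2
t = np.linspace(0.0, 150.0, 1501)
f = f_switch(t, tau=50.0, t0=2.0)
print(f.max())   # close to 1 inside the window
```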
§ RESULTS
In this Section we report and discuss the main results for the cavity-mediated energy transfer and compare them with the direct model. All the results are obtained through an exact numerical diagonalization (see Ref. <cit.> for more details).
Notice that in the following we will consider the composite system as a closed quantum system, meaning that dissipative effects related to relaxation and dephasing phenomena are not taken into account. This is possible when the typical relaxation time t_r and dephasing time t_φ are longer than the considered evolution time t, i.e. t_r, t_φ≫ t <cit.>.
§.§ Enhancing the energy transfer performances
In Figure <ref> the behaviour in time of the different terms involved in Eqs. (<ref>) and (<ref>) is shown. The direct energy transfer between the charger and the QB is reported in panels (a) and (b), in order to compare it to the cavity-mediated case, in panels (c) and (d). The analysis is carried out both on-resonance (α=1) and off-resonance for the representative value α=0.8, for a given coupling constant g=0.05ω_ B and a number of photons n=10 in the cavity.
From Figure <ref> (a) we observe that, when the system is on-resonance, the energy E_ C(t) stored into the charger completely goes into the QB. Moreover, during this process the energy associated to the interaction E_ int(t) and the work W(t) remains zero, as expected from Eq. (<ref>).
The situation is different when the system is off-resonance [see Figure <ref> (b)]. Here, the charger loses only a fraction (∼ 15%) of its energy and transfers it to the QB. In this case, a further contribution to the energy transfer process is given by the interaction term, which, together with the charger, allows reaching ∼ 20% of the full charge in the QB.
Notice that off-resonance a finite amount of work is done in the transfer process, confirming the overall conservation discussed in Eq. (<ref>).
For what concerns the cavity-mediated case, it is possible to exploit the additional degree of freedom offered by the number of photons n to improve the performances of the energy transfer in the composite system.
When we consider a higher number of photons, e.g. n=10 in panel (c), we observe that on-resonance it is possible to obtain a complete and faster energy transfer compared to the direct case. The impact of the richer structure of the mediator is even more relevant in the off-resonant case [see panel (d)]. Here the charger releases almost all its energy. Even if a fraction of this energy remains trapped in the mediator, it is possible to charge the QB to more than ∼ 71 % in a very short time. In this case the mediator plays the role of facilitator for the energy transfer, leading to a major improvement with respect to the direct case, which is relevant for practical applications.
§.§ The advantage of using a large number of photons
The advantage of using a cavity as a mediator for the energy transfer is further enhanced by increasing the number of photons n into the cavity. This can be seen from Figure <ref>, where the maximum of the stored energy in the QB [panel (a)] and the energy transfer times [panel (b)] are reported as a function of n.
From panel (a) we can see that by increasing the number of photons in the cavity it is possible to consistently improve the energy transferred to the QB also in the off-resonant case.
In fact, for α=0.8 at large n we obtain a charging of the QB exceeding ∼ 80 %, which is even better than the one reported for n=10 in Figure <ref> (d).
In principle, even if experimentally infeasible, the maximum charge of the QB can be reached off-resonance at large n, with E_ B, max(n→ +∞)→ω_ B (not shown).
The advantages in using a larger number of photons can also be seen from the charging times. Indeed, at large values of n the energy transfer time scales as t_B, max∝ n^-1/2 both on- and off-resonance [see asymptotes in Figure <ref> (b)].
§ CONTROLLING THE TWO-LEVEL SYSTEMS MISMATCH
To conclude we discuss a protocol that allows controlling the TLSs mismatch, smoothly changing the energy separation of the TLSs. This is a way to mimic the switch on and off of the interaction introduced in Section <ref>, since when the TLSs are far off-resonance the energy transfer is greatly suppressed, while when they are on-resonance it is generally promoted.
Moreover, this kind of protocol is well controlled in qubit experiments <cit.>.
In the following we rewrite the Hamiltonian of the cavity-mediated model in Eq. (<ref>). Firstly, we remap the energy separations of the TLSs, ω_ C and ω_ B, as
ω_ C(t)=αω_ Bg(t), ω_ B(t)=ω_ Bg(t),
where α >0 is the mismatch parameter previously introduced and g(t) is a smooth function that allows the tuning of the TLSs energy separation of the following form
g(t)= [(1-α)[erf((t-τ)/t_0)-erf((t-2τ)/t_0)]+2αerf(τ/(2t_0))] / [2erf(τ/(2t_0))],
where the parameters τ and t_0 are the ones introduced in Eq. (<ref>). Examples of the g(t) function are reported in Figure <ref> (a), where we can observe that the TLSs are initially off-resonance, then after a time τ they get on-resonance and when the QB reaches its maximum energy they are again put off-resonance.
Moreover in this kind of protocol the interaction between the charger and the mediator and between the mediator and the QB is time independent, assuming the form
H̃_ int,m= g[a^†(σ_-^ C+σ_-^ B)+a(σ_+^ C+σ_+^ B)].
As a consequence the complete Hamiltonian becomes
H̃_ m^(t)=ω_ C(t)/2σ_z^ C+ω_ B(t)/2σ_z^ B+H̃_ int,m.
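A minimal Python sketch of the detuning protocol is given below. It implements the reading of Eq. (<ref>) adopted above for g(t) and the corresponding time-dependent level splittings; parameter values and function names are illustrative assumptions, not the ones used in the figures.

```python
import numpy as np
from scipy.special import erf

def g_detuning(t, tau, t0, alpha):
    """Smooth profile used to tune the TLS level splittings in time."""
    num = (1.0 - alpha) * (erf((t - tau) / t0) - erf((t - 2.0 * tau) / t0)) \
          + 2.0 * alpha * erf(tau / (2.0 * t0))
    return num / (2.0 * erf(tau / (2.0 * t0)))

def splittings(t, w_B, tau, t0, alpha):
    """Time-dependent splittings: omega_C(t) = alpha*w_B*g(t), omega_B(t) = w_B*g(t)."""
    g = g_detuning(t, tau, t0, alpha)
    return alpha * w_B * g, w_B * g

t = np.linspace(0.0, 150.0, 1501)
w_C_t, w_B_t = splittings(t, w_B=1.0, tau=50.0, t0=2.0, alpha=0.8)
```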
By diagonalizing and numerically solving the Hamiltonian in Eq. (<ref>), using the methods introduced in Ref. <cit.>, we plot the stored energy of the QB in Figure <ref> (b). As we can observe, the more the charger and the QB are taken off-resonance (see dashed magenta curve for α=0.8) the more stable the storing inside the QB is. This is not true when the mismatch in the TLSs is smaller (α=0.2), leading to a continuous transfer of the energy between the charger and the QB and vice versa.
This proves that tuning the coupling between the different parts of the system is a more effective way for a stable storing of the energy inside the QB, compared to changing the mismatch in the energy spacing of the TLSs.
§ CONCLUSIONS
The analysis of coherent energy transfer processes between a quantum charger and a QB, both modeled as TLSs, has been discussed. Starting from the well known case of direct energy transfer, we have shown how the performances can be improved adding a mediator, in our case a photonic cavity.
Moreover, to consider an experimentally feasible model, we have introduced the possibility of switching on and off the interaction between the parts of the system. This also allows considering the important quantity of the work needed to do such operation.
Considering both on- and off-resonance conditions, we have analyzed the energy stored in each part of the system. On-resonance, the performances of the direct and cavity-mediated models are similar: both allow a complete energy transfer, but the photons in the cavity lead to a faster process. In both cases the work in this regime is zero, confirming what was obtained in Ref. <cit.>.
The results obtained off-resonance are different and more interesting. Here, the direct energy transfer has very poor performance compared to the cavity-mediated one. In fact, adding more and more photons to the cavity allows an almost complete energy transfer to be recovered.
At the end of our work we have also considered a protocol that allows changing the mismatch in the TLS energy levels. Here we have shown that this scenario leads to greater instability compared to controlling the coupling constant between the parts of the system.
The author acknowledges the support of the European Union-NextGenerationEU through the "QUantum Busses for coherent EneRgy Transfer (QUBERT)" project, in the framework of the Curiosity Driven Grant 2021 of the University of Genova.
Riedel17 Riedel M. F. et al., Quantum Sci. Technol. 2 (2017) 030501.
Raymer19 Raymer M. G. et al., Quantum Sci. Technol. 4 (2019) 020504.
Alicki13 Alicki R. and Fannes M., Phys. Rev. E 87 (2013) 042123.
Binder15 Binder F. C. et al., New J. Phys. 17 (2015) 075015.
Campaioli17 Campaioli F. et al., Phys. Rev. Lett. 118 (2017) 150601.
Ferraro18 Ferraro D. et al., Phys. Rev. Lett. 120 (2018) 117702.
Andolina18 Andolina G. M. et al., Phys. Rev. B 98 (2018) 205423.
Crescente20 Crescente A. et al., New J. Phys. 22 (2020) 063057.
Crescente20b Crescente A. et al., Phys. Rev. B 102 (2020) 245407.
Santos21 Santos A. C., Phys. Rev. E 103 (2021) 042118.
Delmonte21 Delmonte A. et al., Entropy 23 (2021) 612.
Gemme22 Gemme G. et al., Batteries 8 (2022) 43.
Benenti22 Shaghaghi V. et al., Quantum Sci. Technol. 7 (2022) 04LT01.
Shaghaghi22 Shaghaghi V. et al., preprint arXiv:2212.13417.
Erdman22 Erdman P. A. et al., preprint arXiv:2212.12397.
Farina19 Farina D. et al., Phys. Rev. B 99 (2019) 035421.
Scarlino19 Scarlino P. et al., Nature Commun. 10 (2019) 3011.
Sillanpaa07 Sillanpää M. A. et al., Nature 449 (2007) 438-442.
Dicarlo09 DiCarlo L. et al., Nature 460 (2009) 240-244.
Weiss Weiss U., Quantum Dissipative Systems, 4th edn (Singapore: World Scientific, 2012).
Paladino08 Paladino E. et al., Phys. Rev. B 77 (2008) 041303(R).
Sassetti96 Sassetti M. et al., Phys. Rev. B 54 (1996) R5203.
Krantz19 Krantz P. et al., Appl. Phys. Rev. 6 (2019) 021318.
Schweber67 Schweber S., Ann. Phys. 41 (1967) 205.
Graham84 Graham R. and Höhnerbach M., Z. Phys. B: Condens. Matter 57 (1984) 233.
Schleich Schleich W. P., Quantum Optics in Phase Space (Berlin: Wiley-VCH, 2001).
Majer05 Majer J. B. et al., Phys. Rev. Lett. 94 (2005) 090501.
Niskanen07 Niskanen A. O. et al., Science 316 (2007) 5825.
Crescente22 Crescente A. et al., Phys. Rev. Research 4 (2022) 033216.
Devoret13 Devoret M. H. and Schoelkopf R. J., Science 339 (2013) 1169.
Wendin17 Wendin G., Rep. Prog. Phys. 80 (2017) 106001.
Carrega20 Carrega M. et al., New J. Phys. 22 (2020) 083085.
Kafri17 Kafri D. et al., Phys. Rev. A 95 (2017) 052333.
Arute19 Arute F. et al., Nature 574 (2019) 505-510.
|
http://arxiv.org/abs/2307.00283v1
|
20230701093352
|
Experimental and numerical study of the effect of polymer flooding on sand production in poorly consolidated porous media
|
[
"Daniyar Kazidenov",
"Sagyn Omirbekov",
"Meruyet Zhanabayeva",
"Yerlan Amanbek"
] |
cs.CE
|
[
"cs.CE"
] |
Polymer flooding is crucial in hydrocarbon production, increasing oil recovery by improving the water-oil mobility ratio. However, the high viscosity of displacing fluid may cause problems with sand production on poorly consolidated reservoirs.
This work investigates the effect of polymer injection on the sand production phenomenon using the experimental study and numerical model at a laboratory scale.
The experiment uses an artificially made sandstone based on the characteristics of the oil field in Kazakhstan. Polymer solution based on Xanthan gum is injected into the core to study the impact of polymer flooding on sand production. The rheology of the polymer solution is also examined using a rotational rheometer, and the power-law model fits outcomes. The fitting parameters are used for the numerical model as an input. We observe no sand production during the brine injection at various flow rate ranges. However, the sanding is noticed when the polymer solution is injected. More than 50% of cumulatively produced sand is obtained after one pore volume of polymer sand is injected.
In the numerical part of the study, we present a coupling model of the discrete element method (DEM) with computational fluid dynamics (CFD) to describe the polymer flow in a granular porous medium. The numerical model is performed considering particle size distribution, porosity, and cementation behavior of the sample associated with Kazakhstan reservoir sandstone. In the solid phase, the modified cohesive contact model characterizes the bonding mechanism between sand particles. The fluid phase is modeled as a non-Newtonian fluid using a power-law model. The forces acting on a particle by the fluid are calculated considering the rheology of non-Newtonian fluid.
We verify the numerical model with the laboratory experiment by comparing the dimensionless cumulative mass of produced particles. The numerical model observes non-uniform bond breakage when only a confining stress is applied. On the other hand, the injection of the polymer into the sample leads to a relatively gradual decrease in bonds. The significant difference in the pressure of the fluid results in its higher velocity, which causes intensive sand production at the beginning of the simulation. During the transient phase in sand production, the fluid's viscosity is lower at the outlet region, where the unbonded particles significantly predominate. The ratio of medium-sized produced particles is greater than the initial ratio of those before injection and makes the most significant contribution to the total mass of sand production.
§ INTRODUCTION
Weakly consolidated sandstone reservoirs hold a significant fraction of the world's oil and gas reserves and are prone to sand production. The degradation of the rock material is an essential step in the sanding process. Drilling operations, shut-in and start-up cycling, operating conditions, reservoir pressure reduction, and water weakening can all contribute to the destruction of sandstone near perforations and wells. Detachment of the sand particles is also facilitated by the high pressure gradient caused by the fluid flow. In addition, the fluid flow is responsible for transporting loose sand particles or loose sand lumps into the wellbore. Sand production therefore negatively affects well production by damaging equipment through erosion, reducing the flow diameter and flow rates, and clogging the wellbore and surface facilities <cit.>. Depending on the behavior of the sand production rate, the process can occur in three different regimes: transient, continuous and catastrophic sand production <cit.>. While during transient sand production the rate of produced sand decreases with time, during the continuous regime the rate of produced sand remains constant over time. Catastrophic sand production causes the failure of the wellbore by suddenly blocking the lines.
Oil production stages such as primary and secondary methods (i.e., spontaneous imbibition and water injection, respectively) can affect sand production during oil recovery.
Terzaghi conducted the first scientific studies on the sanding problem in 1936, and he was the first to notice a sand arch near a bottom hatch in a box filled with sand <cit.>. Terzaghi's experiment was enhanced by <cit.>, who discovered a link between the added fluid flow and the creation of sand arches. Later experiments with various hydrocarbon production parameters indicated that sand production is influenced by stress anisotropy, stress level, saturation, injection fluid, and rock formation material. Many researchers have also experimented with a variety of fluids, including water <cit.>; diesel fluid <cit.>; brine <cit.> and paraffin oil <cit.>.
Moreover, it is essential to note that each field is unique, and the sand production can vary depending on the specific reservoir characteristics, such as the geomechanical properties of the formation <cit.>, the location of weak zones <cit.>, and reservoir fluid type <cit.>. Specifically, <cit.> experimentally investigated the sand production behavior in poorly consolidated formations. The synthetic sample, which replicates the reservoir sandstone from Kazakhstan, was created with a sodium silicate cementing agent. The patterns of sand production observed in the laboratory were comparable to the real behavior of sand production in the local fields, where a critical flow rate initiated sand production and a small burst of sand occurred with the subsequent increase in flow rate. <cit.> performed experimental and numerical analysis to understand the change in the permeability of the plastic zone surrounding the wellbore, which in turn provided valuable insight into the mechanical and filtration characteristics of the rock. The findings demonstrated that the average permeability through the sample dropped as the flow rate increased. <cit.> developed a numerical model to investigate the effect of various reservoir fluids on sand production. It was observed that heavy oil produced more sand than light oil, due to its greater transport capacity and its creation of a more uniform particle velocity trajectory pattern. All these previous studies on Kazakhstani oil fields considered the effect of only Newtonian fluids on sand production.
Polymer flooding is a method of oil production that uses synthetic or bio polymers to increase oil production by improving the mobility ratio between the displacing fluid and oil <cit.>. Hence, oil displacement by the aqueous solution improves because adding polymers increases the water viscosity. Polymer flooding can cause significant challenges for sand production control, specifically when oil is recovered from poorly consolidated reservoirs. Nevertheless, the effect of polymer injection (i.e., an enhanced oil recovery, EOR, method) on sand production is still unclear, since only a few studies have been conducted in this field. For instance, <cit.> pointed out a severe sand production problem during the alkali/surfactant/polymer (ASP) flooding in the poorly consolidated Shengli Oil Field, where the ASP flooding produced ten times more sand than common water flooding. <cit.> also explored the sanding phenomenon in a field test when they injected foam. The foam was generated by co-injecting nitrogen and a surfactant-polymer solution. They also noticed the production of sand in central wells where, after adding perforation and sanding control, water cut declined and oil production rose significantly. Some authors mentioned that polymer flooding could minimize sand production by decreasing the water hammering effect due to the higher viscosity of polymers <cit.>. Therefore, the sweeping efficiency is improved, which stabilizes the pressure in the formation.
Numerous approaches have also been developed to understand the mechanisms of sand production, including analytical studies, laboratory experiments and numerical models. While conventional laboratory testing and analytical models can only predict the beginning of sand production, more advanced laboratory experiments and numerical models might predict the volumetric sand production rate <cit.>. For instance, <cit.> proposed an analytical model to predict the sand production rate, which is valid only in the continuous regime. Empirical correlations between the loading factor, the Reynolds number and the sand production rate, derived using laboratory data, showed a good match with the results of field data. <cit.> experimentally developed a mechanical-erosion model to predict the volumetric sand production rate in weak and compactive sandstones. Externally imposed stress caused decohesioning and plastification of a zone around the cavity, where erosion occurred due to fluid flow. The experimental results showed that sand production appears to remain constant over time under constant external stress and flow rate. Coupling of computational fluid dynamics (CFD) and the discrete element method (DEM) <cit.> is one of the common numerical approaches used to model sand production phenomena, in which the sand grains are considered as discrete particles, while the fluids are considered as a continuous phase. <cit.> investigated sand erosion in a weakly cemented sandstone using a CFD-DEM coupling. An increase of the axial compaction accelerated the sand erosion, and an increase of the radial confining stress resulted in continuous sand production. Moreover, the fluid-particle interaction forces were the primary cause of sand erosion. CFD-DEM is also helpful for understanding the interaction mechanism of sand particles with the fluid, which is affected by many factors such as different fluid flow rates <cit.>, particle size distribution <cit.>, cementation behaviour <cit.> and wellbore geometry <cit.>. Moreover, CFD-DEM can be employed to simulate the fluid flow and motion of sand particles in complex geometries including gravel packs <cit.> and sand screens <cit.>.
The JKR model, which was initially proposed by <cit.>, is a commonly used technique in CFD-DEM simulations to describe the bonding mechanism of adhesively contacted sand grains <cit.>.
The force involved in the adhesion of the particles is expressed by the surface energy density and the contact area of those particles. JKR-based DEM modeling is a popular instrument to investigate various applications such as compaction of powders <cit.>, micromanipulation <cit.>, behaviour of particles in microfluidics <cit.> and analysis of soil mechanics <cit.>.
<cit.> proposed a modified version of the JKR model, which was applied to investigate the bonding behavior of poorly consolidated cemented sandstone from a Kazakhstan reservoir. In contrast to the original JKR model, where the bonds break as they pass the maximum applied force, the modified JKR model defines the bond breakage at a maximum force. The bonded particles fracture in a brittle manner due to the absence of tensile bonds. Furthermore, the broken particles are unable to create new bonds and are modeled by the Hertz contact model <cit.>. The modified version of the JKR model has been successfully validated and used to investigate the bond breakage in sand production <cit.>, triaxial compression tests <cit.> and cone penetration tests <cit.>.
Recently, <cit.> presented another application of the CFD-DEM for non-Newtonian fluids, in which they predicted fluid-induced fractures in granular media due to polymer injection by accounting for water quality and polymer rheology. The polymer injection model assumes a homogeneous porous medium in a granular system containing spherical particles. The authors use a power-law rheological model to simulate the polymer flow behavior. <cit.> employed CFD-DEM to investigate the breakup dynamics of solid fillers in a polymer dispersing medium. While the non-Newtonian fluid with shear-thinning behaviour is characterized by a power-law relationship, the solid fillers are described as micron-sized spherical particles interacting via van der Waals forces. However, a complete understanding of sand production induced by polymer flooding in poorly consolidated porous media is still lacking due to the complex structure of porous media.
To our knowledge, the experimental and numerical assessment of sand production by polymer flooding has received less attention because of the complex (non-Newtonian) behavior of polymer solution and coupling with poorly consolidated porous media. Specifically, it is relevant for some Kazakhstani oil fields where porous media is weakly consolidated.
In this study, we focus on assessing sand production by polymer flooding in poorly consolidated formations of the Ustyurt-Buzachi sedimentary basin located in western Kazakhstan, see Figure <ref>. Sand production caused by tertiary oil recovery methods in the areas mentioned above is still poorly understood, since most of the oil recovery operations there fall under primary or secondary production techniques. Consequently, the main objective of this investigation is to show experimentally and numerically the impact of polymer flooding, specifically of a non-Newtonian fluid, on sand production. The ultimate goal is to assess and prevent the risk of sand production by polymer flooding for future projects.
This paper presents the experimental and numerical study of polymer flooding in poorly consolidated porous media at a laboratory scale. The laboratory experiment is conducted using an artificial sandstone based on the particle size distribution of a Kazakhstani oil field to investigate the effect of polymer flooding on sand production rate. The viscosity of the polymer solution is measured using a rotational rheometer and is fitted by the power law model. In the numerical simulation, while the modified JKR-based DEM model characterizes the cementation behavior of particles, the power-law-based CFD model describes the polymer flow as a non-Newtonian fluid. The particle-fluid interaction forces are computed considering the rheology of non-Newtonian fluid. The numerical model is verified by the results obtained from the laboratory experiment. The fluid velocity and viscosity, cumulative sand production rate, particle bonding behavior, and particle size distribution of produced particles are examined in the effect of polymer injection on sand production.
This work is organized as follows: in Section <ref>, we present the experimental part of the work, including the selection of materials, the experimental setup and the procedure. The numerical part, including the governing equations and the numerical simulation setup, is presented in Section <ref>. In Section <ref>, we demonstrate and discuss the results obtained from the experiment and the numerical simulation. Finally, the main findings are summarized in Section <ref>.
§ EXPERIMENTAL STUDY
We conducted experiments with poorly consolidated porous media using a core flood system to study the impact of polymer injection on sand production. Core samples were prepared at laboratory conditions following the field's Particle Size Distribution (PSD) and mechanical properties. This section presents the materials and experimental methods used to conduct the experiments. All materials were prepared according to the characteristics of the North Buzachi oil field.
§.§ Materials
§.§.§ Porous media
The artificial core samples were prepared using quartz sand provided by the KazQuartz company. The sand was sieved to obtain the same PSD as the reservoir sand, as presented by Rakhimzhanova and coworkers (Rakhimzhanova, 2021; Shabdirova et al., 2016). The PSD of the reservoir sand is presented below in Figure <ref>.
We used Portland cement, gypsum, and demineralized water to prepare the sandstones. An artificial sandstone with the oil field's mechanical properties was designed following the method of Bisserik et al. (2021). The properties of the sandstone are tabulated in the table below.
To prepare sandstone with a certain degree of consolidation, the materials presented in Table <ref> were thoroughly mixed and molded into a 120 mm long plastic cylinder with an internal diameter of 38.1 mm. The sandstones were dried at a humidity of 80% and at 60°C for 72 hours. The prepared sandstones, presented in Figure <ref>, were then used for the core flooding experiments.
The porosity of sandstone is measured by a Helium porosimeter (Vinci technology). We saturated the cores with formation brine under pressure of 1500 psi for 48 hours to establish initial wetting conditions using a Manual Saturator (Vinci technology). The porosity was also rechecked by the dry/wet weight method. The measured properties of the artificial cores are presented in Table <ref>.
§.§.§ Formation brine
The formation brine was prepared according to the chemical composition of water from the North Buzachi oil field. The total salinity of the formation brine from the field is 93000 ppm. The formation brine contains sodium, calcium, magnesium, and chloride ions. The formation brine was prepared by adding salts in certain concentrations: NaCl (63.24 g/L), CaCl_2 (12.34 g/L), and MgCl_2 · 6H_2O (37.52 g/L) to distilled water and stirring at 750 RPM for 10 minutes. The density of the formation brine is equal to 1.1166 g/cm^3.
§.§.§ Polymer solutions
Xanthan Gum (XG) is employed as a viscous fluid to displace the media initially saturated with brine and to investigate the sand production phenomenon. The XG powder is supplied by the Sigma-Aldrich company. The polymer concentration was chosen to be 0.4 % according to the outcomes of a previous study <cit.> and the Lenormand phase diagram, to avoid any instabilities associated with capillary and viscous fingering <cit.>. The viscosity is examined using a DHR-1 rheometer provided by TA Instruments following the previous study <cit.>. A required amount of fluid was loaded into the cone-and-plate geometry to fill the gap. We recorded the shear rate as a function of time corresponding to the given stress. The rheometer obtained each value when the shear stress change was less than a certain tolerance. Each shear rate was studied for 120 seconds from 0.1 to 100 1/s, and the measurements were performed in triplicate. We plotted the shear viscosity (μ) versus shear rate (γ̇) in Figure <ref>, where the XG polymer solution behaved as a shear-thinning, non-Newtonian fluid.
The results are well fitted by Power-law model which is expressed as,
μ (γ̇) = kγ̇^(n-1)
where k (kg/m·s) and n (-) are the consistency and the flow indexes, respectively. We estimated the fitting parameters by nonlinear regression of the Matlab curve fitting toolbox.
The corresponding fitting parameters, together with the coefficient of determination, are listed in Table <ref>. This model, with these parameters, is used to describe the polymer solution flow in the porous medium in the following sections.
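As an illustration of the fitting step, the Python sketch below performs a nonlinear least-squares fit of the power-law model μ(γ̇)=kγ̇^(n-1) to rheometer readings and evaluates the coefficient of determination. The data points used here are illustrative placeholders, not the measured values reported in Figure <ref> or Table <ref>.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(shear_rate, k, n):
    """mu(gamma_dot) = k * gamma_dot**(n - 1): apparent viscosity of a power-law fluid."""
    return k * shear_rate**(n - 1.0)

# Illustrative rheometer readings (shear rate in 1/s, viscosity in Pa.s) -- not measured data
gamma_dot = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
mu_meas = np.array([2.9, 1.5, 0.78, 0.41, 0.21, 0.11, 0.06])

(k_fit, n_fit), _ = curve_fit(power_law, gamma_dot, mu_meas, p0=(1.0, 0.3))
mu_pred = power_law(gamma_dot, k_fit, n_fit)
r2 = 1.0 - np.sum((mu_meas - mu_pred)**2) / np.sum((mu_meas - np.mean(mu_meas))**2)
print(k_fit, n_fit, r2)   # consistency index, flow index, coefficient of determination
```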
§.§ Experimental setup
ACA-700 aging cell apparatus provided by VINCI technologies (see Figure <ref>) is used to conduct core flooding experiments. It consists of a core holder, two accumulators (for injection fluid), a pump, pressure transmitters, and pressure regulators to set confining and back pressures.
§.§ Experimental procedure
The formation brine was introduced horizontally into the core holder with a 3 mL/min flow rate to flush the core and measure the permeability. The permeability was calculated using Darcy's law, where we used values of the estimated pressure change along the core and the given flow rates.
Afterwards, we injected the polymer solution into the core horizontally. The pressure difference of the fluid flow in the column was examined by gauging the pressure drop with pressure transmitters. The effluent was collected in graduated cylinders to assess sand production. Several graduated cylinders were used during the polymer injection: after every one PV of the polymer solution was injected, the effluent was collected in a different graduated cylinder. As a result, the sand recovery was estimated using the effluent mass and volume difference. The radial confining pressure was set to 500 psi during all the experimental procedures, conducted at 25°C in a temperature-controlled laboratory.
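The permeability estimate from the brine-flooding step can be illustrated with a short Python sketch of Darcy's law. The core length and diameter are taken from the text; the brine viscosity and the pressure drop (obtained here from the quoted pressure gradient at 3 mL/min) are assumptions for illustration only.

```python
import numpy as np

def darcy_permeability(q_ml_min, dp_pa, length_m, diameter_m, mu_pa_s=1e-3):
    """Absolute permeability from Darcy's law, k = q * mu * L / (A * dP), in m^2 and Darcy."""
    q = q_ml_min * 1e-6 / 60.0                   # mL/min -> m^3/s
    area = np.pi * (diameter_m / 2.0)**2          # cross-sectional area of the core
    k_m2 = q * mu_pa_s * length_m / (area * dp_pa)
    return k_m2, k_m2 / 9.869e-13                 # 1 Darcy = 9.869e-13 m^2

# Illustrative numbers only (core geometry from the text, pressure drop assumed)
k_m2, k_darcy = darcy_permeability(q_ml_min=3.0, dp_pa=6.51e5 * 0.12,
                                   length_m=0.12, diameter_m=0.0381)
print(k_m2, k_darcy)
```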
§ NUMERICAL SIMULATION
§.§ Model of the solid phase
The DEM, initially introduced by <cit.>, is used to simulate the solid phase of the sand production model by solving Newton’s second law to describe the motion of individual particles and their trajectories. The particles in the systems accelerate due to inter-particle collisions, gravity, and fluid-particle interactions. The governing equations for the translation and rotation of particles are expressed as follows:
m_id v_i/dt = f_pf,i +∑_j=1^k_c( f_c,ij + f_damp,ij) +m_i g
I_id ω_i/dt = ∑_j=1^k_cT_ij
where v_i and ω_i are the particle translational and angular velocities, m_i and I_i are the particle mass and inertia, f_pf,i is the particle-fluid interaction force, f_c,ij and f_damp,ij are the contact and viscous damping forces between interacting particles, m_i g is the gravitational force, T_ij is the torque acting between particles i and j and k_c is the number of interacting particles. The particle motion is influenced by the impact from fluid (in the presence of fluid), interaction with other particles and gravity. The particle-fluid interaction force f_pf,i is the sum of all forces acting on a particle by fluid:
f_pf, i = f_d, i + f_∇ p, i + f_∇·τ, i + f_Ar, i
where f_d,i is the drag force, f_∇ p,i is the pressure gradient force, f_∇·τ,i is the viscous force and f_Ar, i is the Archimedes force.
The particle-particle interaction forces are calculated using the linear spring-dashpot-slider model <cit.> which describes the contact force f_c,ij:
f_c, ij^(n) = - k_nδ_n, ij - η_n v_n, ij
f_c, ij^(t) = - min (μf_c, ij^(n), k_tδ_t, ij + η_t v_t, ij)
where the superscripts n and t refer to normal and tangential, k is the spring stiffness constant, δ_ij is the overlap between the particles in contact, v_ij is their relative velocity, μ is the slider friction coefficient, and η is the dashpot damping coefficient.
The JKR contact model describes the adhesion and deformation behavior between cohesively bonded particles <cit.>. It is represented in terms of contact force in normal direction:
f^(n)_JKR = 4E^*a^3/3R^*-√(16πγ E^*a^3)
where E^* = (1 - ν^2_1/E_1 +1 - ν^2_2/E_2)^-1 is the effective Young's modulus, where E_1, E_2 are the Young's modulus and ν_1, ν_2 are the Poisson's ratios of the particles, γ is the surface energy density, a is the radius of the contacting surface, and R^* = (R_iR_j/R_i+R_j) is the effective radius, where R_i and R_j are the particle radii that are in contact.
The JKR model is based on the well-known Hertz <cit.> and Mindlin <cit.> contact models and characterizes the adhesive bonding behavior of the particles. In equation <ref>, the first term represents the Hertz force, k_nδ_n = 4E^*a^3/3R^*, and the second term is the adhesion force. The adhesive force acting in the contact area leads to deformation of the contact surface. Therefore, the contact area in the JKR model usually stretches more extensively than in the Hertz-Mindlin model. This is because the JKR model considers the surface energy density and interfacial adhesion characteristics of the particle surfaces, which affect the contact area and deformation. Therefore, the JKR model is capable of simulating large deformations with full surface contact, where the overlap exceeds the particle radius.
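A direct transcription of Eq. (<ref>) into Python is sketched below; it simply evaluates the JKR normal force from the contact radius and the effective elastic quantities defined above. Parameter names are illustrative.

```python
import numpy as np

def jkr_normal_force(a, r_eff, e_eff, gamma):
    """JKR normal contact force: Hertzian repulsion minus adhesion (Eq. <ref>).

    a     : contact radius
    r_eff : effective radius R* = Ri*Rj/(Ri+Rj)
    e_eff : effective Young's modulus E*
    gamma : surface energy density
    """
    hertz = 4.0 * e_eff * a**3 / (3.0 * r_eff)
    adhesion = np.sqrt(16.0 * np.pi * gamma * e_eff * a**3)
    return hertz - adhesion
```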
§.§ CFD-DEM coupling
<cit.> proposed the model A approach of the CFD-DEM coupling, in which the fluid invades only the porous regions (α_f) of the material. The fluid phase is described using the locally averaged Navier-Stokes equations, where the forces acting on a particle by the fluid are shared between solid and fluid phases:
{[ ∂α_f/∂ t + ∇· (α_f u)= 0; ; ∂ (ρ_f α_f u)/∂ t + ∇·( ρ_f α_f uu)= -α_f ∇ p +α_f ∇·τ + ρ_f α_f g + F^A_pf; ].
where ρ_f is the fluid density, α_f is the volume fraction occupied by the fluid, u is the velocity, p is the fluid pressure. The stress tensor is given by
τ=μ( (∇u)+(∇u)^T), where μ is the fluid dynamic viscosity, and F_pf^A=(1/Δ V)∑_i=1^n( f_d,i + f^”_i ) is the volumetric particle-fluid interaction force, where f^”_i is the sum of forces other than the drag, pressure gradient and viscous forces, and Δ V is the volume of a fluid cell.
The interaction force between a particle and fluid is an essential part of the CFD-DEM coupling, since it has a considerable impact on particle motion, which in turn can influence the fluid flow behavior. The pressure gradient force is given by f_∇ p,i = - ∇ p · V_p,i and viscous force is f_∇·τ,i = - (∇·τ) V_p,i, where V_p,i is the volume of a single particle. The particle drag force depends on the particle size and shape, fluid properties, relative velocity between particle and fluid, and drag coefficient. The general form of the drag force is expressed as follows:
f_d,i = π d^2_p,i (1/8ρ_f |u_i-v_i| (u_i -v_i) )C_d,i
where d_p, i is the particle diameter, u_i and v_i are the fluid and particle velocities and C_d, i is the drag coefficient.
The drag coefficient is a function of particle Reynolds number, that can be characterized by the fluid properties and its viscosity. For a creeping flow of a power-law fluid, the drag coefficient of a spherical particle is expressed taking into account the rheology of a non-Newtonian fluid <cit.>:
C_d, i = 24 χ(n)/Re_p, i
where Re_p, i is the particle Reynolds number, n is the power-law index and χ (n) is the porosity correction factor <cit.>:
χ (n) = 6^(n-1)/2(3/n^2 + n + 1)^n+1
The particle Reynolds number is given as follows <cit.>:
Re_p, i = ρ_f d_p, i^n |u_i-v_i|^2-n/k
where k is the consistency index.
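To make the fluid-particle coupling concrete, the sketch below combines Eqs. (<ref>)-(<ref>) into a single drag-force evaluation for a spherical grain in a creeping power-law flow. The grain diameter, fluid density and the power-law parameters k and n in the example are illustrative values, not the ones listed in the tables.

```python
import numpy as np

def power_law_drag(d_p, u_fluid, v_particle, rho_f, k, n):
    """Drag force on a spherical particle in a creeping power-law flow."""
    u_rel = np.asarray(u_fluid, dtype=float) - np.asarray(v_particle, dtype=float)
    speed = np.linalg.norm(u_rel)
    if speed == 0.0:
        return np.zeros_like(u_rel)
    re_p = rho_f * d_p**n * speed**(2.0 - n) / k                          # particle Reynolds number
    chi = 6.0**((n - 1.0) / 2.0) * (3.0 / (n**2 + n + 1.0))**(n + 1.0)    # correction factor chi(n)
    c_d = 24.0 * chi / re_p                                               # drag coefficient
    return np.pi * d_p**2 * 0.125 * rho_f * speed * u_rel * c_d           # drag force vector

# Example: a 0.5 mm grain in a shear-thinning solution (illustrative k, n)
f_d = power_law_drag(d_p=5e-4, u_fluid=[1.55e-4, 0.0, 0.0], v_particle=[0.0, 0.0, 0.0],
                     rho_f=1000.0, k=0.5, n=0.4)
print(f_d)
```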
§.§ Numerical setup of the simulation
In the numerical model, we reproduce the laboratory experiment (see Figure <ref>), in which the sample is a cylinder of 120 mm length and 38.1 mm diameter experiencing a radial confining stress of 500 psi (Figure <ref>). The core used in the laboratory experiments contains approximately a billion small particles, which is costly to model as it requires a very long simulation runtime. Therefore, we use a smaller representative sample in the numerical model to reduce the required computing power. Due to software limitations, we create the numerical sample as a cuboid, with dimensions of 17 mm in length and height and 8.8 mm in width. The outlet hole has a diameter of 3.175 mm, as in the experiment. We assume that the stress applied to the experimental sample is equal to the stress exerted on the side planes of the numerical sample. To ensure consistency between the numerical simulation and the laboratory experiment, we place the numerical sample so that its outlet hole corresponds to the experimental one in terms of both position and dimensions.
The numerical simulation consists of separate DEM and CFD systems. The DEM system domain is constructed as a cube with six planes on all sides. These planes are servo wall type and contain and compress the particles in the domain. Initially, the particles are generated in the domain with specific predefined material parameters, dimensions, and geometry. At this stage, the Hertz contact model <cit.> eliminates any cohesion effect between particles. All particles in the system have a spherical shape with different PSD. Due to the small size and complexity of the particles, the simulation with the current domain size is expected to be computationally expensive. Therefore, we use a coarse-graining method to accelerate the simulation. Specifically, this coarse-graining method was developed for the modified JKR model in a dense particle medium system with polydisperse particles <cit.>. Moreover, it has been successfully validated with the same PSD and material parameters used in this work. The material parameters and PSD used in this simulation are provided in Table <ref> and Table <ref>, respectively.
As the particles are generated, the numerical sample is compacted to achieve the desired shape and porosity, which are consistent with the experiment. The porosity of the numerical sample is equal to 40 %. The next step includes the confinement process, in which the sample is compressed with confining stress of 500 psi. It should be noted that the back and front planes are in a fixed position, and the sample is confined by only side planes. In this stage, we use the modification of the JKR contact model <cit.> instead of the Hertz model <cit.> to initiate the cementation process of the numerical sample and examine the bonding behavior of the particles. In the modified JKR model, the surface energy density determines how strongly the particles are bonded between each other and represent the cementation level of the sample in the experiment. We determine the bonding behavior by the total bonds in the sample.
Figure <ref> shows the average confining stress and total bond number in the sample during the confinement process. t_d, c is the dimensionless time that expresses the duration of the confinement process. We observe that the total number of bonded particles decreases as the stress applied to the sample increases. During the elastic deformation, which lasts from t_d, c = 0 to about t_d, c = 0.2, the number of bonds reduces moderately from 1.81 · 10^5 to 1.79 · 10^5. At the yield point (t_d, c = 0.2), there is an intensive bond breakage, which results in a dramatic decrease in bond number from 1.79 · 10^5 to 1.75 · 10^5 during the short period. Then, we experience the gradual breakage of bonds for almost half of the simulation time, in which the total number of bonds reaches the value of 1.715 · 10^5 at about t_d, c = 0.8. As the confining stress becomes stable, the number of bonds remains unchanged.
Figure <ref> demonstrates the snapshots of the confining process at different times. The color bar shows the bond number per particle, in which red is a particle with the maximum number of bonds and blue is an unbonded particle. Due to the compression, the bond breakage occurs near the outlet hole, from which the broken particles are then produced. We notice that the area of broken particles grows considerably with increasing stress.
After some time, as the confining stress becomes stable, the production of sand stops. Then, we start the DEM-CFD coupling to model the sand production by polymer flooding. The DEM geometry is fixed at this stage, and the CFD geometry is built to correspond to the particle sample with the same dimensions. Therefore, the position and dimensions of the CFD geometry are the same as in the DEM part. Figure <ref> shows the meshing and boundary conditions of the CFD model. In Figure <ref>a, to handle the unresolved case <cit.>, the CFD domain is segmented into a grid of 12x6x12 cells along the x, y, and z axes, accommodating multiple particles within each cell. The unresolved case is described by the following condition:
Δ h/d̅_̅p̅ > 3
where Δ h is the size of a single fluid cell and d̅_̅p̅ is the particle average diameter.
The size of the simulation time-step significantly impacts the stability and accuracy of the numerical modeling. Therefore, selecting the optimal time-step size is essential to obtain reliable and precise results. Smaller time-steps typically produce more precise results but also result in longer simulation runtimes. On the other hand, using a longer time-step may lead to unpredictable and unrealistic results. The Rayleigh time-step <cit.> is one of the common methods for selecting the most appropriate time-step size for DEM that ensures accurate and stable simulations. The Rayleigh critical time-step is expressed as follows:
Δ t_c = πR̅/β√(ρ_p/G)
where R̅ is the average radius of the particles, ρ_p is the density of the particles, G = E/(2(1+ν)) is the shear modulus of the particle, where ν is the Poisson's ratio, and β = 0.8766 + 0.163 ν.
In the CFD domain, the stability of the system is determined by the flow propagation time across the single fluid cell. Physically, the flow time through the cell should be at most one time step for correct operation. We use the Courant-Friedrichs-Lewy (CFL) condition <cit.> to calculate the critical time-step for CFD as follows:
C=U Δ t_CFD/Δ h<C_max
where C is the Courant number, U is the fluid flow velocity and Δ t_CFD is the time-step of the CFD simulation. The value of C_max depends on the solver time-integration scheme. Typically, it should be much less than 1 to maintain an ideal stable system.
The DEM and CFD time-steps in the coupling simulations may operate in consecutive or concurrent regimes <cit.>. In the consecutive regime, data interchange between the DEM and CFD takes place sequentially on the same core. In this situation, it is best to use resources efficiently by allowing all cores to be active at all times. In the concurrent regime, the DEM and CFD coupling computations occur in parallel at the same time-step using separate cores. The time-step difference between the two models affects how stable CFD-DEM coupling simulations operate. By choosing appropriate CFD and DEM time-steps, one may manage the simulation duration and enhance the precision of the results. The DEM time-step is often much smaller than the CFD time-step: for every CFD time-step, there may be several DEM time-steps. In this simulation, we choose the case in which the DEM time-step of Δ t_DEM = 5 · 10^-8 s is 100 times smaller than the CFD time-step, which is equal to Δ t_CFD = 5 · 10^-6 s. These time-step options are adapted from the work of <cit.>, where the optimal time-step selection for CFD and DEM was found and verified by investigating sand production in the Kazakhstan oilfield reservoir. According to equation <ref>, the DEM time-step size, which is 4.26 % of the Rayleigh critical time-step, is within the acceptable range. The CFD time-step also satisfies the CFL condition (equation <ref>) with a Courant number of 5.38 · 10^-7, which is substantially smaller than 1.
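The two stability criteria can be checked with a short Python sketch, assuming the shear modulus is written as G = E/(2(1+ν)) as above. The particle radius, density and elastic constants in the example are illustrative assumptions (the actual values are given in the tables), while the injection velocity, CFD time-step and cell size follow the quantities quoted in the text.

```python
import numpy as np

def rayleigh_dt(r_mean, rho_p, young, poisson):
    """Rayleigh critical DEM time-step (assuming G = E / (2*(1 + nu)))."""
    shear_mod = young / (2.0 * (1.0 + poisson))
    beta = 0.8766 + 0.163 * poisson
    return np.pi * r_mean / beta * np.sqrt(rho_p / shear_mod)

def courant_number(u, dt_cfd, dh):
    """CFL Courant number C = U * dt / dh; C << 1 for a stable coupled run."""
    return u * dt_cfd / dh

# Illustrative check (material parameters assumed, flow quantities from the text)
dt_ray = rayleigh_dt(r_mean=2.5e-4, rho_p=2650.0, young=7e7, poisson=0.3)
print(dt_ray, 5e-8 / dt_ray)                       # DEM step as a fraction of dt_ray
print(courant_number(u=1.55e-4, dt_cfd=5e-6, dh=17e-3 / 12))
```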
In the CFD part, the Xanthan Gum (XG) polymer solution, selected as the injection fluid, is characterized as a non-Newtonian fluid. In the simulation, we use the power-law parameters from Table <ref>, obtained by investigating its rheology. Figure <ref>b shows the boundary conditions of the CFD. The side planes to which the confining stress is applied are assigned periodic boundary conditions, depicted in green. The front plane is set to "no slip", represented by the brown color, while the outlet hole corresponds to an atmospheric pressure of P = 0 (red color). We inject the fluid into the domain from the back plane, opposite the front plane. The injection velocity of the fluid is U = 1.55 · 10^-4 m/s, which corresponds to the laboratory experiment. The initial conditions are U(0) = 0 and P(0) = 0. The rectangular outlet hole in the CFD is resolved into smaller cells to match the dimensions of the circular hole in the DEM.
§ RESULTS AND DISCUSSION
§.§ Experimental results
§.§.§ Brine injection
Figure <ref> shows the pressure gradient versus injected pore volume of brine at different flow rates. We observed a general trend of increasing pressure gradient with flow rate. Notable fluctuations in the pressure gradient were seen at flow rates of 1 and 2 mL/min because the pressure sensors were not accurate enough at low flow rates. Nevertheless, the pressure transmitters showed steady results of 6.51 · 10^5 Pa/m from 13 to 40 PV of injection at a flow rate of 3 mL/min. At a flow rate of 4 mL/min, the pressure gradient increases up to 8.14 · 10^5 Pa/m, and a value of 13.00 · 10^5 Pa/m is observed at the flow rate of 5 mL/min. The impact of brine injection on sand production was also studied from the effluent at the outlet. However, no sand production was observed at any of the flow rates.
§.§.§ Impact of polymer flooding on sand production
After the brine injection test, and determining the permeability of the core (see Table <ref>), the XG polymer solution is injected horizontally. The flow rates below 3 mL/min were not studied because of the instability of pressure sensors at lower flow rates. As mentioned before, the XG solution is a non-Newtonian shear-thinning fluid whose rheogram is presented in Figure <ref>.
Figure <ref> presents the evolution of the pressure gradient per pore volume of polymer injected (in PV) at 3 and 5 mL/min flow rates. The breakthrough of the polymer solution occurred around 0.82 PV for both flow rates. We noticed that the pressure gradient is the same, around 2.00 · 10^7 Pa/m, despite the change in flow rate from 3 to 5 mL/min. This phenomenon can be explained by the non-Newtonian behavior of the polymer solution because the viscosity of the XG solution decreases by increasing the flow velocity, i.e., the shear rate (see Figure <ref>).
To observe the impact of polymer on the pressure gradient, we plotted the pressure gradient against PV data of the brine and XG polymer solution in Figure <ref>. The polymer solution increased the pressure gradient by a factor of 30 compared to the brine injection. Assuming that the equivalent shear rate is at the maximum value of 100 1/s, the viscosity is roughly 0.08 Pa.s, which is 80 times more viscous than water. However, note that the XG solution viscosity field distribution across the core will differ since the shear rate distribution can vary across the core sample due to the pore radii.
As was mentioned in the previous section, there was no sand production during the brine injection, even at the flow rate of 5 mL/min. However, we noticed sand production for the polymer injection at a flow rate of 3 mL/min, plotted in Figure <ref>. Due to the higher viscosity (i.e., viscous forces), the XG polymer solution mobilized the sand particles. Moreover, a solid shear-thinning behavior of XG solution may enhance the mobilization of poorly consolidated sand particles.
As a result, a total of 0.359 g of sand was produced after 5 PV of injection of the polymer solution. 44.5% of the total sand is produced after 1 PV of polymer injection, and sand production decreases with injected PV. Based on the trend equation fitted to the results shown in the figure, the cumulative sand production equals 0.15 PV^0.47.
§.§ Numerical results
Initially, the numerical model is verified with the experimental data by comparing the dimensionless cumulative sand production rate along the dimensionless time, which is given as follows:
t_d = (t/t_end)
where t is the current time, t_end is the transient phase starting time when the cessation of sand production occurs. We define the dimensionless cumulative sand production rate using the following expression:
M^t_d = (∫_t=0^tM_tdt/∫_t=0^t_endM_tdt)
where M_t is the cumulative mass of produced sand at time t.
Figure <ref> compares the dimensionless cumulative mass of produced sand of the numerical model and the experimental data at time t_d. The red curve, which results from the numerical simulation, shows only the sand production due to fluid injection. The mass of the sand produced during the confinement process is not considered, in order to examine only the polymer effect on sand production. Although the experimental results contain only six data points, they clearly reflect the general pattern of sand production. From the last two points, it can be concluded that sand production during this period remains unchanged and acquires a transient character. We observe that the numerical model results are in relatively good agreement with the experimental data and demonstrate a similar pattern of the dimensionless cumulative sand production curve.
To better examine the accuracy between the numerical model and experimental data, the root mean squared relative error (RMSRE) is calculated using the following equation:
RMSRE =√( (1/n)∑_i=1^n | (M^i_d, exp - M^i_d,model)/M^i_d, exp |^2 )
where M_d,exp^i is the i-th dimensionless cumulative produced sand mass in the experiment and M_d,model^i is the i-th dimensionless cumulative produced sand mass in the model. In accordance with equation (<ref>), the RMSRE value for the numerical model is 0.137.
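The error metric of equation (<ref>) is straightforward to evaluate; a minimal Python sketch is given below. The sample values used in the call are illustrative placeholders, not the measured or simulated points of Figure <ref>.

```python
import numpy as np

def rmsre(m_exp, m_model):
    """Root mean squared relative error between experimental and modeled values."""
    m_exp = np.asarray(m_exp, dtype=float)
    m_model = np.asarray(m_model, dtype=float)
    return np.sqrt(np.mean(np.abs((m_exp - m_model) / m_exp)**2))

# Illustrative dimensionless cumulative sand-production points (not the measured data)
print(rmsre([0.45, 0.70, 0.85, 0.95, 1.0, 1.0],
            [0.50, 0.62, 0.80, 0.93, 0.99, 1.0]))
```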
Figure <ref> represents a series of snapshots of the sample's fluid velocity, fluid streamline, and particle velocities. The capture time of the snapshots corresponds to the 1^st, 3^rd and 6^th points of the experimental data, which are t_d = 0.17, t_d = 0.5, and t_d = 1 in dimensionless time, respectively. These snapshots provide clear insights into fluid flow behavior and particle movement at different sand production phases. For example, at the beginning of injection, the fluid has a higher velocity at the center near the outlet hole with a non-uniform streamlines distribution, leading to a greater velocity of sand particles. Therefore, we observe a higher rate of sand production due to the intensive movement of sand toward the outlet. However, as time progresses, fluid and particle velocity decreases, resulting in reduced sand production. At the final time, the velocity of the fluid substantially decreases with uniformly distributed streamlines. The sand production enters the transient regime, and no further production occurs.
The fluid flow velocity in the sample mainly influences the fluid's viscosity. Figure <ref> demonstrates the change in apparent fluid viscosity in the sample at different periods. During the intensive sand production (t_d = 0.17) and at the midpoint ( t_d = 0.5), the fluid exhibits lower viscosity at the whole sample. As flow is uniformly distributed throughout the sample, the fluid becomes more viscous in more tightly bonded regions where the flow velocity is less. On the other hand, in areas with unbonded particles, especially near the outlet hole, the fluid viscosity decreases.
Figure <ref> demonstrates the total number of bonds and contacts between particles in the sample. We see that the injected polymer also affects bond breakage. The fluid not only flushes out the already broken particles due to the confinement process but also causes the breakage of bonds with their consecutive production. During the confinement process, as shown in Figure <ref>, the bond breakage occurs with some fluctuations due to the applied confining stress and characteristics of the sample material. On the other hand, the injection of a polymer leads to a stable bond breakage with a gradual decrease in the total number of bonds. As the sand production becomes transient, the total number of bonds reduces by 0.8 · 10^5 bonds from the initial injection state, reaching the 1.645 · 10^5 bonds in the sample. During the confinement process, the reduction of the bonds is equal to 0.9 · 10^5, which is slightly higher than in polymer injection. These findings indicate that the polymer injection demonstrates relatively similar results to the confinement process regarding bond breakage.
The blue curve in Figure <ref> displays the total number of contacts in the sample during the polymer injection. Initially, the contacts decrease sharply from 1.99 · 10^5 to 1.97 · 10^5 and follow a pattern similar to that of the bond-number curve. This implies that the polymer, with its higher initial velocity, effectively removes all unbonded particles. However, as the fluid becomes steady, the number of contacts decreases less than the number of bonds, indicating that some particles remain in contact even though they no longer interact adhesively. After the gradual reduction, both the bond and contact numbers reach constant values at approximately t_d = 0.8 in dimensionless time, which marks the cessation of sand production.
Figure <ref> shows snapshots of the bond number per particle during the polymer injection. The snapshots are captured at t_d = 0, t_d = 0.17, t_d = 0.5 and t_d = 1. While there is a slight increase in the breakage area between t_d = 0 and t_d = 0.17, we do not see significant differences from t_d = 0.17 to t_d = 1, since the bonds decrease only gradually during this period. Moreover, it should be kept in mind that some of the broken particles leave the sample through the hole as time progresses; this effect contributes to the similar extent of the broken-particle area near the hole.
Since the sample contains particles of different sizes, we compare the size distribution of the produced particles with the initial PSD of the sample. Figure <ref> compares the mass ratio of each particle size for the initial and produced particles. The final PSD of the produced particles is measured once sand production enters the transient phase. We notice that the mass of the small particles differs less between the initial and produced material than that of the large ones. For example, the four smallest particle sizes (0.3 mm, 0.36 mm, 0.4 mm, and 0.44 mm) have approximately similar mass ratios in the initial and produced particles. While the mass ratio of the produced 0.5 mm and 0.55 mm particles is higher than in the initial particles, the mass ratio of the produced 0.6 mm and 0.71 mm particles is lower than the mass ratio of particles of the same size in the sample before fluid injection. These findings suggest that the dominant source of sand production can be attributed to particles of 0.5 mm and 0.55 mm.
§ CONCLUSIONS
The primary aim of this study is to experimentally and numerically investigate the effect of polymer injection on sand production in poorly consolidated sandstone. The artificially made sandstone is prepared based on the PSD of the Kazakhstan oil field. The production of sand from this sandstone is studied by injecting the polymer solution whose viscosity has been characterized using rotational rheometry. In the numerical model, the modified cohesive contact model is used to model the cementation behavior of the sample. The power-law model describes the non-Newtonian fluid flow. The fluid-particle interaction is characterized by solving the drag force based on the power-law model in the CFD and DEM coupling.
Experimentally, no sand was produced by injecting the brine solution, even at high flow rates under a constant radial confining pressure of 500 psi. In contrast, we observed a clear impact of the polymer solution injection on the production of sand. Moreover, 44.5% of the produced sand was obtained after 1 PV of polymer solution injection, i.e., at breakthrough. The power-law model represents the Xanthan gum solution well, and the fitted parameters were used as input for the numerical investigation.
The numerical model is verified against the results of the laboratory experiment. The findings suggest that the numerical model is consistent with the experimental data, demonstrating a similar curve pattern in the dimensionless cumulative sand production by fluid injection. To examine the sanding behavior under polymer flooding, we compare different results such as fluid and particle velocities, fluid viscosity, bonding behavior of sand particles, and the PSD of produced particles at different time intervals. We observe that the polymer injection has a significant effect on bond breakage. While the bonded particles break in an unstable manner during the confinement process, the debonding of the particles occurs gradually during the fluid injection. Therefore, sand production originates not only from particles already broken due to the confinement but also from bonds broken by the polymer injection. At the beginning of the injection, we noticed an intensive sand production rate caused by the higher velocity of the fluid. The sand production rate enters the transient phase as the fluid flow becomes stationary. The fluid has a lower viscosity in the more permeable zones created by extensive debonding. On the other hand, the fluid viscosity is higher in regions with an abundance of bonded particles. The difference in mass ratio between the produced particles and the initial sample is much more significant for larger particles, while the fine particles have approximately the same mass ratios. Almost 60% of the produced sand consists of medium-sized particles with diameters of 0.5 mm and 0.55 mm.
This knowledge can guide the study of sand production by polymers, i.e., non-Newtonian fluids in porous media, especially for poorly consolidated porous media. We expect our study to be a starting point for further research on sand production by non-Newtonian fluids in poorly consolidated porous media.
Inductive graded rings, hyperfields and quadratic forms
Kaique Matias de Andrade Roberto and Hugo Luiz Mariano
Institute of Mathematics and Statistics, University of São Paulo, Brazil. Emails: [email protected], [email protected]
The authors want to express their gratitude to Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes, Brazil) for the financial support to develop this work. The second author: program Capes-Print number 88887.694866/2022-00.
The goal of this work is twofold: (i) to provide a detailed analysis of some categories of inductive graded ring - a concept introduced in <cit.> in order to provide a solution of Marshall's signature conjecture in the algebraic theory of quadratic forms; (ii) apply this analysis to deepen the connections between the category of special hyperfields (<cit.>) - equivalent to the category of special groups (<cit.>) and the categories of inductive graded rings.
§ INTRODUCTION
It can be said that the Algebraic Theory of Quadratic Forms (ATQF) was founded in 1937 by E. Witt, with the introduction of the concept of the Witt ring of a given field, constructed from the quadratic forms with coefficients in the field: given an arbitrary field F of characteristic ≠ 2, the Witt ring W(F) classifies the regular anisotropic quadratic forms over F, being in one-to-one correspondence with them; thus the focus of the theory is the quadratic forms defined over the ground field all of whose coefficients are invertible. In this way, the set of orderings of F is in one-to-one correspondence with the set of
minimal prime ideals of the Witt ring of F; moreover, the set of orderings of F endowed with the Harrison topology is a Boolean topological space that, via the bijection above, is identified with a subspace of the Zariski spectrum of the Witt ring of F.
Questions about the structure of the Witt rings W(F) could only be settled about three decades after Witt's original idea, through the introduction and analysis of the concept of Pfister forms.
The Pfister forms of degree n ∈ℕ, in turn, are generators of the power I^n(F) of the fundamental ideal I(F)⊆ W(F) (the ideal determined by the anisotropic forms of even dimension).
Other finer questions about the powers of the fundamental ideal arose in the early 1970s: J. Milnor, in a seminal article from 1970 (<cit.>), determines a graded ring k_∗(F) (its K-theory reduced mod 2) associated with the field F, which interpolates, through the graded ring morphisms
h_∗(F) : k_∗(F)⟶ H^∗(F) and s_∗(F) : k_∗(F)⟶ W_∗(F),
the graded Witt ring
W_∗(F) := ⊕_n ∈ℕ I^n(F)/I^n+1(F)
and the graded cohomology ring
H^∗(F) := ⊕_n ∈ℕ H^n(Gal(F^s|F), {± 1}).
From Voevodsky's proof of Milnor's conjectures, and from the development of the theory of special groups
(SG) – an abstract (and first-order) theory of the ATQF, introduced by M. Dickmann and developed by him in partnership with F. Miraglia
since the 1990s – it has been possible to prove conjectures about signatures posed by M. Marshall and by T. Lam in the mid-1970s (<cit.>, <cit.>, <cit.>).
The SG theory, which faithfully codifies both the classical theory of quadratic forms over fields and the reduced theory of quadratic forms developed from the 1980s (<cit.>), allows us to naturally extend the construction of graded ring functors to all the special groups G: W(G), W_*(G), k_*(G) (<cit.>, <cit.>).
The key points in the proof of these conjectures for (pre-ordered) fields were a combination of methods: (i) the introduction of Boolean methods into the theory of quadratic forms through the SG theory, especially the Boolean hull functor (<cit.>, <cit.>); (ii) the encoding of the original problems posed on signatures into questions on graded Witt rings; (iii) the use of Milnor's isomorphisms to transpose these questions to the graded ring of k-theory and the graded ring of cohomology; (iv) the use of Galois cohomology methods to finalize the resolution of the encoded problem.
In <cit.> we developed a k-theory for the category of hyperbolic hyperfields (a category that contains a copy of the category of (pre)special groups): this construction extends, simultaneously, Milnor's k-theory (<cit.>) and Dickmann-Miraglia's k-theory (<cit.>). An abstract environment that encapsulates all of them, and, of course, provides an axiomatic approach to guide new extensions of the concept of K-theory in the context of the algebraic and abstract theories of quadratic forms, is given by the concept of inductive graded rings, a notion introduced in <cit.> in order to provide a solution of Marshall's signature conjecture in the realm of the algebraic theory of quadratic forms for Pythagorean fields.
The goal of this work is twofold: (i) to provide a detailed analysis of some categories of inductive graded rings; (ii) to apply this analysis to deepen the connections between the category of special hyperfields (<cit.>) - equivalent to the category of special groups (<cit.>) - and the categories of inductive graded rings.
Outline of the work:...
We assume that the reader is familiar with some categorical results concerning adjunctions: mostly are based on <cit.>, but the reader could also consult <cit.>.
§ PRELIMINARIES: SPECIAL GROUPS, HYPERBOLIC HYPERFIELDS AND K-THEORY
§.§ Special Groups
Firstly, we make a brief summary on special groups. Let A be a set and ≡ a binary relation on A× A. We extend ≡ to a binary relation ≡_n on A^n, by induction on n≥1, as follows:
* ≡_1 is the diagonal relation Δ_A ⊆ A × A.
* ≡_2=≡.
* If n ≥ 3, ⟨ a_1,...,a_n⟩≡_n⟨ b_1,...,b_n⟩ if and only there are x,y,z_3,...,z_n∈ A such that
⟨ a_1,x⟩≡⟨ b_1,y⟩, ⟨ a_2,...,a_n⟩≡_n-1⟨ x,z_3,...,z_n⟩ and ⟨ b_2,...,b_n⟩≡_n-1⟨ y,z_3,...,z_n⟩.
Whenever clear from the context, we frequently abuse notation and indicate the afore-described extension ≡ by the same symbol.
A special group is a tuple (G,-1,≡), where G is a group of exponent 2,
i.e, g^2=1 for all g∈ G; -1 is a distinguished element of G, and ≡⊆ G×
G× G× G is a relation (the special relation), satisfying the following axioms for all
a,b,c,d,x∈ G:
SG 0 ≡ is an equivalence relation on G^2;
SG 1 ⟨ a,b⟩≡⟨ b,a⟩;
SG 2 ⟨ a,-a⟩≡⟨1,-1⟩;
SG 3 ⟨ a,b⟩≡⟨ c,d⟩⇒ ab=cd;
SG 4 ⟨ a,b⟩≡⟨ c,d⟩⇒⟨
a,-c⟩≡⟨-b,d⟩;
SG 5
⟨ a,b⟩≡⟨ c,d⟩⇒⟨ ga,gb⟩≡⟨
gc,gd⟩, g∈ G.
SG 6 (3-transitivity) the extension of ≡ for a binary relation on G^3 is a
transitive relation.
A group of exponent 2, with a distinguished element -1, satisfying the axioms SG0-SG3 and SG5 is called a proto special group; a pre special group is a proto special group that also satisfies SG4. Thus a special group is a pre-special group that satisfies SG6 (or, equivalently, for each n ≥ 1, ≡_n is an equivalence relation on G^n).
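As a quick sanity check of these axioms, the smallest nontrivial example can be verified mechanically: take the two-element group {1,-1} and declare ⟨a,b⟩≡⟨c,d⟩ exactly when the pairs agree up to order. The brute-force Python sketch below (an illustration only; this standard two-element example is assumed, not taken from the text above) confirms SG1-SG5.

from itertools import product

G = (1, -1)                              # group of exponent 2 under multiplication

def equiv(p, q):
    # <a,b> = <c,d> iff the pairs coincide up to order
    return sorted(p) == sorted(q)

pairs = list(product(G, repeat=2))
# SG1: <a,b> = <b,a>
assert all(equiv((a, b), (b, a)) for a, b in pairs)
# SG2: <a,-a> = <1,-1>
assert all(equiv((a, -a), (1, -1)) for a in G)
# SG3: <a,b> = <c,d> implies ab = cd
assert all(a * b == c * d
           for (a, b) in pairs for (c, d) in pairs if equiv((a, b), (c, d)))
# SG4: <a,b> = <c,d> implies <a,-c> = <-b,d>
assert all(equiv((a, -c), (-b, d))
           for (a, b) in pairs for (c, d) in pairs if equiv((a, b), (c, d)))
# SG5: <a,b> = <c,d> implies <ga,gb> = <gc,gd>
assert all(equiv((g * a, g * b), (g * c, g * d)) for g in G
           for (a, b) in pairs for (c, d) in pairs if equiv((a, b), (c, d)))
print("SG1-SG5 hold for the two-element example")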
An n-form (or form of dimension n≥1) is an n-tuple of elements of a pre-SG G. An element b∈ G is represented on G by the form φ=⟨ a_1,...,a_n⟩, in symbols b∈ D_G(φ), if there exist b_2,...,b_n∈ G such that ⟨ b,b_2,...,b_n⟩≡φ.
A pre-special group (or special group)
(G,-1,≡) is:
∙ formally real if -1 ∉⋃_n ∈ℕ D_G( n⟨ 1 ⟩)[Here the notation n⟨ 1 ⟩ means the form ⟨ a_1,...,a_n⟩ where a_j=1 for all j=1,...,n. In other words, n⟨ 1 ⟩ is the form ⟨ 1 ,...,1⟩ with n entries equal to 1.] ;
∙ reduced if it is formally real and, for each a ∈ G, a ∈ D_G(⟨ 1, 1 ⟩) iff a =1.
A map f:(G,≡_G,-1)→ (H,≡_H,-1) between pre-special groups is a morphism
of pre-special groups or PSG-morphism if f:G→ H is a homomorphism of groups, f(-1)=-1 and for all
a,b,c,d∈ G
⟨ a,b⟩≡_G⟨ c,d⟩⇒⟨ f(a),f(b)⟩≡_H⟨ f(c),f(d)⟩
A morphism of special groups or SG-morphism is a PSG-morphism between the corresponding pre-special groups. The morphism f
is an isomorphism if it is bijective and both f and f^-1 are PSG-morphisms.
It can be verified that a special group G is formally real iff it admits some SG-morphism f : G → 2. The category of special groups (respectively reduced special groups) and their morphisms will be denoted by 𝒮𝒢
(respectively ℛ𝒮𝒢).
* A reduced special group G is [MC] if for all n≥1 and all forms φ over G,
if σ(φ)≡ 0 (mod 2^n) for all σ∈ X_G, then φ∈ I^nG.
* A reduced special group G is [SMC] if for all n≥1, the multiplication by λ(-1) is an injection of k_nG into k_n+1G.
§.§ Multifields/Hyperfields
Roughly speaking, a multiring is a “ring” with a multivalued addition, a notion introduced in the 1950s in Krasner's works. The notion of
multiring was brought into the toolkit of quadratic forms by M. Marshall in the last decade (<cit.>). We gather the basic information about multirings/hyperfields and expand some details that we use in the context of K-theories. For more detailed calculations involving multirings/hyperfields and quadratic forms we refer the reader to <cit.> (or even <cit.> and <cit.>). Of course, multi-structures are an entire subject of research in their own right (which escapes from the "quadratic context"), and in this sense, we indicate the references <cit.>, <cit.>, <cit.>.
A multigroup is a quadruple (G,∗,r,1), where G is a non-empty set, ∗:G× G→𝒫(G)∖{∅} and r:G→ G
are functions, and 1 is an element of G satisfying:
* If z∈ x∗ y then x∈ z∗ r(y) and y∈ r(x)∗ z.
* y∈ 1∗ x if and only if x=y.
* With the convention x∗(y∗ z)=⋃_w∈ y∗ zx∗ w and
(x∗ y)∗ z=⋃_t∈ x∗ yt∗ z,
x∗(y∗ z)=(x∗ y)∗ z for all x,y,z∈ G.
A multigroup is said to be commutative if
* x∗ y=y∗ x for all x,y∈ G.
Observe that by (i) and (ii), 1∗ x=x∗ 1={x} for all x∈ G. When a∗ b={x} is a singleton, we simply write
a∗ b=x.
A multiring is a sextuple (R,+,·,-,0,1) where R is a non-empty set, +:R×
R→𝒫(R)∖{∅},
·:R× R→ R
and -:R→ R are functions, 0 and 1 are elements of R satisfying:
* (R,+,-,0) is a commutative multigroup;
* (R,·,1) is a commutative monoid;
* a.0=0 for all a∈ R;
* If c∈ a+b, then c.d∈ a.d+b.d. Or equivalently, (a+b).d⊆ a.d+b.d.
Note that if a ∈ R, then 0 = 0.a ∈ (1+ (-1)).a ⊆ 1.a + (-1).a, thus (-1). a = -a.
R is said to be an hyperring if for a,b,c ∈ R, a(b+c) = ab + ac.
A multiring (respectively, a hyperring) R is said to be a multidomain (hyperdomain) if it has no zero divisors. A multiring R will be a
multifield if every non-zero element of R has a multiplicative inverse; note that hyperfields and multifields coincide. We will use "hyperfield" since this is the prevailing terminology.
Let A and B multirings. A map f:A→ B is a morphism if for all a,b,c∈ A:
* c∈ a+b⇒ f(c)∈ f(a)+f(b);
* f(-a)=-f(a);
* f(0)=0;
* f(ab)=f(a)f(b);
* f(1)=1.
If A and B are multirings, a morphism f A → B is a strong morphism if for all a,b, c∈ A, if f(c) ∈ f(a) + f(b), then there are a',b',c' ∈ A with f(a') = f(a),f(b') = f(b), f(c') = f(c) such that c' ∈ a' + b'.
In the quadratic context, there is a more detailed analysis in Example 2.10 of <cit.>.
A multiring is a sextuple (R,+,·,-,0,1) where R is a non-empty set, +:R× R→𝒫(R)∖{∅}, ·:R× R→ R and -:R→ R are functions, 0 and 1 are elements of R satisfying:
* (R,+,-,0) is a commutative multigroup;
* (R,·,1) is a monoid;
* a0=0 for all a∈ R;
* If c∈ a+b, then cd∈ ad+bd and dc∈ da+db. Or equivalently, (a+b)d⊆ ad+bd and d(a+b)⊆ da+db.
* If the equalities hold, i.e., (a+b)d=ad+bd and d(a+b)=da+db, we say that R is a hyperring.
A multiring is commutative if (R,·,1) is a commutative monoid. A zero-divisor of a multiring R is a non-zero element a∈ R such that ab=0 for another non-zero element b∈ R. The multiring R is said to be a multidomain if it does not have zero divisors, and R will be a multifield if 1≠0 and every non-zero element of R has a multiplicative inverse.
* Suppose that (G,+,0) is an abelian group. Defining a + b = {a + b} and r(g)=-g,
we have that (G,+,r,0) is an abelian multigroup. In this way, every ring, domain and field is a multiring,
multidomain and hyperfield, respectively.
* Q_2={-1,0,1} is hyperfield with the usual product (in ℤ) and the multivalued sum defined by
relations
0+x=x+0=x, x∈ Q_2
1+1=1, (-1)+(-1)=-1
1+(-1)=(-1)+1={-1,0,1}
* Let K={0,1} with the usual product and the sum defined by relations x+0=0+x=x, x∈ K and
1+1={0,1}. This is a hyperfield called Krasner's hyperfield <cit.>.
Now, another example that generalizes Q_2={-1,0,1}. Since this is a new one, we will provide the entire verification that it is a
multiring:
[Kaleidoscope, Example 2.7 in <cit.>]
Let n∈ℕ and define
X_n={-n,...,0,...,n}⊆ℤ.
We define the n-kaleidoscope multiring by
(X_n,+,·,-, 0,1), where - : X_n → X_n is restriction of the opposite map in ℤ, +:X_n×
X_n→𝒫(X_n)∖{∅} is given by the rules:
a+b= {a}, if b≠ -a and |b|≤|a|;
{b}, if b≠ -a and |a|≤|b|;
{-a,...,0,...,a}, if b=-a;
and ·:X_n× X_n→ X_n is given by the rules:
a· b= sgn(ab)max{|a|,|b|}, if a,b≠0;
0, if a=0 or b=0.
With the above rules we have that (X_n,+,·, -, 0,1) is a multiring.
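The case analysis above is easy to mis-parse, so the following Python sketch (illustrative only) spells out the two operations and runs a brute-force check of the associativity requirement on X_2.

def sgn(x):
    return (x > 0) - (x < 0)

def k_add(a, b):
    # multivalued sum of the kaleidoscope: {a} if b != -a and |b| <= |a|,
    # {b} if b != -a and |a| <= |b|, and {-a,...,0,...,a} if b = -a
    if b == -a:
        return set(range(-abs(a), abs(a) + 1))
    return {a} if abs(b) <= abs(a) else {b}

def k_mul(a, b):
    # product: sgn(ab)*max(|a|,|b|) for a,b != 0, and 0 otherwise
    if a == 0 or b == 0:
        return 0
    return sgn(a * b) * max(abs(a), abs(b))

print(k_mul(-2, 1))   # -> -2

# brute-force check of associativity of the multivalued sum on X_2 = {-2,...,2}
X = range(-2, 3)
def add_to_set(S, c):
    return set().union(*(k_add(s, c) for s in S))
assert all(add_to_set(k_add(a, b), c) == set().union(*(k_add(a, t) for t in k_add(b, c)))
           for a in X for b in X for c in X)
print("X_2: associativity of the multivalued sum verified")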
Now, another example that generalizes K={0,1}.
[H-hyperfield, Example 2.8 in <cit.>]
Let p≥1 be a prime integer and H_p:={0,1,...,p-1}⊆ℕ. Now, define the binary multioperation and operation in H_p as
follows:
a+b = H_p, if a=b and a,b≠0;
{a,b}, if a≠ b and a,b≠0;
{a}, if b=0;
{b}, if a=0;
a· b = the unique k with 0≤ k<p and k≡ ab (mod p).
(H_p,+,·,-, 0,1) is a hyperfield such that for all a∈ H_p, -a=a. In fact, these H_p are a kind of generalization of K, in the sense that H_2=K.
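Similarly, a short sketch (again purely illustrative) of the H_p operations makes the case distinctions explicit and confirms that H_2 reproduces Krasner's hyperfield K.

def h_add(a, b, p):
    # multivalued sum in H_p = {0,1,...,p-1}; recall that -a = a here
    if a == 0:
        return {b}
    if b == 0:
        return {a}
    if a == b:
        return set(range(p))      # a + (-a) = a + a = H_p
    return {a, b}

def h_mul(a, b, p):
    # product: the unique k with 0 <= k < p and k = ab (mod p)
    return (a * b) % p

# H_2 coincides with Krasner's hyperfield K = {0,1}:
print(h_add(1, 1, 2))   # {0, 1}
print(h_add(1, 0, 2))   # {1}
print(h_mul(1, 1, 2))   # 1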
There are many natural constructions on the category of multirings, such as products, directed inductive limits, quotients by ideals and
localizations by multiplicative subsets. Now, we present some constructions that will be used further on. For the first one, we need to restrict our category:
An hyperbolic multiring is a multiring R such that 1-1=R. The category of hyperbolic multirings and hyperbolic hyperfields will be denoted by ℋℳℛ and ℋℳℱ respectively.
Let F_1 and F_2 be two hyperbolic hyperfields. We define a new hyperbolic hyperfield (F_1×_h F_2,+,-,·,(0,0),(1,1)) by the following: the underlying set of this structure is
F_1×_h F_2:=(Ḟ_1×Ḟ_2)∪{(0,0)}.
For (a,b),(c,d)∈ F_1×_h F_2 we define
-(a,b) =(-a,-b),
(a,b)·(c,d) =(a· c,b· d),
(a,b)+(c,d) ={(e,f)∈ F_1× F_2:e∈ a+c and f∈ b+d}∩(F_1×_h F_2).
In other words, (a,b)+(c,d) is defined in order to avoid elements of F_1× F_2 of type (x,0),(0,y) with x,y≠0.
[Product of Hyperbolic Hyperfields]
Let F_1,F_2 be hyperbolic hyperfields and F_1×_h F_2 as above. Then F_1×_h F_2 is a hyperbolic hyperfield and satisfy the Universal Property of product for F_1 and F_2.
We will verify the conditions of Definition <ref> (in a very similar manner as in Theorem 3.3 of <cit.>). Note that by the definition of the multivalued sum in F_1×_h F_2 we have, for all a,c∈ F_1 and all b,d∈ F_2, that (a,b)+(c,d)≠∅ and (a,b)-(a,b)=F_1×_h F_2 if a,b≠0.
* In order to prove that (F_1×_h F_2,+,-,(0,0)) is a multigroup we follow the steps below.
* We will prove that if (e,f)∈(a,b)+(c,d), then (a,b)∈(c,d)+ (-e,-f) and (c,d)∈ (-a,-b)+(e,f).
If (a,b)=(0,0) (or (c,d)=(0,0)) or (a,b)=(-c,-d), then (e,f)∈(a,b)+(c,d) means (e,f)=(c,d) or e∈ a-a, f∈ b-b. In both cases we get (a,b)∈(c,d)+(-e,-f) and (c,d)∈ (-a,-b)+(e,f).
Now suppose a,b,c,d≠0 with a≠-c, b≠-d. Let (e,f)∈(a,b)+(c,d). Then (e,f)∈ F_1×_hF_2 with e∈ a+c and f∈ b+d. Moreover a∈ c-e and b∈ d-f with (a,b)∈ F_1×_hF_2, implying (a,b)∈(c,d)+ (-e,-f). We prove (c,d)∈ (-a,-b)+(e,f) by the very same argument.
* Commutativity and ((a,b)∈(c,d)+(0,0))⇔((a=b)∧ (c=d)) are direct consequence of the definition of multivaluated sum.
* Now we prove the associativity, that is,
[(a,b)+(c,d)]+(e,f)=(a,b)+[(c,d)+(e,f)].
In fact (see the remarks after Lemma 2.4 of <cit.>), it is enough to show
[(a,b)+(c,d)]+(e,f)⊆(a,b)+[(c,d)+(e,f)].
If (a,b)=0 or (c,d)=0 or (e,f)=0 we are done. Now let a,b,c,d,e,f≠0 and (v,w)∈[(a,b)+(c,d)]+(e,f). If (c,d)=-(e,f), we have
(a,b)+[(c,d)+(e,f)]=(a,b)+F_1×_hF_2=F_1×_hF_2⊇ [(a,b)+(c,d)]+(e,f).
If -(e,f)∈(a,b)+(c,d) then -(a,b)∈(c,d)+(e,f) and we have
[(a,b)+(c,d)]+(e,f)=F_1×_hF_2=(a,b)+[(c,d)+(e,f)]
Now suppose a,b,c,d,e,f≠0, (c,d)≠-(e,f), -(e,f)∉(a,b)+(c,d). Let (x,y)∈[(a,b)+(c,d)]+(e,f). Then there exists (v,w)∈ F_1×_hF_2 such that (v,w)∈(a,b)+(c,d) and (x,y)∈(v,w)+(e,f). This implies (v∈ a+c)∧(x∈ v+e) and (w∈ b+d)∧(y∈ w+f), so there exist p∈ F_1, q∈ F_2 such that
(v∈ a+p)∧(p∈ c+e) and (w∈ b+q)∧(q∈ d+f). If p,q=0 or p,q≠0 we have (p,q)∈ F_1×_hF_2, which implies (v,w)∈ [(a,b)+(c,d)]+(e,f). If p=0 and q≠0 (the case q=0 and p≠0 is analogous), then v=a and c=-e. Since a,c≠0 and F_1 is hyperbolic, we have a-a=c-c=F_1. Then (v∈ a-a)∧(-a∈ c-c) and (w∈ b+q)∧(q∈ d+f), with (-a,q)∈ F_1×_hF_2 and again, we get (v,w)∈ [(a,b)+(c,d)]+(e,f).
* Since (F_1×_h F_2,·,(1,1)) is an abelian group, we conclude that (F_1×_h F_2,·,(1,1)) is a commutative monoid. Beyond this, every nonzero element of F_1×_h F_2 has an inverse.
* (a,b)·(0,0)=(0,0) for all (a,b)∈ F_1×_h F_2 is direct from definition.
* For the distributive property, let (a,b),(c,d),(e,f)∈ F_1×_h F_2 and consider (x,y)∈(e,f)[(a,b)+(c,d)]. We need to prove that
*(x,y)∈(e,f)·(a,b)+(e,f)·(c,d).
It is the case if (a,b)=(0,0), or (c,d)=(0,0) or (e,f)=(0,0). Moreover (*) also holds if a,b,c,d,e,f≠0 and (c,d)=-(a,b).
Now suppose a,b,c,d,e,f≠0 and (c,d)≠-(a,b). Then (x,y)=(ev,fw) for some (v,w)∈(a,b)+(c,d). Since (e,f)∈ F_1×_h F_2, e=0 iff f=0, which implies (ev,fw)∈ F_1×_h F_2, with ev∈ ea+ec and fw∈ fb+fd. Therefore (x,y)=(ev,fw)∈(e,f)·(a,b)+(e,f)·(c,d), as desired.
Then (F_1×_h F_2,+,-,·,(0,0),(1,1)) is a hyperbolic hyperfield. Moreover we have projections π_1:F_1×_h F_2→ F_1, π_2:F_1×_h F_2→ F_2 given respectively by the rules π_1(x,y)=x, π_2(x,y)=y.
Finally, suppose that F is another hyperfield with morphisms p_1:F→ F_1, p_2:F→ F_2. Consider (p_1,p_2):F→ F_1×_h F_2 given by the rule
(p_1,p_2)(x)=(p_1(x),p_2(x)). It is immediate that (p_1,p_2) is the unique morphism satisfying π_1∘(p_1,p_2)=p_1 and π_2∘(p_1,p_2)=p_2,
so F_1×_h F_2 is the product in the category of hyperbolic hyperfields, completing the proof.
In order to avoid confusion and mistakes, we denote the binary product in ℋℳℱ by F_1×_hF_2. For hyperfields {F_i}_i∈ I, we denote the product of this family by
∏^h_i∈ IF_i,
with underlying set defined by
∏^h_i∈ IF_i:=(∏_i∈ IḞ_i)∪{(0_i)_i∈ I}
and operations defined by rules similar to the ones defined in <ref>. If I={1,...n}, we denote
∏^h_i∈ IF_i=∏^n_i=1
[h]F_i.
Note that if F_1 (or F_2) is not hyperbolic, then F_1×_h F_2 is not a hyperfield in general. Let F_1 be a field (considered as a hyperfield), for example F_1=ℝ, and let F_2 be another hyperfield. Then for a,b∈ F_2 we have
1-1={0} in F_1, so (1,a)+(-1,b)={0}×(a-b), and
[{0}×(a-b)]∩(F_1×_h F_2)=∅.
Let (G,≡,-1) be a special group and define M(G)=G∪{0} where 0:={G}[Here,
the choice of the zero element was ad hoc. Indeed, we can define 0:={x} for any x∉ G.]. Then
(M(G),+,-,·,0,1) is a hyperfield, where
* a· b = 0 if a=0 or b=0, and otherwise a· b is the product computed in G;
* -(a)=(-1)· a;
* a+b = {b} if a=0; {a} if b=0; M(G) if a=-b and a≠0; and D_G(a,b) otherwise.
The correspondence G↦ M(G) extends to an equivalence of categories M:𝒮𝒢→𝒮ℳℱ, from the category of special groups to the category of special multifields.
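To make the construction concrete, the sketch below (illustrative only) tabulates the multivalued sum of M(G) for the two-element special group {1,-1}, assuming its standard representation sets D(1,1)={1} and D(-1,-1)={-1}; the resulting table is exactly the hyperfield Q_2={-1,0,1} recalled earlier.

def m_add(a, b, G, D):
    # multivalued sum of M(G) = G ∪ {0}, following the recipe above
    if a == 0:
        return {b}
    if b == 0:
        return {a}
    if a == -b:
        return set(G) | {0}       # a + (-a) = M(G)
    return set(D[(a, b)])

G = (1, -1)
D = {(1, 1): {1}, (-1, -1): {-1}}  # assumed representation sets of the reduced example
for a in (0, 1, -1):
    for b in (0, 1, -1):
        print(a, b, m_add(a, b, G, D))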
A Dickmann-Miraglia multiring (or DM-multiring for short) [The name “Dickmann-Miraglia” is given in honor to professors Maximo Dickmann and Francisco Miraglia, the creators of the special group theory.] is a pair (R,T) such that R is a multiring, T⊆ R is a multiplicative subset of R∖{0}, and (R,T) satisfies the following properties:
DM0 R/_mT is hyperbolic.
DM1 If a≠0 in R/_mT, then a^2=1 in R/_mT. In other words, for all a∈ R∖{0},
there are r,s∈ T such that a^2r=s.
DM2 For all a∈ R, (1-a)(1-a)⊆(1-a) in
R/_mT.
DM3 For all a,b,x,y,z∈ R∖{0}, if
a∈ x+b and b∈ y+z in R/_mT,
then there exists v∈ x+z such that a∈ y+v and vb∈ xy+az in R/_mT.
If R is a ring, we just say that (R,T) is a DM-ring, or R is a DM-ring. A Dickmann-Miraglia hyperfield (or DM-hyperfield) F is a
hyperfield such that (F,{1}) is a DM-multiring (satisfies DM0-DM3). In other words, F is a DM-hyperfield if F is hyperbolic and for all
a,b,v,x,y,z∈ F^*,
* a^2=1.
* (1-a)(1-a)⊆(1-a).
* If a∈ x+b and b∈ y+z, then there exists v∈ x+z such that a∈ y+v and vb∈ xy+az.
[Theorem 3.4 of <cit.>]
Let (R,T) be a DM-multiring and denote
Sm(R,T)=(R/_mT).
Then Sm(R) is a special hyperfield (thus Sm(R,T)^× is a special group).
[Theorem 3.9 of <cit.>]
Let F be a hyperfield satisfying DM0-DM2. Then F satisfies DM3 if and only if satisfies SMF4. In other words, F is a DM-hyperfield if and only if it is a special hyperfield.
In this sense, we define the following category:
A pre-special hyperfield is a hyperfield satisfying DM0, DM1 and DM2. In other words, a pre-special hyperfield is a hyperbolic hyperfield F such that for all a∈Ḟ, a^2=1 and (1-a)(1-a)⊆1-a.
The category of pre-special hyperfields will be denoted by PSMF.
Let G be a pre-special group and consider (M(G),+,-,0,1), with operations defined by
* a· b = 0 if a=0 or b=0, and otherwise a· b is the product computed in G;
* -(a)=(-1)· a;
* a+b = {b} if a=0; {a} if b=0; M(G) if a=-b and a≠0; and D_G(a,b) otherwise.
Then M(G) is a pre-special multifield. Conversely, if F is a pre-special multifield then (Ḟ,≡_F,-1) is a pre-special group, where
⟨ a,b⟩≡_F⟨ c,d⟩ iff ab=cd and a∈ c+d.
We finish this section stating the following result established in <cit.>
[Arason-Pfister Hauptsatz]
Let F be a special hyperfield. Then AP_F(n) holds for all n ≥ 0. In more detail: for each n ≥ 0 and each non-empty (k≥ 1), regular (a_i ∈Ḟ) and anisotropic form φ = ⟨ a_1,⋯, a_k ⟩, if φ∈ I^n(F), then dim(φ)≥2^n.
§.§ The K-theory for Multifields/Hyperfields
In this section we describe the notion of K-theory of a hyperfield, introduced in <cit.> by essentially repeating the construction in <cit.> replacing the word “field” by “hyperfield” and explore some of this basic properties. Apart from the obvious resemblance, more technical aspects of this new theory can be developed (but with other proofs) in multistructure setting in parallel with classical K-theory.
For a hyperfield F, K_*F is the graded ring
K_*F=(K_0F,K_1F,K_2F,...)
defined by the following rules: K_0F:=ℤ. K_1F is the multiplicative group Ḟ written additively.
With this purpose, we fix the canonical “logarithm” isomorphism
ρ:Ḟ→ K_1F,
where ρ(ab)=ρ(a)+ρ(b). Then K_nF is defined to be the quotient of the tensor algebra
K_1F⊗ K_1F⊗...⊗ K_1F (n factors)
by the (homogeneous) ideal generated by all ρ(a)⊗ρ(b), with a, b≠ 0 and b∈1-a.
In other words, for each n≥2,
K_nF:=T^n(K_1F)/Q^n(K_1(F)),
where
T^n(K_1F):=K_1F⊗_ℤ K_1F⊗_ℤ...⊗_ℤ K_1F
and Q^n(K_1(F)) is the subgroup generated by all expressions of type ρ(a_1)⊗ρ(a_2)⊗...⊗ρ(a_n) such that a_i+1∈1-a_i for some i with 1≤ i ≤ n-1.
To avoid carrying the overline symbol, we will adopt all the conventions used in Dickmann-Miraglia's K-theory (<cit.>). Just as it happens with the previous K-theories, a generic element η∈ K_nF has the pattern
η=ρ(a_1)⊗ρ(a_2)⊗...⊗ρ(a_n)
for some a_1,...,a_n∈Ḟ, with a_i+1∈1-a_i for some 1≤ i< n. Note that if F is a field, then “b∈1-a” just means
b=1-a, and the hyperfield and Milnor's K-theory for F coincide.
The very first task, is to extend the basic properties valid in Milnor's and Dickmann-Miraglia's K-theory to ours. Here we already need to
restrict our attention to hyperbolic hyperfields (ℋℳℱ):
Let F be an hyperbolic hyperfield. Then
* ρ(1)=0.
* For all a∈Ḟ, ρ(a)ρ(-a)=0 in K_2F.
* For all a,b∈Ḟ, ρ(a)ρ(b)=-ρ(b)ρ(a) in K_2F.
* For every a_1,...,a_n∈Ḟ and every permutation σ∈ S_n,
ρ(a_σ 1)...ρ(a_σ n)= sign(σ)ρ(a_1)...ρ(a_n) in K_nF.
* For every ξ∈ K_mF and η∈ K_nF, ηξ=(-1)^mnξη in K_m+nF.
* For all a∈Ḟ, ρ(a)^2=-ρ(a)ρ(-1).
* Is an immediate consequence of the fact that ρ is an isomorphism.
* Since F hyperbolic, 1-1=F. Then -a^-1∈1-1 for all a∈Ḟ, and hence, -1∈-1+a^-1. Multiplying this by
a, we get -a∈1-a. By definition, this imply ρ(a)ρ(-a)=0.
* By item (b), ρ(ab)ρ(-ab)=0 in K_2F. But
ρ(ab)ρ(-ab) =ρ(a)ρ((-a)b)+ρ(b)ρ((-b)a)
=ρ(a)ρ(-a)+ρ(a)ρ(b)+ρ(b)ρ(-b)+ρ(b)ρ(a)
=ρ(a)ρ(b)+ρ(b)ρ(a).
From ρ(a)ρ(b)+ρ(b)ρ(a)=ρ(ab)ρ(-ab)=0, we get the desired result ρ(a)ρ(b)=-ρ(b)ρ(a) in K_2F.
* This is a consequence of item (c) and an inductive argument.
* This is a consequence of item (d) and an inductive argument, using the fact that an element in K_nF has a pattern
η=ρ(a_1)⊗ρ(a_2)⊗...⊗ρ(a_n)
for some a_1,...,a_n∈Ḟ, with a_i+1∈ 1-a_i for some 1≤ i< n.
* Direct consequence of item (a).
An element a∈Ḟ induces a morphism of graded rings ω^a={ω^a_n}_n≥1:K_*F→ K_*F of degree 1, where ω^a_n:K_nF→ K_n+1F is the multiplication by ρ(a). When a=-1, we write
ω={ω_n}_n≥1={ω^-1_n}_n≥1=ω^-1.
Let F,K be hyperbolic hyperfields and φ:F→ L be a morphism. Then φ induces a morphism of graded rings
φ_*={φ_n:n≥0}:K_*F→ K_*L,
where φ_0=Id_ℤ and for all n≥1, φ_n is given by the following rule on generators
φ_n(ρ(a_1)...ρ(a_n))=ρ(φ(a_1))...ρ(φ(a_n)).
Moreover if φ is surjective then φ_* is also surjective, and if ψ:L→ M is another morphism then
* (ψ∘φ)_*=ψ_*∘φ_* and Id_*=Id.
* For all a∈Ḟ, the following square commutes: φ_n+1∘ω^a_n=ω^φ(a)_n∘φ_n : K_nF→ K_n+1L.
*
For all n≥1, the following square commutes: φ_n+1∘ω^-1_n=ω^-1_n∘φ_n : K_nF→ K_n+1L.
Firstly, note that φ extends to a function φ_1:K_1F→ K_1L given by the rule
φ_1(ρ(a))=ρ(φ(a)).
Certainly φ_1 is a morphism because
φ_1(0)=φ_1(ρ(1))=ρ(φ(1))=ρ(1)=0,
and for all ρ(a),ρ(b)∈ K_1F,
φ_1(ρ(a)+ρ(b))=φ_1(ρ(ab))=ρ(φ(ab))=ρ(φ(a)φ(b))=ρ(φ(a))+ρ(φ(b)).
Proceeding inductively, for all n≥1 we extend φ to a function
φ_n:∏^n_i=1K_1F→ K_nL given by the rule
φ(ρ(a_1),...,ρ(a_n)):=φ_1(ρ(a_1))...φ_1(ρ(a_n))=ρ(φ(a_1))...ρ(φ(a_n)).
Then if i=1,...,n and b_i∈ k_1F we have
φ_n(ρ(a_1),...,ρ(a_i)+ρ(b_i),...,ρ(a_n))=φ_n(ρ(a_1),...,ρ(a_ib_i),...,ρ(a_n))=
ρ(φ(a_1))...ρ(φ(a_ib_i))...ρ(φ(a_n))=ρ(φ(a_1))...ρ(φ(a_i)φ(b_i))...ρ(φ(a_n))=
ρ(φ(a_1))...[ρ(φ(a_i))+ρ(φ(b_i))]...ρ(φ(a_n))=
ρ(φ(a_1))...ρ(φ(a_i))...ρ(φ(a_n))+ρ(φ(a_1))...ρ(φ(b_i))...ρ(φ(a_n))=
φ_n(ρ(a_1),...,ρ(a_i),...,ρ(a_n))
+φ_n(ρ(a_1),...,ρ(b_i),...,ρ(a_n)),
then for each n, φ_n:∏^n_i=1K_1F→ K_nL is multilinear and by the universal property of tensor product there is an unique morphism
φ̃_n:⊗^n_j=1K_1F→ K_nL
extending φ_n. By construction (and using the fact that φ is a morphism), φ̃_n vanishes on Q^n(K_1(F)), which provides a unique morphism
φ_n:T^n(K_1F)/Q^n(K_1(F))→ K_nL such that φ̃_n=φ_n∘π_n, where π_n is the canonical projection of T^n(K_1F) onto the quotient. Then taking φ_0=Id_ℤ, we get a morphism φ_*:K_*F→ K_*L, given by φ_*={φ_n:n≥0}.
For items (a) and (b), it is enough to note that these properties hold for φ̃_n, n≥0, and after passing to the quotient we get their validity for φ_n (since φ̃_n=φ_n∘π_n).
Item (c) follows by the same argument as items (a) and (b), noting that φ(1)=1 implies φ(-1)=-1. By abuse of notation, we denote
φ_*={φ_n:n≥0}={φ_n:n≥0}.
We also have the reduced K-theory graded ring k_*F=(k_0F,k_1F,...,k_nF,...) in the hyperfield context, which is defined by the rule k_nF:=K_nF/2K_nF for all n≥0. Of course
for all n≥0 we have an epimorphism q:K_nF→ k_nF simply denoted by q(a):=[a], a∈ K_nF. It is immediate that k_nF is additively generated by {[ρ(a_1)]..[ρ(a_n)]:a_1,...,a_n∈Ḟ}. We simply denote such a generator by ρ̃(a_1)...ρ̃(a_n) or even ρ(a_1)...ρ(a_n) whenever the context allows it.
We also have some basic properties of the reduced K-theory, which proof is just a translation of 2.1 of <cit.>:
Let F be a hyperbolic hyperfield, x,y,a_1,...,a_n∈Ḟ and σ be a permutation on n elements.
* In k_2F, ρ(a)^2=ρ(a)ρ(-1). Hence in k_mF,
ρ(a)^m=ρ(a)ρ(-1)^m-1, m≥2;
* In k_2F, ρ(a)ρ(b)=ρ(b)ρ(a);
* In k_nF,
ρ(a_1)ρ(a_2)...ρ(a_n)=ρ(a_σ 1)ρ(a_σ
2)...ρ(a_σ n);
* For n≥1 and ξ∈ k_nF, ξ^2=ρ(-1)^nξ;
* If F is a real reduced hyperfield, then x∈1+y and ρ(y)ρ(a_1)...ρ(a_n)=0 implies
ρ(x)ρ(a_1)ρ(a_2)...ρ(a_n)=0.
Moreover the results in Proposition <ref> continue to hold if we took φ_*={φ_n:n≥0}:k_*F→ k_*L.
Let F be a (hyperbolic) hyperfield and T⊆ F be a multiplicative subset such that F^2⊆ T. Then, for each n ≥ 1
K_n(F/_m T^*)≅ k_n(F/_mT^*).
Since F^2⊆ T, for all a∈ (F/_mT^*)^× we have
0= ρ(1) = ρ(a^2)=ρ(a)+ρ(a).
Then, for each n ≥ 1, 2K_n(F/_mT^*)=0 and we get K_n(F/_m T^*)≅ k_n(F/_mT^*).
Let F be a hyperbolic hyperfield and T⊆ F be a multiplicative subset such that F^2⊆ T. Then there is an induced surjective morphism
k(F)→ k(F/_mT^*).
Moreover, if T = F^2, then the induced morphism
k(F)→ k(F/_mḞ^2)
is an isomorphism.
§ INDUCTIVE GRADED RINGS: AN ABSTRACT APPROACH
After the three K-theories discussed in the sections above, it is desirable (or, at least, suggestive) to have an abstract environment that encapsulates all of them and, of course, provides an axiomatic approach to guide new extensions of the concept of K-theory in the context of the algebraic and abstract theories of quadratic forms. Inductive graded rings fit this purpose. Here we will present three versions. The first one is:
An inductive graded ring (or Igr for short) is a structure R=((R_n)_n≥0,(h_n)_n≥0,∗_nm) where
* R_0≅𝔽_2.
* R_n has a group structure (R_n,+,0,⊤_n) of exponent 2 with a distinguished element ⊤_n.
* h_n:R_n→ R_n+1 is a group homomorphism such that h_n(⊤_n)=⊤_n+1.
* For all n≥1, h_n=∗_1n(⊤_1,_).
* The binary operations ∗_nm : R_n × R_m → R_n+m, n, m ∈ℕ induces a commutative ring structure on the abelian group
R=⊕_n≥0R_n
with 1=⊤_0.
* For 0≤ s≤ t define
h^t_s=Id_R_s if s=t, and h^t_s=h_t-1∘...∘ h_s+1∘ h_s if s<t.
Then if p≥ n and q≥ m, for all x∈ R_n and y∈ R_m,
h^p_n(x)∗ h^q_m(y)=h^p+q_n+m(x∗ y).
A morphism between Igr's R and S is a pair f=(f,(f_n)_n≥0) where f_n:R_n→ S_n is a morphism of pointed groups and
f=⊕_n≥0f_n:R→ S
is a morphism of commutative rings with unity. The category of inductive graded rings (in this first version) and their morphisms will be denoted by Igr.
A first consequence of these definitions is that: if
f:((R_n)_n≥0,(h_n)_n≥0,∗_nm)→ ((S_n)_n≥0,(l_n)_n≥0,∗_nm)
is a morphism of Igr's then f_n+1∘ h_n=l_n∘ f_n.
(equivalently, the ladder diagram built from the maps h_n, l_n and f_n commutes).
In fact, since R_0 ≅𝔽_2 ≅ S_0 and f(1) =1, then f_0:R_0→ S_0 is the unique abelian group isomorphism and f_1∘ h_0=l_0∘ f_0. If n≥1, for all a_n∈ R_n holds
f_n+1∘ h_n(a_n) =f_n+1∘(∗_1n(⊤_1,a_n))=f_1(⊤_1)∗_1nf_n(a_n)
=⊤_1∗_1nf_n(a_n)=l_n(f_n(a_n))=l_n∘ f_n(a_n).
* Let F be a field of characteristic not 2. The main actors here are WF, the Witt ring of F, and IF, the fundamental ideal of WF. It is well known that I^nF, the n-th power of IF, is additively generated by the n-fold Pfister forms over F. Now, let R_0=WF/IF≅𝔽_2
and R_n=I^nF/I^n+1F. Finally, let h_n be multiplication by the form ⟨1,1⟩. With these prescriptions we have an inductive graded ring R associated to F.
* The previous example still works if we change the Witt ring of a field F for the Witt ring of a (formally real) special group G.
Concerning k-theories, we register the followings:
* Let F be a field. Then k^mil_*F (the reduced Milnor K-theory) is an inductive graded ring.
* Let G be a special group. Then k^dm_*G (the Dickmann-Miraglia K-theory of G) is an inductive graded ring.
* Let F be a hyperbolic hyperfield. Then k^mult_*F (our reduced K-theory) is an inductive graded ring.
[Theorem 2.5 in <cit.>]
Let F be a field. The functor G:Field_2→ SG provides a functor k'^dm_*:Field_2→ (the special group
K-theory functor) given on the objects by k'^dm_*(F):=k^dm_*(G(F)) and on the morphisms f:F→ K by k'^dm_*(f):=G(f)_* (in the sense of Lemma 3.3 of <cit.>). Moreover, this functor commutes with the functors G and k, i.e, for all F∈ Field,
k'^dm_*(F) = k^dm_*(G(F))≅ k^mil_*(F).
Let G be a special group. The equivalence of categories M:SG→ SMF induces a functor k'^mult_*:SG→ given on the objects by k'^mult_*(G):=k^mult_*(M(G)) and on the morphisms f:G→ H by
k'^mult_*(f):=k^mult_*(M(f)). Moreover, this functor commutes with M and k^dm, i.e, for all G∈ SG, k'^mult_*(G) = k^mult_*(M(G))≅ k^dm_*(G).
[Interchanging K-theories Formulas]
Let F∈ Field_2. Then
k^mil(F)≅ k^dm(G(F))≅ k^mult(M(G(F))).
If F is formally real and T is a preordering of F, then
k^dm(G_T(F))≅ k^mult(M(G_T(F))).
Moreover, since M(G(F))≅ F/_mḞ^2 and M(G_T(F))≅ F/_mT^*, we get
k^mil(F) ≅ k^dm(G(F))≅ k^mult(F/_mḞ^2)
k^dm(G_T(F)) ≅ k^mult(F/_mT^*).
There is an alternative definition of Igr with a first-order theoretic flavor. It is a technical framework that allows achieving some model-theoretic results.
Before defining it, we need some preparation. First of all, we set up the language. Here, we will work in the poli-sorted framework (as established in chapter 5 of <cit.>), which means the following:
Let S be a set (of sorts). For each s∈ S assume a countable set _s of variables of sort s (with the convention that if s≠ t then _s∩_t=∅). For each sort s∈ S, an equality symbol =_s (or just =); the connectives , ∧, ∨, → (not, and, or, implies); the quantifiers ∀, ∃ (for all, there exists).
A finitary S-sorted language (or signature) is a set ℒ=(𝒞,ℱ,ℛ) where:
* 𝒞 is the set of constant symbols. For each c∈𝒞 we assign an element s∈ S, the sort of c;
* ℱ is the set of functional symbols. For each f∈ℱ we assign elements s,s_1,...,s_n∈ S, we say that f has arity s_1×...× s_n and s is the value sort of f; and we use the notation f:s_1×...× s_n→ s.
* ℛ is the set of relation symbols. For each R∈ℛ we assign elements s_1,...,s_n∈ S, the arity of R; and we say that R has arity s_1×...× s_n.
A ℒ-structure ℳ is, in this sense, prescribed by the following data:
* The domain or universe of ℳ, which is an S-sorted set |ℳ|:=(M_s)_s∈ S.
* For each constant symbol c∈𝒞 of arity s, an element c^ℳ∈ M_s.
* For each functional symbol f∈ℱ, f:s_1×...× s_n→ s, a function
f^ℳ:M_s_1×...× M_s_n→ M_s.
* For each relation symbol R∈ℛ of arity s_1×...× s_n a relation, i.e. a subset R^ℳ⊆ M_s_1×...× M_s_n.
A ℒ-morphism φ:ℳ→𝒩 is a sequence of functions φ = (φ_s)_s :|ℳ|→|𝒩| such that
* for all c∈𝒞 of arity s, φ_s(c^ℳ)=c^𝒩;
* for all f:s_1×...× s_n→ s, if (a_1,...,a_n)∈ :M_s_1×...× M_s_n, then φ_s(f^ℳ(a_1,...,a_n))=f^𝒩(φ_s_1(a_1),...,φ_s_n(a_n));
* for all R of arity s_1×...× s_n, if (a_1,...,a_n)∈ R^ℳ then (φ(a_1),...,φ(a_n))∈ R^𝒩.
The category of ℒ-structures and ℒ-morphism in the poli-sorted language ℒ will be denoted by _s(ℒ).
The terms, formulas, occurrence and free variables definitions for the poli-sorted case are similar to the usual (single-sorted) first order ones. For example, the terms are defined as follows:
* variables x∈_s and constants c∈ C_s are terms of value sort s;
* if s⃗=⟨ s_1,...,s_n,s⟩∈ S^n+1, f∈ℱ with f:s_1×...× s_n→ s, and τ_1,...,τ_n are terms of value sorts s_1,...,s_n respectively, then f(τ_1,...,τ_n) is a term of sort s.
As usual, we may write τ : s to indicate that the term τ has value sort s.
For the formulas:
* if x,y∈_s then x=y is a formula; if s⃗=⟨ s_1,...,s_n⟩∈ S^n, R∈ℛ of arity s_1×...× s_n and τ_1,...,τ_n are terms of sort s_1,...,s_n respectively, then R(τ_1,...,τ_n) is a formula. These are the atomic formulas.
* If φ_1,φ_2 are formulas, then φ_1, φ_1∧φ_2, φ_1∨φ_2 and φ_1→φ_2 are formulas.
* If φ is a formula and x∈_s (s∈ S), then ∀ x φ and ∃ x φ are formulas.
In our particular case, the set of sorts will be just ℕ. Then, for each n,m≥0, we set the following data:
* 0_n,⊤_n are constant symbols of arity n. We use 0_0=0 and ⊤_0=1.
* +_n:n× n→ n is a binary operation symbol.
* h_n:n→(n+1) and ∗_n,m:n× m→(n+m) are functional symbols.
The (first order) language of inductive graded rings ℒ_igr is just the following language (in the poli-sorted sense):
ℒ_igr:=
{0_n,⊤_n,+_n,h_n,∗_nm:n,m≥0}.
The (first order) theory of inductive graded rings T(ℒ_igr) is the ℒ_igr-theory axiomatized by the following ℒ_igr-sentences, where we use ·_n:0× n→ n as an abbreviation for ∗_0n:
* For n≥0, sentences saying that “+_n,0_n,⊤_n induces a pointed left 𝔽_2-module”:
∀ x:n∀ y:n∀ z:n((x+_ny)+_nz=x+_n(y+_nz))
∀ x:n(x+_n0_n=x)
∀ x:n∀ y:n(x+_ny=y+_nx)
∀ x:n(x+_nx=0_n)
∀ x:n(1·_n x=x)
∀ x:n∀ y:n∀ a:0(a·_n(x+_ny)=a·_n x+_na·_n y)
∀ x:n∀ a:0∀ b:0((a+_0b)·_n x=a·_n x+_nb·_n x)
* For n≥0, sentences saying that “h_n is a pointed 𝔽_2-morphism”:
∀ x:n∀ y:n(h_n(x+_ny)=h_n(x)+_n+1h_n(y))
∀ x:n∀ a:0(h_n(a·_n x)=a·_n h_n(x))
h_n(⊤_n)=⊤_n+1
* Sentences saying that “R_0≅𝔽_2”:
0_0 ≠⊤_0
∀ x:n(x=0_0∨ x=⊤_0)
* Using the abbreviation ∗_n,m(x,y)=x∗_n,my, we write for n,m≥0 sentences saying that “∗_n,m is a biadditive function compatible with h_n”:
∀ x:n∀ y:n∀ z:m(((x+_ny)∗_nmz)=(x∗_mnz+_n+my∗_nmz))
∀ x:n∀ y:m∀ z:m((x∗_mn(y+_mz))=(x∗_nmy+_n+mx∗_nmz))
∀ x:n∀ y:m (h_n+m(x∗_nmy) = h_n(x)∗_nm h_m(y))
* Sentences describing “the induced ring with product induced by ∗_n,m, n,m≥0”:
∀ x:n∀ y:m∀ z:p((x∗_n,my)∗_(m+n),pz=x∗_n,(m+p)(y∗_m,p z))
∀ x:n∀ y:m(x∗_n,my=y∗_m,nx)
* For n≥1, sentences saying that “h_n=⊤_1∗_1n_”:
∀ x:n(h_n(x)=⊤_1∗_1nx)
Now we are in a position to define another version of Igr:
An inductive graded ring (or Igr for short) is a model of T(ℒ_igr); in other words, an ℒ_igr-structure ℛ such that ℛ⊨_ℒ_igr T(ℒ_igr). We denote by Igr_2 the category of the ℒ_igr-structures that are models of T(ℒ_igr), together with the ℒ_igr-morphisms between them.
Again, after some straightforward calculations we can check:
The categories Igr and Igr_2 are equivalent.
Following a well-known procedure, it is possible to correspond theories on poly-sorted first-order languages with theories on traditional (single-sorted) first-order languages in such a way that the corresponding categories of models are equivalent. This allows a useful interchanging between model-theoretic results, in both directions. In particular, in the following, we will freely interchange the three notions of Igr indicated in this section.
Theorem <ref> gives a hint that the category of Igr is a good abstract environment for studying questions of "quadratic flavour". So a better understanding of categories of Igr's and its applications to quadratic forms theories is the main purpose of the next sections in this work.
§ THE FIRST PROPERTIES OF IGR
In this section we discuss the theory of Igr's. Constructions like products, limits, colimits, ideals, quotients, kernel and image are not new and are obtained in a very straightforward manner (basically, putting those structures available for rings in a "coordinatewise" fashion), then in order to gain speed, we will present these facts leaving more detailed proofs to the reader.
Denote by p𝔽_2-mod the category of pointed 𝔽_2-modules, by Ring the category of commutative rings with unity and the morphisms that preserve these units, and by Ring_2 the full subcategory of the associative 𝔽_2-algebras. We have a functorial correspondence Ring_2 → Igr, given by the following assignment:
A ↦ the graded ring which is 𝔽_2 in degree 0 and A in each degree n≥1, with h_0=! the unique unital map 𝔽_2→ A and h_n=id for n≥1; a morphism f:A→ B is sent to (id_𝔽_2,f,f,...).
Here A is a p𝔽_2-mod where ⊤_n =1, n ≥ 1 and ⊤_0 =1 ∈𝔽_2.
The trivial graded ring functor 𝕋:Ring_2→ Igr is the functor defined for f:A→ B by T(A)_0:=𝔽_2, T(f)_0:=id_𝔽_2 and for all n≥1 we set T(A)_n=A and T(f)_n:=f.
We define the associated 𝔽_2-algebra functor 𝔸:Igr→ Ring_2 as the functor defined for f:R→ S by
𝔸(R):=R_𝔸=_n≥0R_n and 𝔸(f)=f_𝔸:=_n≥0f_n.
More explicitly, 𝔸(R)=(R_𝔸,0,1,+_𝔸,·), where
* R_𝔸=_n≥0R_n,
* 0=[(0,0)] and 1=[(1,0)],
* given [(a_n,n)],[(b_m,m)]∈ R_𝔸 and setting d≥ m,n we have
[(a_n,n)]+[(b_m,m)]=[(h_nd(a_n)+h_md(b_m),d)]
* given [(a_n,n)],[(b_m,m)]∈ R_𝔸, we have
[(a_n,n)]·[(b_m,m)]=[(a_n∗_nmb_m,n+m)].
* The functor 𝔸 is the left adjunct to 𝕋.
* The functor 𝕋 is full and faithful.
* The composite functor 𝔸∘𝕋 is naturally isomorphic to the functor 1__2.
Let R∈ Igr. We have
𝕋(𝔸(R))=𝕋(_m≥0R_m).
In other words, for all n≥1
𝕋(_m≥0R_m)_n:= _m≥0R_m.
Then, for all n≥1 we have a canonical embedding
η(R)_n:R_n→_m≥0R_m=𝕋(_m≥0R_m)_n,
providing a morphism
η(R):R→_m≥0R_m=𝕋(_m≥0R_m).
For f∈(R,S), taking n≥1 we have a commutative diagram
expressing that (_m≥0f_m)∘η(R)_n=η(S)_n∘ f_n,
with the convention that η(R)_0=id_𝔽_2. Then it is legitimate to define a natural transformation η:1_Igr→𝕋∘𝔸 given by the rule R↦η(R).
Now let A∈ Ring_2 and g∈ Ring_2(R,𝕋(A)). Then for each n≥0, there is a morphism g_n:R_n→𝕋(A)_n=A and by the universal property of inductive limit we get a morphism
_m≥0g_n:_m≥0R_m→ A.
In fact, _m≥0g_n=𝔸(g).
Now, using the fact that η(R)_n is the morphism induced by the inductive limit we have for all n≥0 the following commutative diagram
i.e., (_m≥0g_m)∘η(R)_n=g_n.
In other words, η(R)_n is the canonical morphism satisfying 𝕋(𝔸(g))_n∘η(R)_n=g_n,
and hence, 𝔸 is the left adjoint of 𝕋, proving item (i). By the very definition of 𝔸 and 𝕋 we get item (iii), and using Proposition <ref> we get item (ii).
Using Proposition <ref> (and its dual version) we get the following Corollary.
* 𝕋:_2→ preserves all projective limits.
* If I is such that is I-inductively complete then for {A_i}_i∈ I in we have
_i∈ IA_i≅𝔸(_i∈ I𝕋(A_i)).
* 𝔽_2∈_2 is the initial object in _2.
* 0∈_2 is the terminal object in _2.
* 𝕋(𝔽_2) is the initial object in .
* 𝕋(0) is the terminal object in .
Now we discuss (essentially) the limits and colimits in Igr. Fix a non-empty set I and let {(R_i,⊤_i,h_i)}_i∈ I be a family of Igr's. We start with the construction of the Igr-product
R=∏_i∈ IR_i.
For this, we define R_0≅𝔽_2 and for all n≥1, we define
R_n:=∏_i∈ I(R_i)_n⊤_n:=∏_i∈ I(⊤_i)_n.
In the sequel, we define h_0:𝔽_2→ R_1 as the only possible morphism and for n≥1, we define h_n:R_n→ R_n+1 by
h_n:=∏_i∈ I(h_i)_n.
* The space of orderings, X_R, of the Igr R, is the set of Igr-morphisms Igr(R, 𝕋(𝔽_2). By the Proposition <ref>.(i), we have a natural bijection Igr(R, 𝕋(𝔽_2) ≅ Ring_2(𝔸(R), 𝔽_2), thus considering the discrete topologies on the 𝔽_2-algebras 𝔸(R), 𝔽_2) and transporting the boolean topology in Ring_2(𝔸(R), 𝔽_2), we obtain a boolean topology on the space of orderings X_R = Igr(R, 𝕋(𝔽_2)).
* The boolean hull, B(R), of the Igr R, is the boolean ring canonically associated to the space of orderings of R by Stone duality: B(R) := 𝒞(X_R, 𝔽_2).
* A Igr R is called formally real if X_R ≠∅ (or, equivalently, if B(R) ≠ 0).
Let I be a non-empty set and {(R_i,h_i)}_i∈ I be a family of Igr's. Then
R=∏_i∈ IR_i
with the above rules is an Igr. Moreover it is the product in the category .
Using Definition <ref> is straightforward to verify that (R,⊤_n,h_n) is an Igr. Note that for each i∈ I, we have an epimorphism π_i:R→ R_i given by the following rules: for each n≥0 and each (x_i)_i∈ I∈ R_n, we define
(π_i)_n((x_i)_i∈ I):=x_i.
Now, let (Q,{q_i}_i∈ I) be another pair with Q being an Igr and q_i:Q→ R_i being a morphism for each i∈ I. Given i∈ I and n≥0, since R_n:=∏_i∈ I(R_i)_n is the product in the category of pointed 𝔽_2-modules, we have an unique morphism (q)_n:(Q)_n→(R)_n such that (π_i)_n∘(q)_n=(q_i)_n. Set q_n:= ((q_i)_i ∈ I)_n. By construction, q is the unique Igr-morphism such that π_i∘ q=q_i, completing the proof that R is in fact the product in the category .
* Let R be an Igr and let X ⊆ R =
n∈ℕ⊕ R_n. Then there exists the inductive graded subring generated by X (notation : [X]i_X↪ R): this is the least inductive graded subring of R such that ∀ n ∈ℕ, X∩R_n⊆ [ X ]_n.
* Let ℐ be a small category and ℛ : ℛ→ Igr be a diagram. Then there exists _i∈ℐℛ_i in the category Igr.
* It is enough consider S_X , the 𝔽_2-subalgebra of (⊕_n ∈ℕR_n,∗) generated by X∪{⊤_1}⊆⊕_n ∈ℕR_n and set ∀ n ∈ℕ, [X]_n := s_x∩R_n.
* Just define _i∈ℐℛ_i as the inductive graded subring of ∏_i ∈ obj(ℐ)ℛ_i generated by X_D = ⊕_n ∈ℕ X_n and X_n := _i ∈ℐ (ℛ_i)_n (projective limit of pointed 𝔽_2-algebras).
Now we construct the Igr-tensor product of a finite family of Igr's, {R_i : i ∈ I}
R=⊗_i∈ IR_i.
For this, we define R_0≅𝔽_2 and for all n≥1, we define
R_n:=⊗_i∈ I(R_i)_n,
(⊗_i ∈ I a_i) ∗_n,k (⊗_i ∈ I b_i) := ⊗_i ∈ I (a_i ∗^i_n,k b_i)
⊤_n:=⊗_i∈ I(⊤_i)_n.
In particular, if I = ∅, then R_n = {0}, n ≥ 1. In the sequel, we define h_0:𝔽_2→ R_1 as the only possible morphism and for n≥1, we define h_n:R_n→ R_n+1 by
h_n:=⊗_i∈ I(h_i)_n.
In other words, for a generator ⊗_i∈ Ix_i∈ R_n, we have
h_n(⊗_i∈ Ix_i):=⊗_i∈ I(h_i)_n(x_i).
Let I be a finite set and {(R_i,h_i)}_i∈ I be a family of Igr's. Then
R=⊗_i∈ IR_i
with the above rules is an Igr. Moreover it is the coproduct in the category .
Now suppose that (I, ≤) is an upward directed poset and that ((R_i,h_i),φ_ij)_i≤ j∈ I is an inductive system of Igr's. We define the inductive limit
R=_i∈ IR_i
by the following: for all n≥0 define
R_n:=_i∈ I(R_i)_n.
Note that
R_0:=_i∈ I(R_i)_0≅_i∈ I𝔽_2≅𝔽_2.
In the sequel, for n≥1 we define h_n:R_n→ R_n+1 by
h_n:=_i∈ I(h_i)_n.
Let (I, ≤) is an upward directed poset and ((R_i,h_i),φ_ij)_i∈ I be a directed family of Igr's. Then
R=_i∈ IR_i
with the above rules is an Igr. Moreover, it is the inductive limit in the category .
The general coproduct (general tensor product) of a family {R_i :i ∈ I} in the category Igr is given by the combination of constructions:
⊗_i∈ IR_i := _I' ∈ P_fin(I)⊗_i∈ I'R_i.
After discussing directed inductive colimits and coproducts, we will deal with ideals, quotients, and coequalizers.
Given R∈ and (J_n)_n≥0 where J_n⊆ R_n for all n≥0. We say that J is a graded ideal of R where
J:=⊕_n≥0J_n⊆⊕_n≥0R_n
is an ideal of (R,∗).
In particular, for all n≥0, J_n⊆ R_n is a graded 𝔽_2-submodule of (R_n,+_n,0_n). For each X⊆ R, there exists the ideal generated by X, denoted by ⟨ X⟩. It is the smaller graded ideal of R such that for all n≥0, (X∩ R_n)⊆[X]_n. For this, just consider ⟨ X⟩, the ideal of (R,∗) generated by X⊆ R and define ⟨ X⟩_n:=⟨ X⟩∩ R_n.
Let R,S be Igr's and f:R→ S be a morphism. We define the kernel of f, denoted Ker(f), by
Ker(f)_n:={ x∈ R_n:f_n(x)=0}
and the image of f, denoted Im(f), by
Im(f)_n:={ f_n(x)∈ S_n:x∈ R_n}.
Of course, Ker(f)⊆ R is an ideal and Im(f)⊆ S is an Igr.
Given R∈ and J=(J_n)_n≥0 a graded ideal of R, we define R/J∈, the quotient inductive graded ring of R by J: for all n≥0, (R/J)_n:=R_n/J_n, where the distinguished element is ⊤_n+_nJ_n. We have a canonical projection q_J:R→ R/J, “coordinatewise surjective” and therefore, an -epimorphism.
Let R,S be Igr's and f:R→ S be a morphism. Then there exists a unique monomorphism f̄:R/Ker(f)→ S such that f=f̄∘ q,
where q is the canonical projection. In particular R/Ker(f)≅ Im(f).
Let f,g:R⇉ S be Igr-morphisms and consider q_J : S → S/J the quotient morphism, where J := ⟨ X ⟩ is the graded ideal generated by X_n := { f_n(a) - g_n(a): a ∈ R_n}, n ∈ℕ. Then q_J is the coequalizer of f and g.
Given R,S∈ and f∈(R,S).
* f is a -monomorphism whenever for all n≥0 f_n:R_n→ S_n is a monomorphism of pointed 𝔽_2-modules iff for all n≥0, f_n:R_n→ S_n is an injective homomorphism of pointed 𝔽_2-modules.
* f is a -epimorphism whenever for all n≥0 f_n:R_n→ S_n is a epimorphism of pointed 𝔽_2-modules iff for all n≥0, f_n:R_n→ S_n is a surjective homomorphism of pointed 𝔽_2-modules.
* f is a -isomorphism iff for all n≥0 f_n:R_n→ S_n is a isomorphism of pointed 𝔽_2-modules iff for all n≥0, f_n:R_n→ S_n is a bijective homomorphism of pointed 𝔽_2-modules.
We denote by Igr_fin the full subcategory of Igr such that
obj(Igr_fin)={ R∈ obj(Igr):| R_n|<ω for all n≥1}.
Of course,
{ R∈ obj(Igr):|⊕_n≥1R_n|<ω}⊊ obj(Igr_fin);
for example, in <ref>(a), if F is a Euclidean field (for instance, any real closed field), then ⊕_n ∈ℕI^nF/I^n+1F ≅𝔽_2[x], thus the graded Witt ring of F (see Definition <ref>) W_*(F)∈ obj(Igr_fin) but 𝔽_2[x] is not finite.
§ RELEVANT SUBCATEGORIES OF IGR
The aim of this section is to define subcategories of Igr that mimetize the following two central aspects of K-theories:
* The K-theory graded ring is "generated" by K_1;
* The K-theory graded ring is defined by some convenient quotient of a graded tensor algebra.
Our desired category will be the intersection of two subcategories. The first one is obtained after we define the graded subring generated by the level 1 functor
1:Igr→ Igr.
We define it as follow: for an object R=((R_n)_n≥0,(h_n)_n≥0,∗_nm),
* 1(R)_0:=R_0≅𝔽_2,
* 1(R)_1:=R_1,
* for n≥2,
1(R)_n :={x∈ R_n:x=∑^r_j=1a_1j∗_11...∗_11a_nj,
a_ij∈ R_1, 1≤ i≤ n, 1≤ j≤ rr≥1}.
Note that for all n≥2, R_n is generated by the expressions of type
d_1∗_11d_2∗_11...∗_11d_n, d_i∈ R_1, i=1,...,n.
Of course, 1(R) provides an inclusion ι_1(R):1(R)→ R in the obvious way.
On the morphisms, for f∈(R,S), we define 1(f)∈(1(R),1(S)) by the restriction 1(f)=f↿_1(R). In other words, 1(f) is the only Igr-morphisms that makes the following diagram commute:
f∘ι_1(R)=ι_1(S)∘1(f).
We denote by Igr_1 the full subcategory of Igr such that
obj(Igr_1)={R∈ Igr:ι_1(R):1(R)→ R is an isomorphism}.
* If A is a 𝔽_2-algebra, then 𝕋(A) ∈ obj(_1).
* If F is an hyperbolic hyperfield, then k_*(F) ∈ obj(_1).
* If F is a special hyperfield (equivalently, G = F ∖{0} is a special group), then the graduate Witt ring of F (definition <ref>) W_*(F) ∈ obj(_1).
* If F is a field with char(F) ≠ 2, then, by a known result of Vladimir Voevodski,
ℋ^*(Gal(F^s|F), {± 1}) ∈ obj(_1).
* For each R∈ Igr we have that ι_1(1(R)):1(1(R))→1(R) is the identity arrow.
* 1∘1=1.
* The functor 1:Igr→ Igr_1 is the right adjoint of the inclusion functor j_1:Igr_1→ Igr.
* j_1:Igr_1→ Igr creates inductive limits, and to obtain the projective limits in Igr_1 it is sufficient to restrict the projective limits obtained in Igr:
_i∈ IR_i≅1(_i∈ Ij_1(R_i))↪_i∈ Ij_1(R_i).
Similar to Proposition <ref>.
Now, we have a useful model-theoretic tool for Igr_1:
Let ℛ be Igr (as in third version <ref>) such that ℛ∈_1. Consider x=(x_1,...,x_r) and
φ(x) the quasi-identity
[p_1(x)=q_1(x)∧...∧ p_t(x)=q_t(x)]→ p(x)=q(x),
where p,q,p_i,q_i (i=1,...,n) are terms in ℒ_igr of sort 1.
If R_1⊨φ(x), then R_n⊨φ[R_n](y)
for all n≥1. In particular, ℛ⊨φ[ℛ](y).
Firstly, we suppose R_1 t(x_1,...,x_r)=s(x_1,...,x_r) for terms t,s of sort 1. This means that for all a∈ R^ω_1,
holds t^ℛ(a)=s^ℛ(a). Now, for each k=1,...,r, we change x_k for the term τ(y_k)
defined by
τ(y_k):=∑^s_k_j=1y^k_0j∗_11y^k_1j, s_k.
Hence, from t^ℛ(a)=s^ℛ(a) we conclude
t^ℛ(x_1|τ_1,...,x_r|τ_r)(b)=s^ℛ(x_1|τ_1,...,x_r|τ_r)(b)
for all b∈ R^ω_2, which proves that R_2 t[R_2](y_1,...,y_r)=s[R_2](y_1,...,y_r). Using induction on n we conclude
that R_n t[R_n](y_1,...,y_r)=s[R_n](y_1,...,y_r) for all n≥1.
Using the particular case above and induction on t, we prove that if R_1φ(x) then R_nφ[R_n](y) for all n≥1. The fact of ℛφ[ℛ](y) follows as corollary of this after the addition of
(adequate) variables.
Now we define the second subcategory. We define the quotient graded ring functor
𝒬:Igr→ Igr
as follows: for an object R=((R_n)_n≥0,(h_n)_n≥0,∗_nm), 𝒬(R):=R/T, where T=(T_n)_n≥0 is the ideal generated by {(⊤_1+_1a)∗_11a∈ R_2:a∈ R_1}. More explicitly,
* T_0:={0_0}⊆ R_0,
* T_1:={0_1}⊆ R_1,
* for n≥2, T_n⊆ R_n is the pointed 𝔽_2-submodule generated by
{x∈ R_n:x =y_l∗_l1(⊤_1+_1a_1)∗_11a_1∗_1rz_r,
a_1∈ R_1, y_l∈ R_l, z_r∈ R_r, l+r=n-2}.
Of course, 𝒬(R) provides a projection π_R:R→𝒬(R) in the obvious way.
On the morphisms, for f∈(R,S), we define 𝒬(f)∈(𝒬(R),𝒬(S)) by the only Igr-morphisms that makes the following diagram commute:
𝒬(f)∘π_R=π_S∘ f.
We denote by Igr_h the full subcategory of Igr such that
obj(Igr_h)={R∈ Igr:π_R:R→𝒬(R) is an isomorphism}.
Note that R ∈ obj(Igr_h) iff for each a ∈ R_1, a ∗_11⊤_1 = a ∗_11 a ∈ R_2. Each R satisfying this condition is, in some sense, “hyperbolic” (see Proposition <ref>): this is the motivation of the index “h”.
* Let A be a 𝔽_2-algebra. Then 𝕋(A) ∈ obj(_h) iff A is a boolean ring (i.e., ∀ a ∈ A, a^2 = a).
* If F is an hyperbolic hyperfield, then k_*(F) ∈ obj(_h).
* If F is a special hyperfield (equivalently, G = F ∖{0} is a special group), then W_*(F) ∈ obj(_h).
* If F is a field with char(F) ≠ 2, then ℋ^*(Gal(F^s|F), {± 1}) ∈ obj(_h).
* For each R∈ we have that π_𝒬(R):𝒬(R)→𝒬(𝒬(R)) is an isomorphism.
* 𝒬∘𝒬=𝒬.
* The functor 𝒬:→_h is the left adjoint of the inclusion functor j_q:_h→.
* j_q:_h→ creates projective limits and, to obtain inductive limits in _h, it is sufficient to restrict the inductive limits obtained in Igr:
lim_i∈ I j_q(R_i) → 𝒬(lim_i∈ I j_q(R_i)) ≅ lim_i∈ I R_i.
Moreover, j_q:_h→ creates filtered inductive limits and quotients by graded ideals.
The following are examples of inductive graded rings in Igr_+: (i) 𝕋(A), where A is a boolean ring; (ii) k_*(F), where F is a hyperbolic hyperfield; (iii) W_*(F), where F is a special hyperfield; (iv) ℋ^*(Gal(F^s|F), {± 1}), where F is a field with char(F) ≠ 2.
We denote by _+ the full subcategory of such that
(_+)=(_1)∩(_h).
We denote by j_+:_+→ the inclusion functor.
* Note that the notion of an Igr, R, be in the subcategory Igr_h can be axiomatized by a first-order (finitary) sentence in L, the polysorted language for Igr's described in the previous Chapter: (∀ a:1 , a∗_11a = ⊤_1 ∗_11 a). On the other hand, the concepts R ∈_1 and R ∈ Igr_+ are axiomatized by L_ω_1,ω-sentences.
* Note that the subcategory Igr_+ ↪ Igr is closed by filtered inductive limits.
In order to think of an object in _+ as a graded ring of "K-theoretic type", we make the following convention.
Let R∈_+ and write R_1 multiplicatively by (Γ(R),·,1,-1), i.e, fix an isomorphism e_R:R_1→Γ(R) in order that e_R(⊤)=-1 and e_R(a+b)=a· b. Such isomorphism e_R is called exponential of R and l_R=e_R^-1 is called logarithm of R. In this sense, we can write R_1={l(a):a∈Γ(R)}. We also denote l(a)∗_11l(b) simply by l(a)l(b), a,b∈Γ(R). We drop the superscript and write just e,l when the context allows it.
Using Definitions <ref>, <ref> (and of course, Definitions <ref> and <ref> with an argument similar to the used in Lemma <ref>) we have the following properties.
Let R∈_+.
* l(1)=0.
* For all n≥1, R_n is generated by the products l(a_1)...l(a_n) with a_1,...,a_n∈Γ(R).
* l(a)l(-a)=0 and l(a)l(a)=l(-1)l(a) for all a∈Γ(R).
* l(a)l(b)=l(b)l(a) for all a,b∈Γ(R).
* For every a_1,...,a_n∈Γ(R) and every permutation σ∈ S_n,
l(a_1)...l(a_i)...l(a_n)=l(a_σ(1))...l(a_σ(n)) in R_n.
* For all ξ∈ R_n and η∈ R_m,
ξη=ηξ.
* For all n≥1,
h_n(l(a_1)...l(a_n))=l(-1)l(a_1)...l(a_n).
Let R∈_+
* For each n ∈ℕ and each x∈ R_n, x∗_n,n x = ⊤_n ∗_n,n x ∈ R_2n.
* 𝔸(R) = _n ∈ℕ R_n is a boolean ring (or, equivalently, 𝕋(𝔸(R)) ∈_+).
* The property is clear if n =0. If n ≥ 1, then the property can be verified by induction on the number of generators k ≥ 1, x = ∑_i =1^k a_1,i∗_1,1 a_2,i∗_1,1⋯∗_1,1a_n,i∈ R_n: if k =1, then note that
x∗_n, n x = (a_1∗ a_2∗⋯∗ a_n) ∗ (a_1∗ a_2∗⋯∗ a_n)
= (a_1∗ a_1) ∗ (a_2 ∗ a_2) ∗⋯ (a_n ∗ a_n)=(⊤_1 ∗ a_1) ∗ (⊤_1 ∗ a_2) ∗⋯∗ (⊤_1 ∗ a_n)
= (⊤_n) ∗ (a_1∗ a_2∗⋯∗ a_n);
if k > 1, write x = y + z, where y, z ∈ R_n have fewer than k generators; then, by induction,
x ∗_n, n x = (y+z) ∗_n,n(y+z) = y∗_n,n y + y∗_n,n z + z∗_n,n y + z∗_n,n z
= y∗_n,n y + z∗_n,n z= ⊤_n ∗_n,n y + ⊤_n ∗_n,n z
= ⊤_n ∗_n,n(y+z) = ⊤_n ∗_n,n x
* This follows directly from item (i) and the definition of the ring structure in 𝔸(R) = _n ∈ℕ R_n.
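As a quick sanity check of item (ii) in the most elementary situation, the following Python sketch (an illustration added here, using the hypothetical choice A = 𝔽_2^3) verifies the idempotency a·a=a and the exponent-2 law a+a=0 that the argument above relies on.

from itertools import product

# A = F_2^3 with componentwise addition (XOR) and multiplication (AND):
# a standard example of a boolean ring.
def add(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def mul(a, b):
    return tuple(x & y for x, y in zip(a, b))

elements = list(product((0, 1), repeat=3))

# Idempotency a*a = a, the defining identity of a boolean ring.
assert all(mul(a, a) == a for a in elements)

# Characteristic 2: a + a = 0, consistent with the F_2-module structure of each level.
assert all(add(a, a) == (0, 0, 0) for a in elements)
print("F_2^3 is a boolean ring: a*a = a and a + a = 0 for all", len(elements), "elements")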
By the previous Proposition and the universal property of the boolean hull of an Igr (Definition <ref>.(ii)), we obtain:
Let R∈_+. Then:
* X_𝕋(𝔸(R))≈ X_R.
* 𝔸(R) ≅ B(R).
* Given R∈_1, S∈ and f:S→ j_1(R), we have:
f is coordinatewise surjective iff f_1:S_1→ R_1 is a surjective morphism of pointed 𝔽_2-modules.
* Given R∈_1, S∈ and f,h∈(j_1(R),S), we have f=h if and only if
f_1=h_1.
Let R,S∈. The inclusion function ι_R: 1(R)→ R and projection function π_R:R→𝒬(R) induces respective natural transformations ι:1⇒1_Igr and π:1_Igr⇒𝒬. Moreover, we have a natural transformation can:𝒬1⇒1𝒬 given by the rule can_n(l(a_1)...l(a_n)):=l(a_1)...l(a_n), n≥1. (can_n is well defined and is an isomorphism basically because both 𝒬1(R) and 1𝒬(R) are generated in level 1 by R_1 and both graded rings satisfies the relation l(a)l(-a)=0).
We have another immediate consequence of the previous results (and adjunctions):
* For all R∈_h, 1(R)∈_+ and _R is an isomorphism.
* For all R∈_1, 𝒬(R)∈_+ and _R is an isomorphism.
* To get projective limits in _+ is enough to restrict the projective limits obtained in :
_i∈ IR_i≅1(_i∈ Ij_+(R_i)).
* To get inductive limits in _+ is enough to restrict the inductive limits obtained in :
_i∈ IR_i≅𝒬(_i∈ Ij_+(R_i)).
§ EXAMPLES AND CONSTRUCTIONS OF QUADRATIC INTEREST
A filtered ring is a tuple A=(A,(J_n)_n≥0,+,·,0,1) where:
* (A,+,·,0,1) is a commutative ring with unit.
* J_0=A and for all n≥1, J_n⊆ A is an ideal.
* For all n,m≥0, n≤ m⇒ J_n⊇ J_m.
* For all n,m≥0, J_n· J_m⊆ J_n+m.
* J_0/J_1≅𝔽_2 (then 2=1+1∈ J_1).
* For all n≥0, J_n/J_n+1 is a group of exponent 2 (then 2· J_n⊆ J_n+1 and 2^n∈ J_n).
A morphism f:A→ A' of filtered rings is a ring homomorphism such that f(J_n)⊆ J'_n. The category of filtered rings will be denoted by .
We define the inductive graded ring associated functor
Grad:→
for a morphism f:A→ A' as follows: Grad(A):=((Grad(A)_n)_n≥0,(t_n)_n≥0,∗)∈ Igr is the Igr where
* For all n≥0, Grad(A)_n:=(J_n/J_n+1,+_n,0_n,⊤_n) is the exponent 2 group with distinguished element ⊤_n:=2^n+J_n+1.
* For all n≥0, t_n:Grad(A)_n→ Grad(A)_n+1 is defined by t_n:=2·_, i.e,
a+J_n+1∈ J_n/J_n+1, t_n(a+J_n+1):=2· a+J_n+2∈ J_n+1/J_n+2.
Observe that t_n(⊤_n)=⊤_n+1, i.e, t_n is a
morphism of pointed 𝔽_2-modules.
* For all n,m≥0 the biadditive function ∗_nm:Grad(A)_n× Grad(A)_m→ Grad(A)_n+m is defined by the rule
(a_n+J_n+1)∗_mn(b_m+J_m+1)=a_n· b_m+J_n+m+1∈ J_n+m/J_n+m+1.
The group A_g:=⊕_n≥0Grad(A)_n of exponent 2 and the induced application ∗:A_g× A_g→ A_g are such that (A_g,∗) is a commutative ring with unit ⊤_1=(2+J_2)∈ J_1/J_2.
* For all n≥1, t_n=⊤_1∗_1n_.
The morphism Grad(f)∈(Grad(A),Grad(A')) is defined by the following rule: for all n≥0, f_n:Grad(A)_n→ Grad(A')_n is given by
f_n(a+J_n+1):=f(a)+J'_n+1.
Note that f_n is a homomorphism of pointed 𝔽_2-modules and ⊕_n≥0f_n:(A_g,∗)→(A'_g,∗) is a homomorphism of graded rings with unit.
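A minimal concrete instance of the Grad construction (an illustrative sketch added here, not part of the original text) is the 2-adic filtration J_n = 2^nℤ of A = ℤ: each quotient J_n/J_{n+1} is a copy of 𝔽_2 with distinguished element 2^n+J_{n+1}, the transition maps are multiplication by 2, and ∗_nm is induced by the ring product. The Python snippet below checks these facts in small degrees.

# Illustrative sketch: the filtered ring (Z, J_n = 2^n Z) and its associated graded ring.
# Classes in J_n / J_{n+1} are represented by 0 or 2^n (each quotient is a copy of F_2).

def cls(a, n):
    """Class of a in J_n / J_{n+1}, assuming a lies in J_n = 2^n Z."""
    assert a % 2**n == 0
    return 0 if a % 2**(n + 1) == 0 else 2**n

# J_n * J_m is contained in J_{n+m}: the product of representatives a in J_n, b in J_m
# lies in 2^(n+m) Z, and its class defines *_{nm} on the graded pieces.
for n in range(4):
    for m in range(4):
        for a in (0, 2**n, 3 * 2**n):
            for b in (0, 2**m, 5 * 2**m):
                assert (a * b) % 2**(n + m) == 0
                # The class of a*b only depends on the classes of a and b.
                assert cls(a * b, n + m) == cls(cls(a, n) * cls(b, m), n + m)

# The transition map t_n = 2*(-) sends the distinguished element 2^n + J_{n+1}
# to 2^{n+1} + J_{n+2}, matching t_n(T_n) = T_{n+1}.
assert all(cls(2 * 2**n, n + 1) == 2**(n + 1) for n in range(5))
print("2-adic filtration of Z: graded pieces behave as described")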
The functor of graded ring of continuous functions over a space X
𝒞(X,_):→
is the functor defined for f:R→ S by
* 𝒞(X,R)_0:=R_0≅𝔽_2,
* for all n≥1, 𝒞(X,R)_n:=𝒞(X,R_n) as a pointed 𝔽_2-module,
* for all n,m≥0, ∗^X_nm:𝒞(X,R_n)×𝒞(X,R_m)→𝒞(X,R_n+m) is given by (α_n,β_m)↦α_n∗^X_nmβ_m, where for x∈ X,
α_n∗^X_nmβ_m(x)=α_n(x)∗_nmβ_m(x)∈ R_n+m.
* 𝒞(X,f)_0:=f_0 as an homomorphism of pointed 𝔽_2-modules R_0 → S_0.
* for all n≥1, 𝒞(X,f)_n:=𝒞(X,f_n):=f_n∘_ ∈ p𝔽_2-mod( 𝒞(X,R_n), 𝒞(X,S_n)).
Let X be a topological space and let R∈_1. Note that if X is compact or R ∈ Igr_fin, then 𝒞(X,R)∈_1.
We define the continuous function filtered ring functor
𝒞:SG→
as follow: first, consider the functor 𝒞(X__,ℤ):SG→, composition of the (contravariant) functors “associated ordering space” X__:SG→^op and “continuous functions in ℤ ring” 𝒞(_,ℤ):^op→ (here ℤ is endowed with the discrete topology).
Now we define the functor 𝒞:SG→: given a special group G∈ SG, we define
𝒞(G):=(R(G),(J_n(G))_n≥0,+,·,0,1)
where
* (R(G),+,·,0,1) is the subring of 𝒞(X_G,ℤ) of continuous functions of constant parity, i.e., R(G):=J_0(G)⊆𝒞(X_G,ℤ) is the image of the monomorphism of rings with unit
j_0(G):𝒞(X_G,2ℤ)∪𝒞(X_G,2ℤ+1)→𝒞(X_G,ℤ).
* For all n≥1, J_n(G)⊆ J_0(G) is the ideal of R(G) (and also of 𝒞(X_G,ℤ)) that is the image of the monomorphism of abelian groups
j_n(G):𝒞(X_G,2^nℤ)→𝒞(X_G,2ℤ)∪𝒞(X_G,2ℤ+1).
We also have J_0(G)/J_1(G)≅𝔽_2 and for all n,m≥0:
* If n≥ m then J_n(G)⊇ J_m(G);
* J_n(G)· J_m(G)⊆ J_n+m(G);
* 2J_n(G)=J_n+1(G)⇒ J_n(G)/J_n+1(G) is an exponent 2 group.
On the morphisms, for f∈ SG(G,G'), we define 𝒞(f)∈(𝒞(G),𝒞(G')) by
𝒞(f)(h)=𝒞(X_f,ℤ)(h)
for h∈𝒞(G). 𝒞(f) is well-defined because 𝒞(f)∈(𝒞(G),𝒞(G')) and for all n≥0,
𝒞(f)(J_n(G))⊆ J_n(G').
We define the continuous function graded ring functor by
Grad∘𝒞:SG→.
For convenience, we describe this functor now: given G∈,
Grad(𝒞(G)):=((Grad(𝒞(G))_n)_n≥0,(t_n)_n≥0,·)
where:
* Grad(𝒞(G))_n:=(J_n(G)/J_n+1(G),·,0· J_n+1(G),2^nJ_n+1(G)), where 2∈𝒞(X_G,ℤ) is the constant function of value 2∈2ℤ⊆ℤ.
* For all n≥0, J_n(G)/J_n+1(G)J_n+1(G)/J_n+2(G).
* For all n,m≥0, ∗_nm:J_n(G)/J_n+1(G)× J_m(G)/J_m+1(G)→ J_n+m(G)/J_n+m+1(G) is given by
(h_n+J_n+1(G))∗_nm(k_m+J_m+1(G))=h_nk_m+J_n+m+1(G).
On the morphisms, given f∈ SG(G,G'), we have that
Grad(𝒞(f))=(Grad(𝒞(f))_n)_n≥0∈(Grad(𝒞(G), Grad(𝒞(G')),
where for all n≥0, Grad(𝒞(f))_n:Grad(𝒞(G))_n→ Grad(𝒞(G'))_n is such that
Grad(𝒞(f))_n(h+J_n+1(G))=𝒞(f)(h)+J'_n+1(G').
* There is a natural isomorphism θ:Grad∘𝒞 ≅ 𝕋∘𝒞(X__,𝔽_2). In particular, for all G∈ SG, Grad(𝒞(G))∈_+.
* For all 0< n≤ m<ω, 2^m-n·_:J_n(G)/J_n+1(G)→ J_m/J_m+1(G) is an isomorphism of groups of exponent 2.
* For all n≥1, there is an isomorphism of groups of exponent 2
θ_n(G):J_n(G)/J_n+1(G)𝒞(X_G,𝔽_2),
given by the rule
θ_n(h+J_n(G))(σ):=h_n(σ)/2^n∈𝒞(X_G,ℤ/2ℤ).
* For all 0< n≤ m<ω the following diagram commute:
J_n(G)/J_n+1(G) --(2^m-n ·)--> J_m(G)/J_m+1(G)
        θ_n(G) ↘                ↙ θ_m(G)
                 𝒞(X_G,𝔽_2)
We define the filtered Witt ring functor
𝒲:SG→
for f∈(G,H) as follows: given a special group G∈ SG, we define
𝒲(G):=(W(G),I^n(G)_n≥0,⊕,⊗,⟨⟩,⟨1⟩)
where for all n≥0, I^n(G) is the n-th power of the fundamental ideal
I(G):={φ∈ W(G):_2(φ)=0}.
We define 𝒲(f)∈(𝒲(G),𝒲(H)) by the rule 𝒲(f)(φ):=f⋆φ.
𝒲(G) is a filtered commutative ring with unit because:
* (W(G),⊕,⊗,⟨⟩,⟨1⟩)∈.
* For all n≥0, I^n(G)⊆ W(G) is an ideal.
* For all n,m≥0, n≤ m⇒ I^n(G)⊇ I^m(G).
* For all n,m≥0, I^n(G)⊗ I^m(G)⊆ I^n+m(G).
* I^0(G):=W(G).
* I^0(G)/I^1(G)≅𝔽_2.
* For all n≥0, (I^n(G)/I^n+1(G),⊕,⟨⟩) is a group of exponent 2 with distinguished element 2^n + I^n+1(G), where 2^n = ⊗_i<n⟨ 1, 1 ⟩.
We define the graded Witt ring functor
∘𝒲:SG→.
We register, again, the following result:
For each G∈ SG we have (𝒲(G))∈_+.
For each commutative ring with unit A, we have
t(A)={a∈ A: ∃ n≥1, n· a=0}⊆ A
is an ideal (the torsion ideal of A). The association A↦ A/t(A) is the component on the objects of an endofunctor of .
For each G∈ SG we have a ring homomorphism with unit _G:W(G)→𝒞(X_G,ℤ) given by the rule
_G(⟨ a_0,...,a_n-1⟩)(σ):=∑^n-1_i=0σ(a_i).
Pfister's Local-Global principle says that _G induces a monomorphism
_G:W(G)/t(W(G))→𝒞(X_G,ℤ).
For each G∈ SG we have _G(W(G))⊆𝒞(X_G,2ℤ)∪𝒞(X_G,2ℤ+1) (since the signatures of classes of forms has the same parity of its dimension) and for all n≥1, _G(I^n(G))⊆𝒞(X_G,2^nℤ) (since I^n(G) is the abelian subgroup of W(G) generated by classes of Pfister forms of dimension 2^n).
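To make these two inclusions concrete in the simplest possible setting, the following Python sketch (an illustration with hypothetical diagonal forms over an ordered field, where an ordering σ is taken to be the usual sign function — an assumption made only for this example) computes total signatures: the signature of a form has the parity of its dimension, and the signature of an n-fold Pfister form ⟨1,-a_1⟩⊗…⊗⟨1,-a_n⟩ is divisible by 2^n.

from itertools import product

def signature(form, sigma):
    """sign(<a_0,...,a_{n-1}>)(sigma) = sum of sigma(a_i) for an ordering sigma."""
    return sum(sigma(a) for a in form)

def pfister(entries):
    """n-fold Pfister form <1,-a_1> x ... x <1,-a_n> as a diagonal form (tensor = products of entries)."""
    form = [1]
    for a in entries:
        form = [x * y for x in form for y in (1, -a)]
    return form

sigma = lambda a: 1 if a > 0 else -1   # the unique ordering of the reals, used as a toy ordering

# Parity: the signature of a form of dimension d is congruent to d mod 2.
for form in ([3, -2, 5], [1, 1, -7, -7], [2]):
    assert (signature(form, sigma) - len(form)) % 2 == 0

# 3-fold Pfister forms have signature 0 or 2^3, so sign(I^n) lands in 2^n Z.
for entries in product((2, -3, 5, -1), repeat=3):
    s = signature(pfister(entries), sigma)
    assert s % 2**3 == 0 and s in (0, 2**3)
print("parity and 2^n-divisibility of signatures verified on sample forms")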
:𝒲→𝒞 (respectively :𝒲/t(𝒲)→𝒞) is the natural transformation between functors
@!=3pcSG@<+.5ex>[r]^𝒲@<-.5ex>[r]_𝒞
that provide natural transformations between functors SG@<+.5ex>[r]@<-.5ex>[r]:
· :∘𝒲→∘𝒞
· :∘(𝒲/t(𝒲))→∘𝒞.
Remember that [MC] ([LC]) and [WMC] ([WLC]) are conjectures about these natural transformations.
𝒞 is a particular case of 𝒲 in the following sense: 𝒞:SG→ is naturally isomorphic to the composition of functors SGSG.
§ THE ADJUNCTION BETWEEN AND _H
By the very definition of the K-theory of hyperfields (with the notations in Theorem <ref>) we define the following functor.
With the notations of Theorem <ref> we have functors k:ℋℳℱ→_+ and k:𝒫𝒮ℳℱ→_+ induced by the reduced K-theory for hyperfields.
Now, let R∈_h. We define a hyperfield (Γ(R),+,-.·,0,1) by the following: firstly, fix an exponential isomorphism e_R:(R_1,+_1,0_1,⊤_1)→ (G(R),·,1,-1) (in agreement with Definition <ref>). This isomorphism makes, for example, an element a∗_11(⊤_1+b)∈ R_2, a,b∈ R_1 take the form (l_R(x))∗_11(l_R((-1)· y))∈ R_2, x,y∈ G(R). By an abuse of notation, we simply write l_R(x)l_R(-y)∈ R_2, x,y∈ G(R). In this sense, an element in Q_2 has the form l_R(x)l_R(-x), x∈Γ(R), and we can extend this terminology for all Q_n, n≥2 (see Definition <ref>, and Lemma <ref>).
Now, let Γ(R):=G(R)∪{0} and for a,b∈Γ(R) we define
-a :=(-1)· a,
a·0 =0· a:=0,
a+0 =0+a={a},
a+(-a) =Γ(R),
and, for a,b≠0 with a≠-b,
a+b :={c∈Γ(R): there exists d∈ G(R) such that
a· b=c· d∈ G(R) and l_R(a)l_R(b)=l_R(c)l_R(d)∈ R_2}.
With the above rules, (Γ(R),+,-,·,0,1) is a pre-special hyperfield.
We will verify the conditions of Definition <ref>. Note that, by the definition of the multivalued sum, once we prove that Γ(R) is a hyperfield it will be hyperbolic. In order to prove that (Γ(R),+,-,·,0,1) is a multigroup we follow the steps below. Here we use freely the properties in Lemma <ref>.
* Commutativity and (a∈ b+0)⇔(a=b) are direct consequences of the definition of the multivalued sum and the fact that l_R(a)l_R(b)=l_R(b)l_R(a).
* We will prove that if c∈ a+b, then a∈ c-b and b∈ c-a.
If a=0 (or b=0) or a=-b, then c∈ a+b means c=a or c∈ a-a. In both cases we get a∈ c-b and b∈ c-a.
Now suppose a,b≠0 with a≠-b. Let c∈ a+b. Then a· b=c· d and l_R(a)l_R(b)=l_R(c)l_R(d)∈ R_2 for some d∈ G(R). Since G(R) is a multiplicative group of exponent 2, we have a· d=b· c (and hence a·(-d)=c·(-b)). Note that
l_R(a)l_R(-d) =l_R(a)l_R(-abc)=l_R(a)l_R(bc)=l_R(a)l_R(b)+l_R(a)l_R(c)
=l_R(c)l_R(d)+l_R(a)l_R(c)=l_R(c)l_R(d)+l_R(c)l_R(a)=l_R(c)l_R(ad).
Similarly,
l_R(b)l_R(-c) =l_R(b)l_R(-abd)=l_R(b)l_R(ad)=l_R(b)l_R(a)+l_R(b)l_R(d)
=l_R(a)l_R(b)+l_R(b)l_R(d)=l_R(c)l_R(d)+l_R(b)l_R(d)
=l_R(bc)l_R(d)=l_R(ad)l_R(d).
Then
l_R(a)l_R(-d)-l_R(b)l_R(-c) =l_R(c)l_R(ad)-l_R(ad)l_R(d)=
=l_R(c)l_R(ad)-l_R(d)l_R(ad)=l_R(-cd)l_R(ad).
But
l_R(-cd)l_R(ad) =l_R(-cd)l_R(a)+l_R(-cd)l_R(d)=
=l_R(-cd)l_R(a)+l_R(c)l_R(d)=l_R(a)l_R(-cd)+l_R(a)l_R(b)
=l_R(a)l_R(-bcd)=l_R(a)l_R(-a)=0.
Then
l_R(a)l_R(-d)=l_R(b)l_R(-c),
proving that a∈ b-c. Similarly we prove that b∈ -c+a.
* Since (G(R),·,1) is an abelian group, we conclude that (Γ(R),·,1) is a commutative monoid. Beyond this, every nonzero element a∈Γ(R) is such that a^2=1.
* a·0=0 for all a∈Γ(R) is direct from definition.
* For the distributive property, let a,b,d∈Γ(R) and consider x∈ d(a+b). We need to prove that
*x∈ d· a+d· b.
If 0∈{a,b,d} or if b=-a there is nothing to prove. Now suppose a,b,d≠0 with b≠-a. Then there exists y∈ G(R) such that x=dy and y∈ a+b. Moreover, there exists some z∈ G(R) such that y· z=a· b and l_R(y)l_R(z)=l_R(a)l_R(b).
Therefore (dy)·(dz)=(da)·(db) and
l_R(dy)l_R(dz) =l_R(d)l_R(d)+l_R(d)l_R(z)+l_R(d)l_R(y)+l_R(y)l_R(z)
=l_R(d)l_R(d)+l_R(d)[l_R(z)+l_R(y)]+l_R(y)l_R(z)
=l_R(d)l_R(d)+l_R(d)l_R(yz)+l_R(y)l_R(z)
=l_R(d)l_R(d)+l_R(d)l_R(ab)+l_R(a)l_R(b)
=l_R(d)l_R(d)+l_R(d)l_R(a)+l_R(d)l_R(b)+l_R(a)l_R(b)
=l_R(da)l_R(db),
so l_R(dy)l_R(dz)=l_R(da)l_R(db). Hence we have x=dy∈ d· a+d· b.
* Using distributivity we have that for all a,b,c,d∈Γ(R)
d[(a+b)+c]=(da+db)+dc and d[a+(b+c)]=da+(db+dc).
In fact, if x∈ (a+b)+c, then x∈ y+c for y∈ a+b. Hence
dx∈ dy+dc⊆ d(a+b)+dc=(da+db)+dc.
Conversely, if z∈(da+db)+dc, then z=w+dc, for some w∈ da+db=d(a+b). But in this case, w=dt for some t∈ a+b. Then
z∈ dt+dc=d[t+c]⊆ d[(a+b)+c].
Similarly we prove that d[a+(b+c)]=da+(db+dc).
* Let a∈Γ(R) and x,y∈1-a. If a=0 or a=1 then we automatically have x· y∈ 1-a, so let a≠0 and a≠1. Then x,y∈ G(R) and there exist p,q∈Γ(R) such that
x· p=1· a l_R(x)l_R(p)=l_R(1)l_R(a)=0
y· q=1· a l_R(y)l_R(q)=l_R(1)l_R(a)=0.
Then (xy)·(pqa)=1· a and
l_R(xy)l_R(pqa)
=l_R(xy)l_R(p)+l_R(xy)l_R(q)+l_R(xy)l_R(a)
=l_R(y)l_R(p)+l_R(x)l_R(q)+l_R(x)l_R(a)+l_R(y)l_R(a)
=l_R(y)l_R(pa)+l_R(x)l_R(qa)
=l_R(y)l_R(x)+l_R(x)l_R(y)=0.
Then xy∈1-a, proving that (1-a)(1-a)⊆(1-a). In particular, since 1∈1-a, we have (1-a)(1-a)=(1-a).
* Finally, to prove associativity, we use Theorem <ref>. Let ⟨ a,b⟩≡⟨ c,d⟩ be the relation defined for a,b,c,d∈Γ(R)∖{0} by
⟨ a,b⟩≡⟨ c,d⟩ iff ab=cd and l_R(a)l_R(b)=l_R(c)l_R(d).
For 0∉{a,b,c,d}, a≠-b and ab=cd, we have
a+b=c+d if and only if ⟨ a,b⟩≡⟨ c,d⟩.
Using items (i)-(vii) we get that (Γ(R)∖{0},≡,1,-1) is a pre-special group. Then by Theorem <ref> we have that M(Γ(R)∖{0})≅Γ(R) is a pre-special hyperfield; in particular, Γ(R) is associative.
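For a concrete feeling of the multigroup axioms verified above, the following Python sketch (an illustration added here, not tied to a particular R) brute-forces commutativity, reversibility, associativity and distributivity for the simplest special hyperfield, the hyperfield of signs {−1,0,1}.

from itertools import product

# Hyperfield of signs: usual sign multiplication, hyperaddition
# x + 0 = {x},  x + x = {x} (x != 0),  x + (-x) = {-1, 0, 1}.
S = (-1, 0, 1)

def hsum(a, b):
    if a == 0: return {b}
    if b == 0: return {a}
    if a == b: return {a}
    return {-1, 0, 1}          # the case a = -b

def hsum_set(X, b):
    return set().union(*(hsum(x, b) for x in X))

# Commutativity; reversibility (c in a+b  =>  a in c-b); associativity; distributivity.
assert all(hsum(a, b) == hsum(b, a) for a, b in product(S, repeat=2))
assert all(all(a in hsum(c, -b) for c in hsum(a, b)) for a, b in product(S, repeat=2))
assert all(hsum_set(hsum(a, b), c) == hsum_set(hsum(b, c), a)
           for a, b, c in product(S, repeat=3))
assert all(set(d * x for x in hsum(a, b)) <= hsum(d * a, d * b)
           for a, b, d in product(S, repeat=3))
print("the hyperfield of signs satisfies the multigroup axioms checked above")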
With the notations of Proposition <ref> we have a functor Γ:_+→ defined by the following rules: for R∈_+, Γ(R) is the special hyperfield obtained in Proposition <ref> and, for f∈_+(R,S), Γ(f):Γ(R)→Γ(S) is the unique morphism such that the following diagram commutes:
R_1 --e_R--> Γ(R)
  |f_1          |Γ(f)
  v             v
S_1 --e_S--> Γ(S)
In other words, for x∈Γ(R) we have
Γ(f)(x)=(e_S∘ f_1∘ l_R)(x)=e_S(f_1(l_R(x))).
The functor k:𝒫𝒮ℳℱ→_+ is the left adjoint of Γ:_+→𝒫𝒮ℳℱ. The unit of the adjunction is the natural transformation ϕ:1_𝒫𝒮ℳℱ→Γ∘ k defined for F∈𝒫𝒮ℳℱ by ϕ_F=e_k(F)∘ρ_F.
We show that for all f∈𝒫𝒮ℳℱ(F,Γ(R)) there is a unique f^♯∈_+(k(F),R) such that Γ(f^♯)∘ϕ_F=f. Note that ϕ_F=e_k(F)∘ρ_F is a group isomorphism (because e_k(F) and ρ_F are group isomorphisms).
Let f^♯_0:1_𝔽_2:𝔽_2→𝔽_2 and
f^♯_1:=l_R∘ f∘(ϕ_F)^-1∘ e_k(F):k_1(F)→ R_1. For n≥2, define h_n:∏^n_i=1k_1(F)→ R_n by the rule
h_n(ρ(a_1),...,ρ(a_n)):=l_R(f(a_1))∗...∗ l_R(f(a_n)).
We have that h_n is multilinear and by the Universal Property of tensor products we have an induced morphism ⊗^n_i=1k_n(F)→ R_n defined on the generators by
h_n(ρ(a_1)⊗...⊗ρ(a_n)):=l_R(f(a_1))∗...∗ l_R(f(a_n)).
Now let η∈ Q_n(F). Suppose without loss of generality that η=ρ(a_1)⊗...⊗ρ(a_n) with a_1∈1-a_2. Then f(a_1)∈1-f(a_2), which implies l_R(f(a_1))∈1-l_R(f(a_2)). Since R∈_+,
h_n(η):=h_n(ρ(a_1)⊗...⊗ρ(a_n))
=l_R(f(a_1))∗...∗ l_R(f(a_n))=0∈ R_n.
Then h_n factors through Q_n, and we have an induced morphism h_n:k_n(F)→ R_n. We set f^♯_n:=h_n. In other words, f^♯_n is defined on the generators by
f^♯_n(ρ(a_1)...ρ(a_n)):=l_R(f(a_1))∗...∗ l_R(f(a_n)).
Finally, we have
Γ(f^♯)∘ϕ_F
=[e_R∘(f^♯_1)∘ e^-1_k(F)]∘[e_k(F)∘ρ_F]
=e_R∘(f^♯_1)∘ρ_F
=e_R∘[l_R∘ f∘(ϕ_F)^-1∘ e_k(F)]∘ρ_F
=f∘(ϕ_F)^-1∘[e_k(F)∘ρ_F]
=f∘(ϕ_F)^-1∘ϕ_F=f.
For uniqueness, let u,v∈_+(k(F),R) be such that Γ(u)∘ϕ_F=Γ(v)∘ϕ_F. Since ϕ_F is an isomorphism we have u_1=v_1, and since k(F)∈_+ we have u=v.
As we have already seen in Theorem <ref>, the natural transformation ϕ_F:F→Γ(k(F)) is a group isomorphism. Now let a,c,d∈ F with a∈ c+d. Then ϕ_F(a)∈ϕ_F(c)+ϕ_F(d), i.e., ϕ_F is a morphism of hyperfields. In fact, if 0∈{a,c,d} there is nothing to prove. Let 0∉{a,c,d}. To prove that ϕ_F(a)∈ϕ_F(c)+ϕ_F(d) we need to show that ρ_F(a)ρ_F(acd)=ρ_F(c)ρ_F(d). In fact, from a∈ c+d we get ac∈1+ad, and then ρ_F(ac)ρ_F(ad)=0. Moreover
ρ_F(a)ρ_F(acd)+ρ_F(c)ρ_F(d)
=ρ_F(a)ρ_F(acd)+ρ_F(c)ρ_F(d)+ρ_F(ac)ρ_F(ad)
=ρ_F(a)ρ_F(ac)+ρ_F(a)ρ_F(d)+ρ_F(c)ρ_F(d)+ρ_F(ac)ρ_F(ad)
=[ρ_F(a)ρ_F(ac)+ρ_F(ac)ρ_F(ad)]+[ρ_F(a)ρ_F(d)+ρ_F(c)ρ_F(d)]
=ρ_F(d)ρ_F(ac)+ρ_F(d)ρ_F(ac)=0,
proving that ϕ_F(a)∈ϕ_F(c)+ϕ_F(d). Unfortunately we do not know whether (or when) ϕ_F is a strong morphism. We therefore propose the following definition.
Let F be a pre-special hyperfield. We say that F is k-stable if ϕ_F:F→Γ(k(F)) is a strong morphism. Alternatively, F is k-stable if for all a,b,c,d∈Ḟ with ab=cd,
ρ_F(a)ρ_F(b)=ρ_F(c)ρ_F(d) implies ac∈1+cd.
Every PSG G has a k-stable hull G_(k) that satisfies the corresponding universal property. This is just given by
G_(k) = colim_n ∈ℕ (Γ∘ k)^n (G).
Thus the inclusion functor PSG_(k)↪ PSG has a left adjoint (k) : PSG → PSG_(k).
We emphasize that if G is an AP(3) special group, then G is k-stable. In particular, every reduced special group is k-stable, and if F is a field of characteristic not 2, then G(F) is also k-stable.
In the next Chapter, the Arason-Pfister Hauptsatz (Theorem <ref>) is established for every special group G (i.e., G satisfies AP(n) for each n ∈ℕ).
* For each G∈ SG, Γ(s_G):Γ(𝒦(G))→Γ((𝒲(G))) is a PSG-isomorphism.
* For each G∈ℛ𝒮𝒢, κ_G:G→Γ(𝒦(G)) is a PSG-isomorphism.
* For each G∈ℛ𝒮𝒢, ω_G:G→Γ((𝒲(G))) is a PSG-isomorphism.
Let G be a PSG. The following are equivalent:
* G∈𝒫𝒮𝒢_fin.
* 𝒦(G)∈_fin.
Let G be an SG. The following are equivalent:
* G∈ SG_fin.
* 𝒦(G)∈_fin.
* (∘𝒲)(G)∈_fin.
The canonical arrow
can:_i∈ I𝒦(G_i)→𝒦(_i∈ IG_i)
is an _+-isomorphism, as long as the I-colimits above exist.
The canonical arrow
can:𝒦(_i∈ IG_i)→_i∈ I𝒦(G_i)
is a pointwise surjective _+-morphism, as long as the I-colimits above exist.
In <cit.> there is an interesting analysis identifying the boolean hull of a special group G (or special hyperfield F = G ∪{0}) with the boolean hull of the inductive graded rings k_*(F), W_*(F) ∈ Igr_+ (see the above Corollary <ref>). It could be interesting to compare the space of orderings of R ∈ Igr_h and of Γ(R) ∈𝒫𝒮ℳℱ.
§ IGR AND MARSHALL'S CONJECTURE
Using the Boolean hull functor, M. Dickmann and F. Miraglia provide an encoding of Marshall's signature conjecture ([MC]) for reduced special groups by the condition
⟨ 1, 1 ⟩⊗ - : I^n(G)/I^n+1(G) → I^n+1(G)/I^n+2(G)
to be injective, for each n ∈ℕ. In fact they introduce the notion of a [SMC] reduced special group:
l(-1) ⊗ - : k_n(G) → k_n+1(G)
is injective, for each n ∈ℕ. They establish that [SMC] implies [MC] for every reduced special group G. Moreover (see 5.1 and 5.4 in <cit.>):
* The inductive limit of [SMC] groups is [SMC].
* The finite product of [SMC] groups is [SMC].
* G(F) is [SMC], for every Pythagorean field F (with char(F) ≠ 2).
* s:k→∘𝒲 is a “surjective” natural transformation, where for each G∈ SG and all n≥1, s_n(G):k_n(G)→ I^n(G)/I^n+1(G) is given by the rule
s_n(G)(∑^s-1_i=0l(g_1,i)⊗...⊗ l(g_n,i)+𝒬_n(G)):=
∑^s-1_i=0⟨1,-g_1,i⟩⊗...⊗⟨1,-g_n,i⟩ + I^n+1(G).
* r:∘𝒲→ k is a natural transformation, where for each G∈ SG and all n≥1,
r^n_G:I^n(G)/I^{n+1}(G)→ k_{2^{n-1}}(G) is given by the rule
r_n(G)(∑^s-1_i=0⟨1,-g_1,i⟩⊗...⊗⟨1,-g_n,i⟩ + I^{n+1}(G)):=
∑^s-1_i=0l(-1)^{2^{n-1}-n}l(g_1,i)⊗...⊗ l(g_n,i)+𝒬_{2^{n-1}}(G)
* For all n≥1, r_n(G)∘ s_n(G)=l(-1)^{2^{n-1}-n}⊗_.
* We have isomorphisms of pointed 𝔽_2-modules s^1_G:k_1(G)≅ I^1(G)/I^2(G) and
s^2_G:k_2(G)≅ I^2(G)/I^3(G).
* If G is [SMC], then s_G:k(G)→∘𝒲(G) is an isomorphism.
[K-theory of Products]
Let {F_i}_i∈ I be a family of SMC pre-special hyperfields. If ∏^h_i∈ IF_i is also SMC then
k(∏^h_i∈ IF_i)≅∏_i∈ Ik(F_i).
Let F:=∏^h_i∈ IF_i and p_i:F→ F_i the canonical projection (i∈ I). Note that p_i is a morphism with p_i(1)=1. By Theorem <ref>, for each i∈ I we have a surjective morphism p_i:k(F)→ k(F_i) satisfying all the conditions (a),(b),(c) stated in the Theorem.
Now, for each i∈ I, let π_i:∏_i∈ Ik(F_i)→ k(F_i) be the canonical projections. By the Universal Property of the Product, there is a unique morphism Φ:k(F)→∏_i∈ Ik(F_i) such that π_i∘Φ=p_i for all i∈ I. The morphism Φ is given by the rule: for n≥1, Φ_n:k_n(F)→∏_i∈ Ik_n(F_i) is given by
Φ_n(η):=((p_i)_n(η))_i∈ I.
Since each (p_i)_n is surjective, we have Φ_n surjective.
The theorem will be proved after we show two facts:
* The morphism Φ_1:k_1(F)→∏_i∈ Ik_1(F_i) is injective (and therefore, is an isomorphism).
* for all i∈ I and all n≥1, the following diagram commute
k_n F --ω^-1_n--> k_n+1 F
  |Φ_n              |Φ_n+1
  v                 v
∏_i∈ I k_n(F_i) --ω^-1_n--> ∏_i∈ I k_n+1(F_i)
where ω^-1_n:∏_i∈ Ik_n(F_i)→∏_i∈ Ik_n+1(F_i) is the (injective) morphism given by the rule
ω^-1_n(η_i)_i∈ I:=(ω^-1_n(η_i))_i∈ I=
(ρ(-1)·η_i)_i∈ I.
Once established (1) and (2), by induction we get Φ_n injective for all n≥1, which proves that Φ is an isomorphism.
* The expression for Φ_1 on the generators is
Φ_1(ρ((a_i)_i∈ I)):=(ρ((p_i)_1((a_1i)_i∈ I)))_i∈ I=(ρ(a_i))_i∈ I.
Using this expression we get the injectivity of Φ_1.
* By item (c) of Theorem <ref>, for each i∈ I we have a commutative diagram
k_n F --ω^-1_n--> k_n+1 F
  |(p_i)_n           |(p_i)_n+1
  v                  v
k_n(F_i) --ω^-1_n--> k_n+1(F_i)
By the Universal Property of Products and the unicity of Φ we "lift" the above commutative diagram to the desired commutative diagram
k_n F --ω^-1_n--> k_n+1 F
  |Φ_n              |Φ_n+1
  v                 v
∏_i∈ I k_n(F_i) --ω^-1_n--> ∏_i∈ I k_n+1(F_i)
Let {F_i}_i∈ I be a family of SMC pre-special hyperfields. Is the pre-special hyperfield ∏^h_i∈ IF_i SMC?
We intend to investigate properties of this newly available K-theory for hyperfields, which has shown potential for unification. For example, we could start with the following natural questions:
* Is the K-theory of hyperfields (in general) closed under constructions (e.g., directed colimits and "extensions")?
* If j : F → F' is a pure morphism (of hyperfields), is k_*(j) : k_*(F) → k_*(F') a pure morphism?
* Describe the constructions under which the class of SMC hyperfields F (i.e. × l(-1) : k_n(F) → k_n+1(F) is injective, for all natural n) is closed.
We finish this chapter considering a general setting for “Marshall's conjectures”, that includes the previous case of the Igr's W_*(F), k_*(F) for special hyperfields F.
Let R ∈ Igr_+. The ideal, nil(R), in the ring n∈ℕ⊕ R_n, formed by all of its nilpotent elements, determines N(R) a Igr-ideal of R, where
(N(R))_n := nil(R)∩R_n, ∀ n ∈ℕ. Note that, by Proposition <ref>, (nil(R))_n ={ a ∈R_n : ∃ k ∈ℕ∖{0} ( ⊤_kn∗_kn,na = 0_(k+1)n ) } , ∀ n ∈ℕ.
Let ρ : ℕ→ℕ be an increasing function and define (N_ρ(R))_n = { a ∈R_n : ∃ k ∈ℕ ( ⊤_ρ(n)∗_ρ(n),na = 0_ρ(n)+n ) } , ∀ n ∈ℕ. Then (N_ρ(R))_n is a subgroup of R_n and, since ρ(n+k) ≥ρ(n), we have (N_ρ(R))_n ∗_n,k R_k ⊆ (N_ρ(R))_n+k. Summing up, (N_ρ(R))_n)_n ∈ℕ is an Igr-ideal.
The following result is a straightforward consequence of Definitions <ref> and <ref>.
For each R∈_+ the following are equivalent:
* For all n ≤ m∈ℕ, (h_nm)={0_n}∈ R_n.
* The canonical morphism R →𝕋(𝔸(R)) is pointwise injective.
* There exists a boolean ring B and a pointwise injective Igr-morphism R →𝕋(B).
Moreover, if R ∈ Igr_fin, these are equivalent to
* N(R)≅𝕋(0)∈.
Motivated by item (i), we use the abbreviation MC(R) to say that R satisfies one (and hence all) of the above conditions.
In the following, we fix a category of L-structures 𝒜 that is closed under directed inductive limits and a functor F_* : 𝒜→ Igr_+ be a functor that preserves directed inductive limits. Examples of such kind of functors are k_* : ℋℳℱ→ Igr_+ and W_* : ℋℳℱ→ Igr_+, since such hyperfields can be conveniently described in the first-order relational language for multirings and it is closed under directed inductive limits. Related examples are the functors k_* : SG → Igr_+ and W_* : SG → Igr_+; note that SG is a full subcategory of L_SG-Str that is closed under directed inductive limits and under arbitrary products.
If (I,≤) is an upward directed poset and Γ : (I, ≤) →𝒜 is such that: MC(F_*(Γ(i))), for all i ∈ I, then MC(F_*(_i ∈ IΓ(i))).
The hypothesis on F_* and the fact that the directed inductive limits in Igr_+ are pointwise, give us immediately that the mappings h_n : F_n(_i ∈ IΓ(i)) → F_n+1(_i ∈ IΓ(i)) are isomorphic to the injective maps _i ∈ I h_n^i : _i ∈ IF_n(Γ(i)) →_i ∈ I F_n+1(Γ(i)), for each n ∈ℕ. Therefore it holds
MC(F_*(_i ∈ IΓ(i)))
.
Let F ⊆ P(I) be a filter and let {M_i : i ∈ I} be a family of (non-empty) L-structures in 𝒜. Suppose that 𝒜 is closed under products and suppose that holds MC(F_*(∏_i ∈ J M_i)), for each J ∈ F. Then holds MC(F_*(∏_i ∈ J M_i/F)).
This follows from the preceding result since, by a well-known model-theoretic result due to D. Ellerman (<cit.>), any reduced product of a family of (non-empty) L-structures, {M_i : i ∈ I}, module a filter F ⊆ P(I), is canonically isomorphic to an upward directed inductive limit, _J ∈ F (∏_i ∈ JM_i) ≅ (∏_i ∈ IM_i)/F.
The functor F_* : 𝒜→ Igr_+ preserves pure embeddings. More precisely, if M, M' ∈𝒜 and j : M → M' is a pure L-embedding, then F_*(j) : F_*(M) → F_*(M') is a pure morphism of Igr's (described in the first-order polysorted language for Igr's).
This follows from the well known characterization result:
Fact: Let L' be a first-order language and f : A → B be an L'-homomorphism. Then the following are equivalent:
* f : A → B is a pure L'-embedding.
* There exists an elementary L'-embedding e : A → C and a L'-homomorphism h : B → C, such that e = h ∘ f.
* There exists an ultrapower A^I/U and a L'-homomorphism g : B → A^I/U, such that δ_A^(I,U) = g ∘ f, where δ_A ^(I,U) : A → A^I/U is the diagonal (elementary) L'-embedding.
Since the morphism j : M → M' is a pure embedding, by the Fact there exists an ultrapower M^I/U and a L-homomorphism g : M' → M^I/U, such that δ^M_(I,U) = g ∘ j, where δ_M^(I,U) : M → M^I/U is the diagonal (elementary) L-embedding.
Since we have a canonical isomorphism can : _J ∈ U M^J ≅→
M^I/U, applying the functor F_*, we obtain
F_*(M^I/U) ≅ F_*(_J ∈ U M^J) ≅_J ∈ U F^*(M^J) →_J ∈ U (F^*(M))^J ≅ (F_*(M))^I/U.
Keeping track, we obtain that the above morphism t : F_*(M^I/U) → (F_*(M))^I/U establishes a comparison between
F_*(δ^M_(I,U)) : F_*(M) → F_*(M^I/U) and δ^F_*(M)_(I,U)) : F_*(M) → F_*(M)^I/U
δ^F_*(M)_(I,U)) = t ∘ F_*(δ^M_(I,U)) .
Since F_*(δ^M_(I,U)) = F_*(g) ∘ F_*(j), combining the equations we obtain
δ^F_*(M)_(I,U)) = t ∘ F_*(g) ∘ F_*(j).
Applying again the Fact, we conclude that F_*(j) : F_*(M) → F_*(M') is a pure morphism of Igr's.
For each n ∈ℕ, the functor F_n : 𝒜→ p𝔽_2-mod preserves pure embeddings. More precisely, if M, M' ∈𝒜 and j : M → M' is a pure L-embedding, then F_n(j) : F_n(M) → F_n(M') is a pure morphism of pointed 𝔽_2-modules (described in the adequate single-sorted first-order language). In particular F_n(j) : F_n(M) → F_n(M') is an injective morphism of pointed 𝔽_2-modules.
Let M, M' ∈𝒜 and j : M → M' is a pure L-embedding. If MC(F_*(M')), then MC(F_*(M)).
This follows directly from the previous Corollary. Indeed, suppose that holds MC(F_*(M')). Since h'_n : F_n(M') → F_n+1(M') and F_n(j) : F_n(M) → F_n(M') are injective morphisms, then, by a diagram chase, h_n : F_n(M) → F_n+1(M) is an injective morphism too, thus holds MC(F_*(M)).
F_n M --h_n--> F_n+1 M
  |F_n(j)         |F_n+1(j)
  v               v
F_n(M') --h'_n--> F_n+1(M')
§ APPENDIX: SOME CATEGORICAL FACTS
For the reader's convenience, we provide here some categorical results concerning adjunctions. Most of them are based on <cit.>, but the reader could also consult <cit.>.
Let F:𝒜→ℬ be a functor and B an object of ℬ. A reflection of B along F is a pair (R_B,η_B) where
* R_B is an object of 𝒜 and η_B:B→ F(R_B) is a morphism of ℬ.
* If A∈𝒜 is another object and b:B→ F(A) is a morphism of ℬ, there exists a unique morphism a:R_B→ A in 𝒜 such that F(a)∘η_B=b.
Let F:𝒜→ℬ be a functor and B an object of ℬ. When the reflection of B along F exists, it is unique up to isomorphism.
A functor R:ℬ→ A is left adjoint to the functor F:𝒜→ℬ when there exists a natural transformation
η:1_ℬ⇒ F∘ R
such that for every B∈ℬ, (R(B),η_B) is a reflection of B along F.
[3.1.5 of <cit.>]
Consider two functors F:𝒜→ℬ and G:ℬ→𝒜. The following conditions are equivalent.
* G is left adjoint of F.
* There exist natural transformations η:1_ℬ⇒ F∘ G and ε: G∘ F⇒1_𝒜 such that
(F∗ε)∘(η∗ F)=1_F, (ε∗ G)∘(G∗η)=1_G.
* There exist bijections
θ_AB:𝒜(G(B),A)≅ℬ(B,F(A))
for every objects A and B, and those bijections are natural both in A and B.
* F is right adjoint of G.
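As an entirely standard illustration of condition (2) (a sketch added here; the helper names are hypothetical), take G to be the free-monoid functor on sets and F the underlying-set functor on monoids, with unit η_B(b)=[b] and counit ε_A given by folding with the multiplication of A. The Python snippet below checks the two triangle identities on sample data.

# Free monoid (lists) -| underlying set: G(B) = lists over B, F(A) = carrier of a monoid A.
# Unit  eta_B : B -> F(G(B)),  b |-> [b]
# Counit eps_A : G(F(A)) -> A, [a_1,...,a_k] |-> a_1 * ... * a_k

def eta(b):
    return [b]

def eps(words, unit, mult):
    out = unit
    for w in words:
        out = mult(out, w)
    return out

# Triangle identity (eps*G) o (G*eta) = 1_G: G(eta_B) sends a list to a list of singletons,
# and eps for the free monoid is concatenation starting from the empty list.
concat = lambda xs, ys: xs + ys
for xs in ([], [1], [1, 2, 3]):
    assert eps([eta(b) for b in xs], [], concat) == xs

# Triangle identity (F*eps) o (eta*F) = 1_F: folding the singleton [a] returns a.
mult, unit = (lambda x, y: x + y), 0          # the additive monoid of integers as a sample A
for a in (0, 5, -3):
    assert eps(eta(a), unit, mult) == a
print("triangle identities hold on the sample data")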
If the functor F:𝒜→ℬ has a left adjoint then F preserves all limits which turn out to exist in 𝒜.
Consider two functors F:𝒜→ℬ, G:ℬ→𝒜 with G left adjoint to F with η:1_ℬ⇒ F∘ G and ε: G∘ F⇒1_𝒜 the two corresponding natural transformations. The following conditions are equivalent.
* F is full and faithful.
* ε is an isomorphism.
Under these conditions, η∗ F and G∗η are isomorphisms as well.
Given a functor F:𝒜→ℬ, the following conditions are equivalent:
* F is full and faithful and has a full and faithful left adjoint G.
* F has a left adjoint G and the two canonical natural transformations of the adjunction η:1_ℬ⇒ F∘ G and ε: G∘ F⇒1_𝒜 are isomorphisms.
* There exists a functor G:ℬ→𝒜 and two arbitrary natural isomorphisms 1_ℬ≅ F∘ G, G∘ F≅1_𝒜.
* F is full and faithful and each object B∈ℬ is isomorphic to an object of the form F(A), for some A∈𝒜.
* The dual condition of (1).
* The dual condition of (2).
The categories 𝒜,ℬ are equivalent if there exists a functor F:𝒜→ℬ satisfying the conditions of Proposition <ref>.
|
http://arxiv.org/abs/2307.01535v1
|
20230704074156
|
Status of the muEDM experiment at PSI
|
[
"Kim Siang Khaw",
"Cheng Chen",
"Massimo Giovannozzi",
"Tianqi Hu",
"Meng Lv",
"Jun Kai Ng",
"Angela Papa",
"Philipp Schmidt-Wellenburg",
"Bastiano Vitali",
"Guan Ming Wong"
] |
hep-ex
|
[
"hep-ex",
"physics.ins-det"
] |
[email protected]
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, 201210 Shanghai, China
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, 201210 Shanghai, China
Beams Department, CERN, Esplanade des Particules 1, 1211 Geneva 23, Switzerland
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, 201210 Shanghai, China
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, 201210 Shanghai, China
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, 201210 Shanghai, China
Paul Scherrer Institut, Forschungsstrasse 111, 5232 Villigen, Switzerland
Istituto Nazionale di Fisica Nucleare, Sez. di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy
Dipartimento di Fisica dell’Università di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy
Paul Scherrer Institut, Forschungsstrasse 111, 5232 Villigen, Switzerland
Istituto Nazionale di Fisica Nucleare, Sez. di Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy
Dipartimento di Fisica dell’Università di Roma “La Sapienza”, Piazzale Aldo Moro 2, 00185 Roma, Italy
Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, 201210 Shanghai, China
on behalf of the PSI muEDM collaboration
Permanent electric dipole moments (EDMs) are excellent probes of physics beyond the Standard Model, especially on new sources of CP violation. The muon EDM has recently attracted significant attention due to discrepancies in the magnetic anomaly of the muon, as well as potential violations of lepton-flavor universality in B-meson decays. At the Paul Scherrer Institute in Switzerland, we have proposed a muon EDM search experiment employing the frozen-spin technique, where a radial electric field is exerted within a storage solenoid to cancel the muon's anomalous spin precession. Consequently, the EDM signal can be inferred from the upstream-downstream asymmetry of the decay positron count versus time. The experiment is planned to take place in two phases, anticipating an annual statistical sensitivity of 3×10^-21 e·cm for Phase I, and 6×10^-23 e·cm for Phase II. Going beyond 10^-21 e·cm will enable us to probe various Standard Model extensions.
Status of the muEDM experiment at PSI
Guan Ming Wong
August 1, 2023
=====================================
§ INTRODUCTION
The origin of the imbalance between baryon and anti-baryon in our universe (BAU) <cit.> remains one of the greatest mysteries in cosmology and particle physics. The size of Charge-Parity (CP) symmetry violation embedded in the Standard Model (SM) of particle physics is insufficient to explain the observed BAU <cit.>. The existence of a permanent electric dipole moment (EDM) in any elementary particle inherently violates both the P and T symmetries <cit.>. Assuming CPT symmetry, the latter implies a violation of the CP symmetry. If an EDM measurement exceeds the SM prediction, it could suggest physics beyond the Standard Model (BSM) <cit.>, thus refining our understanding of the universe's matter-antimatter asymmetry.
Recently, muon EDM has drawn considerable attention due to discrepancies between the magnetic anomaly of the muon a_μ <cit.> and the electron a_e <cit.>, along with suggestions of lepton-flavor universality violation in B-meson decay <cit.>. Of all elementary particles, only the muon permits a direct EDM measurement. The present muon EDM limit, set by the BNL Muon g-2 Collaboration, is d_μ < 1.8 × 10^-19 ,e·cm at a 95% confidence level <cit.> while the SM prediction is 1.4×10^-38 e·cm <cit.>, way beyond the reach of current technology. Consequently, the muon EDM remains one of the SM's least tested areas, with any detected signal strongly implying BSM physics.
The current muon EDM limit is derived from a "parasitic" measurement within the Muon g-2 experiment. In this context, the existence of the muon EDM induces a tilt δ=tan^-1(ηβ/2a_μ) in the g-2 precession plane, where η is a dimensionless parameter related to muon EDM and |d⃗_μ| ≈η× 4.7 × 10^-14. The current BNL limit implies a plane tilt δ around a milliradian, corresponding to an average vertical angle oscillation for emitted positrons of several tens of µrad. The sensitivity projected for the FNAL and J-PARC experiments is around the order of 10^-21 e·cm <cit.>.
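As a quick consistency check of the quoted milliradian tilt (a sketch with illustrative values β ≈ 1 and a_μ ≈ 1.166×10^-3, which are assumptions rather than numbers taken from this text):

import math

d_mu = 1.8e-19            # current BNL limit on |d_mu| in e*cm
eta  = d_mu / 4.7e-14     # |d_mu| ~ eta * 4.7e-14 e*cm  =>  eta ~ 3.8e-6
a_mu = 1.166e-3           # muon magnetic anomaly (illustrative value)
beta = 1.0                # near-ultra-relativistic muons (assumption)

delta = math.atan(eta * beta / (2 * a_mu))
print(f"precession-plane tilt delta ~ {delta*1e3:.2f} mrad")   # ~1.6 mrad, of order a milliradian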
§ A FROZEN-SPIN-BASED MUON EDM SEARCH AT PSI
The recently proposed frozen-spin technique <cit.> enhances sensitivity in an EDM search by cancelling a muon's anomalous spin precession in a storage magnet using a radial electric field E_f = a_μBcβγ^2, where B signifies the magnetic field confining the muon. This technique effectively “freezes” the muon's spin in relation to its momentum. If the muon has a permanent EDM, it would induce a spin rotation out of the orbital plane, leading to an observable upstream-downstream asymmetry that increases over time. By strategically placing detectors in the upstream and downstream direction of the muon storage region, an up-down asymmetry α in positron count can be observed due to increasing polarization along the direction perpendicular to the muon orbital plane, as the positron is preferentially emitted in the direction of the muon's spin. The sensitivity of the EDM measurement can then be calculated using:
σ(d_μ) = ħγ^2 a_μ / (2 P E_f √(N) γτ_μ α)
where P is the spin polarization, N is the number of detected positrons, and τ_μ is the muon lifetime.
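A minimal numerical sketch of the frozen-spin condition is given below (the muon mass, the value of a_μ, and the assumption of a 3 T field for Phase II are illustrative inputs, not specifications from this paper); with these inputs the required radial fields come out close to the 3 kV/cm and 20 kV/cm quoted below for the two phases.

import math

def frozen_spin_field(p_MeV, B_T, a_mu=1.166e-3, m_mu_MeV=105.66):
    """Radial E-field (V/m) cancelling the anomalous precession: E_f = a_mu * B * c * beta * gamma^2."""
    c = 2.998e8
    beta_gamma = p_MeV / m_mu_MeV
    gamma = math.hypot(1.0, beta_gamma)        # gamma = sqrt(1 + (beta*gamma)^2)
    beta = beta_gamma / gamma
    return a_mu * B_T * c * beta * gamma**2

# Phase I: ~28 MeV/c surface muons in the 3 T PSC solenoid -> roughly 3 kV/cm.
print(f"Phase I : {frozen_spin_field(28.0, 3.0)/1e5:.1f} kV/cm")
# Phase II: 125 MeV/c muons; assuming (for illustration only) the same 3 T field -> roughly 20 kV/cm.
print(f"Phase II: {frozen_spin_field(125.0, 3.0)/1e5:.1f} kV/cm")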
The proposed muEDM experiment at PSI is planned in two stages. Phase-I will employ an existing PSC solenoid with a field strength of 3 T to demonstrate the essential techniques required for a muon EDM search using the frozen-spin method. As illustrated in Fig. <ref>, a surface muon beam with approximately 30 MeV/c momentum and a polarization greater than 95% will be directed into a collimation tube within a superconducting magnetic shield, creating a field-free condition. The use of correction coils within the solenoid will minimize the field gradient between the injection region at the collimation tube's exit and the storage region at the solenoid's center, thereby expanding the acceptable phase space. A coil situated at the solenoid's center will generate a weakly focusing field essential for muon storage during the EDM measurement.
An entrance scintillator, operating in anti-coincidence with a set of scintillators, will produce an entrance signal for muons within the acceptance phase space. This signal will trigger a pulsed magnetic field at the solenoid's center, converting the remaining longitudinal momentum of the incoming muons into the transverse direction. The muon will then be confined to a stable orbit of approximately 30 mm radius within the weakly focusing magnetic field. To establish the frozen-spin condition, a radial electric field of 3 kV/cm will be applied in the storage orbit region between two co-axial electrodes.
The experiment will employ a combination of silicon strip and scintillating fiber ribbon detectors to track the decay positron, enabling the measurement of the muon's anomalous precession frequency, ω_a. This measurement will serve as a sensitive probe of the magnetic field. Furthermore, by plotting ω_a against the applied electric field and interpolating it to ω_a(E)=0, we can adjust the electric field to achieve the frozen-spin condition. This procedure also enables us to measure the up-down asymmetry.
In Phase II of the experiment, we plan to employ a muon beam with a higher momentum of 125 MeV/c and a larger radial electric field of 20 kV/cm. Given that the anticipated muon orbit will be significantly larger than in Phase I, we will design a dedicated solenoid magnet specifically for this phase. A comparison of the conditions in Phase I and Phase II can be found in Table <ref>.
§ R&D PROGRESS AT PSI
From 2019 to 2022, we conducted four comprehensive test-beam measurements at PSI. In 2019, our primary focus was characterizing the phase space of potential muon beamlines. In 2020, we primarily studied the effects of multiple Coulomb scattering of low-momentum positrons. We have reported the details from these test-beam measurements in <cit.>. Currently, we are analyzing the data from the 2021 test beam, where we characterized potential electrode materials with positrons and muons. In 2022, we tested a prototype of the muon tagger/tracker, consisting of a time projection chamber (TPC), and then tested muon entrance detector prototypes using a 27.5 MeV/c muon beam from the πE1 beamline.
In Fig. <ref>, a muon-entrance detector based on a thin plastic scintillator (gate detector) and four GNKD <cit.> plastic scintillating tiles (telescope detectors), each read out by NDL <cit.> silicon photomultipliers (SiPMs) NDL15-6060S, was developed. The SiPMs were coupled to the scintillators using BC-603 optical grease. The scintillators and readout electronics were held together by a 3D-printed holder and a rack made of resin material. This prototype was recently tested at the πE1 beamline at PSI. Signal digitization was done using WaveDAQ <cit.> based on DRS4 <cit.>. A total of 0.7 million muon events were collected during the test beam, and data analysis is ongoing to characterize the efficiency of the muon-entrance detector.
§ CONCLUSIONS AND OUTLOOK
A sensitive search of the muon EDM presents an exciting opportunity to explore BSM physics within the muon sector. The innovative frozen-spin technique has the potential to surpass the sensitivity of the existing muon g-2 storage ring approach. By leveraging the current beamlines at the Paul Scherrer Institute (PSI), it is possible to reach an unprecedented EDM sensitivity of 6×10^-23 e·cm. Currently, the development and testing of the critical components for the experimental setup are in progress, setting the stage for the Phase-I precursor experiment at PSI.
§ ACKNOWLEDGEMENTS
We would like to thank our colleagues in the muEDM collaboration for useful discussions regarding the manuscript. KSK was funded by the National Natural Science Foundation of China under Grant No. 12075151 and 12050410233.
99
WMAP:2010qai
E. Komatsu et al. [WMAP],
Astrophys. J. Suppl. 192 (2011), 18
Sakharov:1967dj
A. D. Sakharov,
Pisma Zh. Eksp. Teor. Fiz. 5 (1967), 32-35
Chupp:2017rkp
T. Chupp, P. Fierlinger, M. Ramsey-Musolf and J. Singh,
Rev. Mod. Phys. 91 (2019) no.1, 015001
Nakai:2022vgp
Y. Nakai, R. Sato and Y. Shigekami,
Phys. Lett. B 831 (2022), 137194
Dermisek:2022aec
R. Dermisek, K. Hermanek, N. McGinnis and S. Yoon,
Phys. Rev. Lett. 129 (2022) no.22, 221801
Crivellin:2018qmi
A. Crivellin, M. Hoferichter and P. Schmidt-Wellenburg,
Phys. Rev. D 98 (2018) no.11, 113002
Aoyama:2020ynm
T. Aoyama et al.
Phys. Rept. 887 (2020), 1-166
Muong-2:2021ojo
B. Abi et al. [Muon g-2],
Phys. Rev. Lett. 126 (2021) no.14, 141801
Keshavarzi:2021eqa
A. Keshavarzi, K. S. Khaw and T. Yoshioka,
Nucl. Phys. B 975 (2022), 115675
Parker:2018vye
R. H. Parker, C. Yu, W. Zhong, B. Estey and H. Müller,
Science 360 (2018), 191
Morel:2020dww
L. Morel, Z. Yao, P. Cladé and S. Guellati-Khélifa,
Nature 588 (2020) no.7836, 61-65
Fan:2022eto
X. Fan, T. G. Myers, B. A. D. Sukra and G. Gabrielse,
[arXiv:2209.13084 [physics.atom-ph]].
Crivellin:2021sff
A. Crivellin and M. Hoferichter,
Science 374 (2021) no.6571, 1051
Muong-2:2008ebm
G. W. Bennett et al. [Muon (g-2)],
Phys. Rev. D 80 (2009), 052008
Pospelov:2013sca
M. Pospelov and A. Ritz,
Phys. Rev. D 89 (2014) no.5, 056006
Yamaguchi:2020eub
Y. Yamaguchi and N. Yamanaka,
Phys. Rev. Lett. 125 (2020), 241802
Chislett:2016jau
R. Chislett [Muon g-2],
EPJ Web Conf. 118 (2016), 01005
Abe:2019thb
M. Abe et al.
PTEP 2019 (2019) no.5, 053C02
Farley:2003wt
F. J. M. Farley et al.,
Phys. Rev. Lett. 93 (2004), 052001
Adelmann:2010zz
A. Adelmann, K. Kirch, C. J. G. Onderwater and T. Schietinger,
J. Phys. G 37 (2010), 085001
Adelmann:2021udj
A. Adelmann et al.
[arXiv:2102.08838 [hep-ex]].
muonEDMinitiative:2022fmk
K. S. Khaw et al. [muon EDM initiative],
PoS NuFact2021 (2022), 136
Sakurai:2022tbk
M. Sakurai et al.
JPS Conf. Proc. 37 (2022), 020604
GNKD
Beijing Gaoneng Kedi Technology Co. Ltd., <http://www.gaonengkedi.com/pro.asp?classID1=28>, accessed Jun 7, 2023
NDL
Novel Device Laboratory, <http://www.ndl-sipm.net/indexeng.html>, accessed Jun 7, 2023
Francesconi:2018fkg
M. Francesconi et al.
[arXiv:1806.09218 [physics.ins-det]].
Ritt:2010zz
S. Ritt, R. Dinapoli and U. Hartmann,
Nucl. Instrum. Meth. A 623 (2010), 486-488
doi:10.1016/j.nima.2010.03.045
|
http://arxiv.org/abs/2307.03343v1
|
20230707011741
|
Unified Modeling and Rate Coverage Analysis for Satellite-Terrestrial Integrated Networks: Coverage Extension or Data Offloading?
|
[
"Jeonghun Park",
"Jinseok Choi",
"Namyoon Lee",
"François Baccelli"
] |
cs.IT
|
[
"cs.IT",
"cs.SY",
"eess.SP",
"eess.SY",
"math.IT"
] |
Unified Modeling and Rate Coverage Analysis for Satellite-Terrestrial Integrated Networks:
Coverage Extension or Data Offloading?
Jeonghun Park, Jinseok Choi, Namyoon Lee, and François Baccelli
J. Park is with School of Electrical and Electronic Engineering, Yonsei University, South Korea (e-mail:). J. Choi is with School of Electrical Engineering, KAIST, South Korea (email:), N. Lee is with Department of Electrical Engineering, Korea University, South Korea (e-mail:). F. Baccelli is with INRIA-ENS, France (email: )
August 1, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================
With the growing interest in satellite networks, satellite-terrestrial integrated networks (STINs) have gained significant attention because of their potential benefits. However, due to the lack of a tractable network model for the STIN architecture, analytical studies allowing one to investigate the performance of such networks are not yet available. In this work, we propose a unified network model that jointly captures satellite and terrestrial networks into one analytical framework. Our key idea is based on Poisson point processes distributed on concentric spheres, assigning a random height to each point as a mark. This allows one to consider each point as a source of desired signal or a source of interference while ensuring visibility to the typical user.
Thanks to this model, we derive the probability of coverage of STINs as a function of major system parameters, chiefly path-loss exponent, satellites and terrestrial base stations' height distributions and density, transmit power and biasing factors. Leveraging the analysis, we concretely explore two benefits that STINs provide: i) coverage extension in remote rural areas and ii) data offloading in dense urban areas.
Satellite-terrestrial integrated network, stochastic geometry, rate coverage probability.
§ INTRODUCTION
Recent advances in low Earth orbit (LEO) satellite technologies have brought breakthrough not only in the space industry, but also in wireless communications.
With the decreasing cost of launching LEO satellites, tens of thousands of LEO satellites are planned to be deployed (e.g., SpaceX Starlink <cit.>), which enables the structuration of ultra-dense wireless networks above the Earth.
One promising way to leverage satellite networks is to integrate them with the existing terrestrial network, creating a mutually complementary system that operates seamlessly <cit.>. This motivates the advent of satellite-terrestrial integrated networks (STINs).
In the literature, two main benefits of STINs have mainly been discussed:
* Coverage extension: Unlike terrestrial base stations (BSs) whose deployment can be limited in remote areas, satellites provide a relatively uniform density across all locations. For this reason, the coverage region can be extended thanks to STINs <cit.>.
* Data offloading:
In high traffic cases, pushing data traffic to the satellite network improves the rate coverage performance by balancing the loads <cit.>.
Notwithstanding these potential advantages, there has been a dearth of analytical studies investigating the rate coverage performance of STINs. A main hindrance is the lack of a unified network model that jointly captures satellite and terrestrial networks simultaneously. In this work, we address this challenge by proposing such a unified model.
Leveraging this model, we analyze the rate coverage performance of STINs, which allows one to foresee the benefits of these new architectures.
§.§ Related Work
The use of stochastic geometry has played a crucial role in modeling and analyzing wireless networks <cit.>. In particular, much of the prior work has focused on the investigation of terrestrial cellular networks. This analysis involves the distribution of a random point process on the 2D plane to model the spatial locations of the BSs. With this methodology, a wide range of cellular network types, including HetNets <cit.>, multi-antenna cellular networks <cit.>, millimeter wave (mmWave) cellular networks <cit.>, and UAVs networks <cit.> were actively explored.
Recently, with the growing interest in satellite communication systems, satellite network analysis based on tools of stochastic geometry gained great momentum <cit.>.
For instance, in <cit.>, the coverage probability of satellite networks was derived by modeling the satellite locations as a binomial point process (BPP) distributed on a certain sphere. This work was improved in <cit.> by incorporating a non-homogeneous distribution of satellites. In <cit.>, the technique of <cit.> was employed with different fading scenarios.
Despite the popularity of the BPP based analysis, this model inevitably entails complicated expressions, which limits tractability. Addressing this, in <cit.>, an analytical approach exploiting conditional Poisson point process (PPP) was proposed, wherein the PPP was distributed on a certain sphere.
The coverage analysis was then conducted by conditioning on the fact that at least 1 satellite is visible to the typical user. This makes the analytical expressions much more tractable than those of the BPP. Additionally, it was also shown that the analysis of <cit.> reflects well the actual coverage trend drawn by using realistic Starlink constellation sets.
Compared to the analysis of terrestrial networks, one crucial difference in the modeling of satellite networks is the use of a spherical point process, wherein a random point process (either PPP or BPP) is distributed on a certain sphere to model the satellite locations. The disparity between the modeling methodology used for the satellite and terrestrial networks (i.e., a planar point process vs. a spherical point process) is the main challenge to unify them into a single model for analyzing the STIN.
There exists a few works on the analysis of the cooperation between non-terrestrial and terrestrial networks <cit.>. However, their models are oversimplified.
For example, <cit.> considered a spherical point process for modeling satellite networks and a planar point process for modeling terrestrial networks separately, which cannot capture the full geometries of STINs. In <cit.>, a single satellite was considered, which is not adequate when dealing with dense satellite networks. Consequently, there is a gap in the literature regarding the rate coverage analysis of the STIN with a unified network modeling, and this gap is the primary motivation of our work.
§.§ Contributions
In this paper, we consider the downlink of a STIN, in which a user can be served either from a satellite or a terrestrial BS sharing the same spectrum (i.e., the scenario is referred as an open network). The user selects its association based on the averaged reference signal power including bias factors, where the latter can be used to control the load as in heterogeneous networks (HetNets) <cit.>. Further, the user experiences the interference coming from both the satellites and the terrestrial BSs. In this setup, our aim is to model and analyze the rate coverage performance of the considered STIN, and to provide STIN system design insights leveraging the analysis. We summarize our contributions as follows.
* Unified modeling: For modeling the STIN, it is of central importance to jointly capture the key characteristics of the satellite and the terrestrial networks. Since this is not feasible with the existing network models, we propose a novel network model. In our model, a homogeneous PPP is first distributed on two concentric spheres, where the inner sphere represents the Earth and the outer sphere represents the satellite orbital sphere. Then, a random height is assigned to each point, so that each satellite and terrestrial BS has a different altitude. This is a key point of distinction from the previous models <cit.> where all points are at the same altitude. The random height feature has two important motivations. First, this makes our model more realistic, as it can reflect a realistic satellite network where satellites have different altitudes <cit.>.
Second, the random height is a key to unify the satellite and the terrestrial networks into a single analytical framework. Without positive random height, the terrestrial network cannot be modeled by using a spherical point process distributed on the Earth since there is no mathematical visibility between a user and any other terrestrial BS. We will explain this in more detail in Remark <ref>.
* Rate coverage analysis: Based on the proposed model, we derive an expression for the coverage probability as a function of the system parameters, chiefly the densities, path-loss exponents, height geometries (maximum/minimum altitudes and height distributions), and bias factors of the satellites and the terrestrial BSs. To this end, we first obtain the visibility probability of the satellite and the terrestrial networks, that characterizes the probability that a user sees at least 1 satellite/terrestrial BS that it can communicate with. We note that this is not straightforward from the existing result <cit.> since each satellite/terrestrial BS has a random altitude in our model. Using this, we compute the conditional nearest satellite/terrestrial BS distance distribution and also the probability that a user is associated with the satellite/terrestrial network. Leveraging these, we derive the rate coverage probability of the STIN under the Nakagami-m fading assumption and discuss upper/lower bounds on the rate coverage.
* Analysis of STIN benefits and design insights:
Using our analysis, we concretely explore the two benefits of the STIN architecture: coverage extension in remote rural areas and data offloading in dense urban areas.
This leads to some valuable system design insights: 1) In remote rural areas where the terrestrial BSs' deployment density is very low, the STIN offers huge coverage enhancement. However, due to satellite interference, it is crucial to control the satellite density properly. 2) In dense urban areas where the user density is high, in the STIN, data traffic can be pushed to the satellite from the terrestrial BSs, provided that satellite density and the bias are suitably selected.
The remaining part of the paper is structured as follows. In Section II, we present the system model used throughout this paper, including our unified modeling for the STIN. In Section III, we derive key mathematical lemmas required to obtain the rate coverage probability, which is computed in Section IV. In Section V, we investigate the benefits of the STIN. Section VI concludes the paper.
§ SYSTEM MODEL
§.§ Network Model
Poisson point process on a sphere: To model the spatial locations of satellite and terrestrial BS, we consider PPPs distributed on a sphere (SPPP) <cit.>.
For better understanding, we first introduce a generic SPPP model with random heights, and then present the specific model for the satellite and terrestrial networks, respectively.
Consider a sphere defined in ℝ^3 whose center is the origin 0 and whose radius is fixed as R. This sphere is denoted as 𝕊_R^2={ x∈ℝ^3: ‖x‖_2=R}.
Let Φ be an isotropic PPP on 𝕊_R^2.
We denote the points of this point process as { d_i, 1 ≤ i ≤ N, ‖d_i‖_2 = R} and denote the density of Φ as λ. The number of points on the sphere, N, is a random variable drawn from the Poisson distribution with mean λ |𝕊_R^2| = 4 π R^2 λ.
The number of points of Φ located in a particular set A⊂𝕊_R^2 is denoted as Φ(A).
Random heights:
Based on the PPP Φ, we construct a new point process by displacing each point of Φ from the sphere by a certain height. Denoting by h_d_i the height of point d_i, we define the new point as d̂_i = d_i ·(R + h_d_i)/R, so that ‖d̂_i‖_2 = R + h_d_i.
The heights of the points are assume to be independent and identically distributed (IID) random variables drawn from a certain distribution f_H on ℝ^+. Correspondingly, we let F_H be the cumulative distribution function (CDF) of the heights.
The points of this point process are hence Φ̂= {d̂_i, 1 ≤ i ≤ N, ‖d̂_i‖_2 = R+ h_d_i, h_d_i∼ f_H}.
Note that Φ̂ can be seen as a marked PPP generated from Φ, wherein the mark of the point on d_i is the height h_d_i.
Throughout the paper, we assume a positive height, h_d_i≥ 0, ∀d_i ∈Φ.
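For illustration, a minimal Python sketch of this construction is given below (not part of the analysis; all function and parameter names are our own). It samples one realization of Φ̂: a homogeneous SPPP of density λ on a sphere of radius R, with each point displaced radially by an IID nonnegative height.

```python
import numpy as np

def sample_sppp_with_heights(lam, R, height_sampler, rng=None):
    """One realization of a marked SPPP: points uniform on a sphere of radius R,
    each displaced radially by an IID nonnegative height."""
    rng = np.random.default_rng() if rng is None else rng
    # Number of points is Poisson with mean lam * |S_R^2| = 4*pi*R^2*lam.
    n = rng.poisson(lam * 4.0 * np.pi * R**2)
    # Uniform directions on the unit sphere (normalized Gaussian vectors).
    u = rng.normal(size=(n, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    h = height_sampler(n, rng)            # IID heights h_i >= 0
    return u * (R + h)[:, None]           # displaced points, norm R + h_i

# Example (illustrative numbers): heights 0-100 km above a 500 km orbital sphere.
pts = sample_sppp_with_heights(
    lam=1e-12, R=(6371 + 500) * 1e3,
    height_sampler=lambda n, rng: rng.uniform(0.0, 100e3, size=n))
```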
Satellite network:
We now describe the satellite network model. Consider two spheres sharing the same origin. We define the radii of these spheres as R_ E and R_ S, respectively. The sphere with radius R_ E represents the Earth and the sphere with radius R_ S represents the satellite orbital sphere. We also define h_ S = R_ S - R_ E, where h_ S is the standard satellite altitude.
We now distribute a SPPP on 𝕊_R_ S^2, denoted as Φ_ S = {d_i^ S, 1 ≤ i ≤ N_ S, ‖d_i^ S‖_2 = R_ S} with density λ_ S. Upon this, we displace point d_i^ S from the satellite orbital sphere by a random height h_d_i^ S, which builds Φ̂_ S = {d̂_i^ S, 1 ≤ i ≤ N_ S, ‖d̂_i^ S‖_2 = R_ S + h_d_i^ S, h_d_i^ S∼ f_H_ S}.
The support of f_H_ S will be denoted as H_ S. In this modeling, Φ̂_ S represents a snapshot of the satellite spatial locations.
If h_d_i^ S = 0 for all i, then each satellite is located at the same altitude of h_ S, which corresponds to the network model in <cit.>.
We depict the conventional satellite network model without random heights and the considered satellite network model with random heights in Fig. <ref>.
Terrestrial network: Similar to the satellite network, we consider a SPPP with random heights to model the terrestrial network. We first distribute a SPPP with density λ_ T on the sphere with radius R_ E and denote this point process as Φ_ T. Subsequently, incorporating random heights h_d_i^ T∼ f_H_ T into Φ_ T, we have Φ̂_ T = {d̂_i^ T, 1 ≤ i ≤ N_ T, ‖d̂_i^ T‖_2 = R_ E + h_d_i^ T, h_d_i^ T∼ f_H_ T}. A snapshot of the spatial locations of terrestrial BSs is modeled by Φ̂_ T. The support of f_H_ T is denoted by H_ T.
If h_d_i^ T = 0 for all i, all terrestrial BSs are located on the surface of the Earth.
We also assume h_d_i^ T≥ 0.
Typical user: We also model the users' locations as a homogeneous SPPP Φ_ U = {u_i, 1 ≤ i ≤ N_ U, ‖u_i‖_2 = R_ E} with density λ_ U.
Per Slivnyak's theorem <cit.>, we focus on the typical user u_1 located on (0, 0, R_ E) in Cartesian coordinates. We point out that this does not affect the statistical distribution of Φ̂_ S and Φ̂_ T. Note that the considered user model corresponds to a uniform traffic demand scenario.
More realistic traffic demands, e.g., a heavy traffic demand in urban environments and a light traffic demand in rural areas, will be discussed later.
Typical spherical cap: Since the satellites and the terrestrial BSs are distributed on certain spheres, some points are not visible from the typical user. If so, they are neither counted as a potential source of desired signal nor as contributing to the interference. For this reason, characterizing the visibility is crucial in the coverage analysis. To this end, we use the concept of typical spherical cap.
At first, we focus on the terrestrial network. Consider a point d∈Φ_ T is on sphere 𝕊_R_ E^2 and its image point d̂∈Φ̂_ T with height h, so that d̂ is on sphere 𝕊_R_ E + h^2.
Recalling that the typical user is located at (0,0,R_ E), the outer typical spherical cap of radius r, A_(R_ E,h)^ o(r), is defined as the set of points in 𝕊_R_ E + h^2 whose distance to the typical user is no larger than r:
A_(R_ E,h)^ o(r) = {z∈𝕊_R_ E + h^2: ‖z - u_1‖_2 ≤ r }.
We provide an illustration for A_(R_ E,h)^ o(r) in Fig. <ref>-(a).
It is worth mentioning that, given h, we will focus on r_min≤ r ≤ r_max, with
r_min = h, r_max = √(h(h+2R_ E)).
If r = r_max, then A_(R_ E,h)^ o(r_max) is the portion of sphere 𝕊_R_ E + h^2 cut off by the plane tangent to the sphere 𝕊_R_ E^2 (the Earth) at (0,0,R_ E).
The inner typical spherical cap A_(R_ E,h)^ i(r) is the spherical cap of 𝕊_R_ E^2 that has the same solid angle as A_(R_ E,h)^ o(r), namely,
A_(R_ E,h)^ i(r) = {z∈𝕊_R_ E^2: (R_ E + h)/R_ E·‖z - u_1‖_2 ≤ r }.
We also provide an illustration of the inner typical spherical cap A_(R_ E,h)^ i(r) in Fig. <ref>-(a). In Fig. <ref>, we observe that, seen from the origin, the solid angles Ω
of A_(R_ E,h)^ o(r) and A_(R_ E,h)^ i(r) are the same.
The inner typical spherical cap A_(R_ E,h)^ i(r) is the region of the points d∈Φ_ T which are mapped to the points d̂∈A_(R_ E,h)^ o(r).
Now we investigate the areas of the typical spherical caps.
In the terrestrial network, the area of the outer typical spherical cap A_(R_ E,h)^ o(r) is obtained as
|A_(R_ E,h)^ o(r)| = 2π (h-h_r) (R_ E + h),
where
h_r = (R_ E + h)^2 - R_ E^2 - r^2/2R_ E,
by using Archimedes' Hat-Box Theorem <cit.>.
By the definition of the solid angle, we have
Ω = |A_(R_ E,h)^ o(r)|/(R_ E + h)^2 = 2π( h-(R_ E + h)^2 - R_ E^2 - r^2/2R_ E) /R_ E + h.
Then the area of the inner typical spherical cap A_(R_ E,h)^ i(r) is given by
|A_(R_ E,h)^ i(r)| = Ω R_ E^2 = 2π R_ E^2 ( h-(R_ E + h)^2 - R_ E^2 - r^2/2R_ E) /R_ E + h.
When r = r_max, the area of the outer/inner typical spherical caps are simplified to
|A_(R_ E,h)^ o(r_max)| = 2π h (R_ E + h), |A_(R_ E,h)^ i(r_max)| = 2π h/R_ E + h R_ E^2.
When r = r_min, the areas of the outer/inner typical spherical caps are 0.
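As a quick numerical check of these area expressions (our own illustration; the variable names are assumptions), the sketch below evaluates |A_(R_ E,h)^ o(r)| and |A_(R_ E,h)^ i(r)| and verifies the r = r_max special case.

```python
import numpy as np

def terrestrial_cap_areas(R_E, h, r):
    """Outer/inner typical spherical cap areas for a BS at height h,
    via Archimedes' Hat-Box theorem (valid for h <= r <= sqrt(h*(h+2*R_E)))."""
    h_r = ((R_E + h)**2 - R_E**2 - r**2) / (2.0 * R_E)
    outer = 2.0 * np.pi * (h - h_r) * (R_E + h)
    omega = outer / (R_E + h)**2          # common solid angle of both caps
    return outer, omega * R_E**2

R_E, h = 6371e3, 30.0                     # Earth radius and BS height in meters
r_max = np.sqrt(h * (h + 2.0 * R_E))
outer, inner = terrestrial_cap_areas(R_E, h, r_max)
assert np.isclose(outer, 2 * np.pi * h * (R_E + h))           # 2*pi*h*(R_E+h)
assert np.isclose(inner, 2 * np.pi * h * R_E**2 / (R_E + h))  # 2*pi*h*R_E^2/(R_E+h)
```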
Next, similar to the terrestrial network case, we define the outer and the inner typical spherical caps corresponding to the satellite network as A_(R_ S,h)^ o(r) and A_(R_ S,h)^ i(r) by following (<ref>) and (<ref>).
Specifically, we have
A_(R_ S,h)^ o(r) = {z∈𝕊_R_ S + h^2: ‖z - u_1‖_2 ≤ r },
where r_min≤ r ≤ r_max with
r_min = R_ S + h - R_ E, r_max = √((R_ S + h)^2 - R_ E^2).
The area of the outer typical spherical cap is calculated as
|A_(R_ S,h)^ o(r)| = 2π (h_ S +h- h_r) (R_ S + h ), h_r = (R_ S + h)^2 - R_ E^2 - r^2 /2R_ E .
Accordingly, when r = r_max, then
|A_(R_ S,h)^ o(r_max)| = 2π (h_ S +h) (R_ S + h ).
The inner typical spherical cap is defined as
A_(R_ S,h)^ i(r) = {z∈𝕊_R_ S^2: (R_ S + h)/R_ S·‖z - u_1‖_2 ≤ r },
where its area is
|A_(R_ S,h)^ i(r)| = Ω R_ S^2 = 2π R_ S^2 (h_ S +h- (R_ S + h)^2 - R_ E^2 - r^2 /2R_ E) /R_ S + h.
When r = r_min, the areas of the outer/inner typical spherical caps are 0. We depict the outer/inner typical spherical caps of the satellite network in Fig. <ref>-(b).
With the defined typical spherical caps, we are able to identify the visibility of the terrestrial BSs or the satellites. For example, in the terrestrial network, the terrestrial BS with height h (located at d̂_i^ T) is visible to the typical user if and only if d̂_i^ T∈A_(R_ E, h)^ o(r_max) (or equivalently, d_i^ T∈A_(R_ E, h)^ i(r_max)).
This is because, if d̂_i^ T∉A_(R_ E, h)^ o(r_max), then d̂_i^ T is located below the horizon, and the visibility is blocked by the Earth. The satellite visibility is also identified in an equivalent manner.
To characterize the visibility, we consider the set of terrestrial BSs whose heights are in [h, h+Δ h] and denote such a point process as Φ̂_ T^h⊂Φ̂_ T and Φ^h_ T⊂Φ_ T. The event that the typical user can observe at least 1 terrestrial BS is equivalent to the event that some Φ^h_ T includes at least 1 visible terrestrial BS, namely,
V_ T = ⋃_h ∈H_ T{Φ^h_ T(A_(R_ E, h)^ i(r_max)) ≥ 1 }.
Similar to this, denoting a point process of the satellites with heights included in [h, h+Δ h]
as Φ̂_ S^h⊂Φ̂_ S and Φ^h_ S⊂Φ_ S, the event that the typical user can observe at least 1 satellite is
V_ S = ⋃_h ∈H_ S{Φ^h_ S(A_(R_ S, h)^ i(r_max)) ≥ 1 }.
For simplicity, we denote the average number of visible satellites and of visible terrestrial BSs as N̅_ S and N̅_ T, respectively.
One key difference between the visibility in the proposed network model and that in the conventional network model <cit.> is that, in the proposed network model, the visibility is jointly determined not only by the points' spatial locations, but also by their heights. Namely, even if two satellites are located at the same position in Φ_ S, it is possible that one satellite is visible and the other one is not because of their different altitudes. In general, the higher the altitude, the more likely the point is visible from the typical user. This is the main reason to define multiple typical spherical caps depending on h.
§.§ Channel Model
The path-loss between the satellite located at d̂_i^ S and the typical user located at u_1 is defined as ‖d̂_i^ S - u_1‖^-β_ S, where β_ S is the path-loss exponent for the satellite network. Likewise, the path-loss between the terrestrial BS located at d̂_i^ T and the typical user is defined as ‖d̂_i^ T - u_1‖^-β_ T, where β_ T is the path-loss exponent for the terrestrial network.
The wireless propagation environments of the satellite network and the terrestrial network are reflected into β_ S and β_ T. For instance, if the propagation between a satellite and a user is line-of-sight (LoS), then β_ S = 2. If we consider an urban communication scenario with rich scattering for the terrestrial network, β_ T = 4.
As the small-scale fading model, we use the Nakagami-m distribution as it is capable of incorporating various fading scenarios such as the Rayleigh fading or the Rician fading. In this model, √(X_i^ S), which is the fading coefficient between satellite d̂_i^ S and the typical user, is distributed according to the probability density function (PDF) given by <cit.>:
f_√(X_i^ S) (x) = 2m_ S^m_ S/Γ(m_ S)x^2m_ S-1exp(-m_ Sx^2),
where x≥ 0 and Γ(·) is the Gamma function.
We note that the satellite network scenario can be reduced to the Rayleigh fading (m_ S = 1) or the Rician-K fading (m_ S=(K+1)^2/2K+1) by tuning m_ S.
Similar to this, in the terrestrial network, √(X_i^ T), which is the fading coefficient between terrestrial BS d̂_i^ T and the typical user, also follows the Nakagami-m distribution with parameter m_ T.
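For simulation, note that if √(X) follows the Nakagami-m PDF above (with unit average power), the fading power X is Gamma-distributed with shape m and scale 1/m. A short sketch of drawing fading powers (ours; the parameter values are placeholders) is:

```python
import numpy as np

def nakagami_power(m, size, rng):
    """Fading power X with sqrt(X) ~ Nakagami-m and E[X] = 1: X ~ Gamma(m, 1/m)."""
    return rng.gamma(shape=m, scale=1.0 / m, size=size)

rng = np.random.default_rng(0)
x_sat = nakagami_power(m=3.0, size=10_000, rng=rng)   # near-LoS satellite links
x_ter = nakagami_power(m=1.0, size=10_000, rng=rng)   # Rayleigh terrestrial links
print(x_sat.mean(), x_ter.mean())                     # both close to 1
```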
We now explain the directional beamforming gain model.
We adopt the sectored antenna model, wherein the directional beamforming gains are approximated as a rectangular function <cit.>. In such an approximation, a user gets the main-lobe gain when it is included within the main-lobe, otherwise the user has the side-lobe gain.
The sectored antenna model has been widely used in stochastic geometry based analysis because it is not only analytically tractable, but it is also suitable to reflect the primary features of the directional beamforming.
In this model, the beamforming gains in the satellite network are expressed as
G_ S = G_ S^tx G^rx c^2/(4π f_c)^2, G̅_ S = G̅_ S^tx G̅^rx c^2/(4π f_c)^2,
where G_ S^tx (or G̅_ S^tx) is the main-lobe (or side-lobe) beamforming gain offered at a satellite, and G^rx (or G̅^rx) is the main-lobe (or side-lobe) beamforming gain obtained at a user. We assume that only the associated satellite has G_ S and the other interfering satellites have G̅_ S.
The beamforming gains in the terrestrial network are defined in the similar manner. Namely, the associated terrestrial BS has the beamforming gain G_ T and the other terrestrial BS has the beamforming gain G̅_ T.
§.§ Cell Association
To control the user populations connected to each network, we apply the biasing factors in association process.
When the satellites and the terrestrial BSs use different transmit powers (P_ S and P_ T) and biasing factors (B_ S and B_ T), each satellite and terrestrial BS sends reference signals encompassing the biasing factors, so that the effective transmit power of the reference signal is P_ S B_ S or P_ T B_ T. Averaging out the randomness of the small-scale fading, the typical user is associated with a cell (either in the satellite or the terrestrial network) whose average received power is the strongest. Namely, the associated network o^* and the associated cell index k^* are determined by
(o^*, k^*) = arg max_o ∈{T, S}, k ∈ℕ P_o B_o ‖d̂_k^o - u_1‖^-β_o.
Throughout the paper, we use o^* to indicate the network type that the typical user is associated with and assume k^* = 1 without loss of generality.
The cell association is entangled with visibility. For instance, if there is no visible satellite, then we cannot have {o^* = S}. For this reason, (o^* = S) (or (o^* = T)) implicitly means that there exists at least 1 satellite (or terrestrial BS) visible to the typical user.
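A minimal sketch of this association rule is given below (our own illustration; the input distance arrays are assumed to contain only visible satellites/terrestrial BSs).

```python
import numpy as np

def associate(dist_S, dist_T, P_S, P_T, B_S, B_T, beta_S, beta_T):
    """Return the network type ('S' or 'T') and index maximizing the biased
    average received power P_o * B_o * d^{-beta_o}; (None, None) if nothing is visible."""
    candidates = []
    if dist_S.size:                       # at least 1 visible satellite
        k = int(np.argmin(dist_S))        # nearest one gives the largest power
        candidates.append(('S', k, P_S * B_S * dist_S[k] ** (-beta_S)))
    if dist_T.size:                       # at least 1 visible terrestrial BS
        k = int(np.argmin(dist_T))
        candidates.append(('T', k, P_T * B_T * dist_T[k] ** (-beta_T)))
    if not candidates:
        return None, None
    o_star, k_star, _ = max(candidates, key=lambda c: c[2])
    return o_star, k_star

net, idx = associate(np.array([550e3]), np.array([120.0, 300.0]),
                     P_S=40.0, P_T=10.0, B_S=1.0, B_T=1.0, beta_S=2.0, beta_T=4.0)
```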
§.§ Performance Metric
We separately consider the association cases: 1) association with the satellite network (o^* = S), 2) association with the terrestrial network (o^* = T).
At first, under the assumption that the typical user is associated with the satellite network, the conditional SINR is defined as
SINR_| S=G_ S P_ S X_1^ S ‖d̂_1^ S- u_1‖^-β_ S/(I_ S | S + I_ T | S + σ^2).
In (<ref>), σ^2 is the noise power and I_ S| S = ∑_ d_i^ S∈B̃_ S | S G̅_ S P_ S X_i^ S ‖d̂_i^ S- u_1‖^-β_ S, I_ T| S = ∑_ d_i^ T∈B̃_ T | S G̅_ T P_ T X_i^ T ‖d̂_i^ T- u_1‖^-β_ T,
where B̃_ S | S (B̃_ T | S) is the set of interfering satellites (terrestrial BSs) conditioned on that the typical user is connected to the satellite network. Based on (<ref>), letting W be the operating bandwidth, the conditional achievable rate is given as R_| S = Wlog_2 ( 1 + SINR_| S). In the same manner, when the typical user is associated with the terrestrial network, the conditional SINR is given as
SINR_| T=G_ T P_ T X_1^ T ‖d̂_1^ T- u_1‖^-β_ T/(I_ S | T + I_ T | T + σ^2),
where I_ S| T = ∑_ d_i^ S∈B̃_ S | T G̅_ S P_ S X_i^ S ‖d̂_i^ S- u_1‖^-β_ S, I_ T| T = ∑_ d_i^ T∈B̃_ T| T G̅_ T P_ T X_i^ T ‖d̂_i^ T- u_1‖^-β_ T.
In turn, with the conditional achievable rate R_| T = Wlog_2 ( 1 + SINR_| T), the rate coverage probability is defined as
P^ cov(γ, λ_o, β_o, m_o, f_H_o, P_o, B_o, G_o, G̅_o, R_ S, R_ E) = ℙ[R_STIN > γ]
= ℙ[V_ S] π(S| V_ S)ℙ[R_| S > γ | o^* = S, V_ S] + ℙ[V_ T] π(T | V_ T)ℙ[R_| T > γ | o^* = T, V_ T],
where π(o|V_o) indicates the probability that the typical user is associated with the network type o conditioned on that at least 1 point is visible in the network type o, o ∈{S, T}.
Further, the visibility probabilities are
ℙ[V_ S] = ℙ[⋃_h ∈H_ S{Φ^h_ S(A_(R_ S, h)^ i(r_max)) ≥ 1 }], ℙ[V_ T] = ℙ[⋃_h ∈H_ T{Φ^h_ T(A_(R_ E, h)^ i(r_max)) ≥ 1 }].
[Rationale on random heights]
Our model stands out because of its feature of assigning a random height to each point on the two spheres. This feature has two crucial implications.
At first, our model is capable of reflecting more realistic scenarios. For instance, the existing network models for the satellite network <cit.> assume that every satellite has the same altitude. This model, however, does not properly reflect the reality of satellite networks, as there are variations in the altitudes of individual satellites <cit.>. Further, in the terrestrial network, terrestrial BSs are installed on structures such as rooftops or towers with different heights to ensure a wider coverage range. Our model is able to reflect these realistic situations.
Second, the random height is a key to unify the satellite network and the terrestrial network.
As mentioned above, the satellite network has been modeled by using a spherical point process model <cit.>, while planar point processes were mainly used for modeling terrestrial networks <cit.>. To analyze STINs, it is of importance to combine these two disparate networks into one unified model.
Although it may be tempting to adopt a spherical point process for both the satellite and terrestrial networks, this is infeasible. This is mainly because, when distributing points on the Earth to model spatial locations of terrestrial BSs, no terrestrial BS is mathematically visible from the typical user since no two points on a sphere can be connected via a straight line that does not intersect the sphere.
By assigning random heights on points for terrestrial BSs, we ensure visibility, which makes it feasible to model the terrestrial network with a spherical point process.
By doing this, we are able to capture the full complexity of the STIN into one unified analytical framework.
§ MATHEMATICAL PRELIMINARIES
Before delving into the rate coverage analysis, we present in this section some necessary mathematical ingredients. We start with the probability that at least 1 satellite is visible from the typical user.
The number of visible satellites follows the Poisson distribution with parameter λ_ S 2π R_ S^2 ∫_h ∈H_ Sh_ S + h/R_ S + h F_H_ S(dh). Hence, the probability that at least 1 satellite is visible at the typical user is
ℙ[ V_ S ] = 1- exp(- λ_ S 2π R_ S^2 ∫_h ∈H_ S h_ S + h/R_ S + h F_H_ S(dh) ).
See Appendix <ref>.
The number of visible terrestrial BSs follows the Poisson distribution with parameter λ_ T 2π R_ E^2 ∫_h ∈H_ T h/R_ E + h F_H_ T(dh). Hence, the probability that at least 1 terrestrial BS is visible at the typical user is
ℙ[ V_ T ] = 1- exp(- λ_ T 2π R_ E^2 ∫_h ∈H_ T h/R_ E + h F_H_ T(dh) ).
The proof is omitted since it is straightforward from Lemma <ref>.
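Both visibility probabilities are easy to evaluate numerically. The sketch below (ours; the uniform height densities and all numerical values are assumptions used only for illustration) computes ℙ[V_ S] and ℙ[V_ T] by one-dimensional quadrature.

```python
import numpy as np
from scipy.integrate import quad

R_E = 6371e3
R_S = R_E + 500e3
h_S = R_S - R_E                                    # reference satellite altitude

def p_visible_sat(lam_S, f_h, h_lo, h_hi):
    """P[V_S] = 1 - exp(-lam_S * 2*pi*R_S^2 * E[(h_S + h)/(R_S + h)])."""
    frac, _ = quad(lambda h: (h_S + h) / (R_S + h) * f_h(h), h_lo, h_hi)
    return 1.0 - np.exp(-lam_S * 2.0 * np.pi * R_S**2 * frac)

def p_visible_ter(lam_T, f_h, h_lo, h_hi):
    """P[V_T] = 1 - exp(-lam_T * 2*pi*R_E^2 * E[h/(R_E + h)])."""
    frac, _ = quad(lambda h: h / (R_E + h) * f_h(h), h_lo, h_hi)
    return 1.0 - np.exp(-lam_T * 2.0 * np.pi * R_E**2 * frac)

uniform = lambda a, b: (lambda h: 1.0 / (b - a))   # uniform height density on [a, b]
print(p_visible_sat(1e-12, uniform(0, 100e3), 0, 100e3))
print(p_visible_ter(1e-9,  uniform(10, 50), 10, 50))
```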
Now we derive the distribution of the nearest satellite distance conditioned on that at least 1 satellite is visible as follows.
Assume that the support of the satellite heights is H_ S = [h_ S^min, h_ S^max]. Defining the distance of the nearest satellite to the typical user as R, the PDF of R conditioned on the fact that at least 1 satellite is visible is
f^ S_R(r| V_ S )=
ν(λ_ S, R_ S) · 2πλ_ S r
· e^-λ_ S∫_h_ S^low^h_ S^high A_R_ S(h, r) F_H_ S(dh) ,
for R_ min^ S≤ r ≤ R_ max^ S and 0 otherwise, where
ν(λ_ S, R_ S) = ∫_h_ S^low^h_ S^highR_ S^2/R_ E (h + R_ S) F_H_ S(dh) · e^-λ_ S∫_h_ S^min^h_ S^low2π R_ S^2 (h_ S + h)/R_ S + h F_H_ S(dh) /1- exp(- λ_ S 2π R_ S^2 ∫_h h_ S + h/R_ S + h F_H_ S(dh) ),
and
A_R_ S(h, r) =
0, r ≤ R_ S + h - R_ E,
2π R_ S^2 (h_ S +h- (R_ S + h)^2 - R_ E^2 - r^2 /2R_ E) /R_ S + h, [ R_ S + h - R_ E≤ r; ≤√((R_ S + h)^2 - R_ E^2) ],
2π R_ S^2 (h_ S + h)/R_ S + h, √((R_ S + h)^2 - R_ E^2)≤ r,
and also
h_ S^low = max(h_ S^min, √(r^2 + R_ E^2) - R_ S), h_ S^high = min(h_ S^max, r + R_ E - R_ S),
R^ S_ min=R_ S+h_ S^min-R_ E, R^ S_ max=√((R_ S+h_ S^max)^2-R_ E^2).
See Appendix <ref>.
Assuming h = 0 (i.e., all the satellites are on the given orbit with height h_ S),
the conditional PDF (<ref>) boils down to
f_R(r| V_ S ) = 2πλ_ SR_ S/R_ Eexp( λ_ SπR_ S/R_ E (R_ S^2 - R_ E^2) ) /exp(λ_ S 2π R_ S (R_ S - R_ E))-1 r exp(- λ_ SπR_ S/R_ E r^2 ),
for R^ S_min≤ r ≤ R^ S_max. We observe that (<ref>) is a truncated Rayleigh distribution as shown in <cit.>. In this sense, Lemma <ref> generalizes the prior result in <cit.>.
Similar to the satellite network, we also derive the conditional distribution of the nearest terrestrial BS distance as follows.
Assume that the support of the terrestrial BS heights is of the form H_ T = [h_ T^min, h_ T^max]. Defining the distance of the nearest terrestrial BS to the typical user as R, the PDF of R conditioned on the fact that at least 1 terrestrial BS is visible is
f^ T_R(r|V_ T ) = ν(λ_ T, R_ E) · 2πλ_ T r
· e^-λ_ T∫_h_ T^low^h_ T^high A_R_ E(h, r) F_H_ T(dh) ,
for R^ T_ min≤ r ≤ R^ T_ max and 0 otherwise, where
ν(λ_ T, R_ E) = ∫_h_ T^low^h_ T^highR_ E/R_ E + h F_H_ T(dh) × e^-λ_ T∫_h_ T^min^h_ T^low2π h/R_ E + h R_ E^2 F_H_ T(dh) /1- exp(- λ_ T 2π R_ E^2 ∫_hh/R_ E+h F_H_ T(dh) ),
and
A_R_ E(h, r)=
0, r ≤ h,
2π R_ E^2 (h- (R_ E + h)^2 - R_ E^2 - r^2 /2R_ E) /R_ E + h, h ≤ r ≤√(h (h + 2R_ E)),
2π h R_ E^2/R_ E + h, √(h (h + 2R_ E))≤ r,
and also
h_ T^low = max(h_ T^min, √(r^2 + R_ E^2) - R_ E), h_ T^high = min(h_ T^max, r ),
R^ T_ min=h_ T^min, R^ T_ max=√(h_ T^max(h_ T^max+2R_ E)).
The proof is omitted since it is straightforward from Lemma <ref>.
Next, using the acquired conditional nearest distance probabilities, we obtain the probability that the typical user is associated with the satellite network conditioned on the fact that at least 1 satellite is visible, i.e., π( S | V_ S ).
Under the condition that the typical user observes at least 1 satellite, the probability that the typical user is connected to the satellite network is given by
π( S | V_ S) = ∫_R_min^ S^R_max^ S
f^ S_R(r| V_ S ) · e^-λ_ T∫_h ∈H_ T A_R_ E(h, (P_ T B_ T/P_ S B_ S)^1/β_ T r^β_ S/β_ T) F_H_ T(dh) dr .
See Appendix <ref>.
By exploiting this, we get the following result on the nearest satellite distance conditioned on that the typical user is associated with the satellite network:
Conditioned on the fact that the typical user is associated with the satellite network, the nearest satellite distance PDF is given by
f_R^ S (r | V_ S, {o^* = S}) = f_R^ S ( r | V_ S) · e^-λ_ T∫_h ∈H_ T A_R_ E(h, (P_ T B_ T/P_ S B_ S)^1/β_ T r^β_ S/β_ T) F_H_ T(dh) /ℙ[V_ S] ·π( S | V_ S),
where ℙ[V_ S] is given in (<ref>) and π( S | V_ S) in (<ref>).
The probability that the nearest satellite distance is larger than r, conditioned on V_ S and {o^* = S}, is
ℙ[R > r | V_ S, {o^* = S}] = ℙ[⋂_h∈H_ S{Φ_ S^h (A_(R_ S, h)(r)) = 0}, V_ S, {o^* = S} ]/ℙ[V_ S] ·π( S | V_ S)
= ∫_r^R_max^ S f_R^ S ( v | V_ S) · e^-λ_ T∫_h ∈H_ T A_R_ E(h, (P_ T B_ T/P_ S B_ S)^1/β_ T v^β_ S/β_ T) F_H_ T(dh) dv/ℙ[V_ S] ·π( S | V_ S) .
The corresponding PDF is obtained by differentiating (<ref>) with regard to r.
For conciseness, we present both the association probability of the terrestrial network and the conditional PDF of the nearest terrestrial BS's distance in the following corollary.
Under the condition that the typical user observes at least 1 terrestrial BS, the probability that the typical user is connected to the terrestrial network is
π( T| V_ T) = ∫_R_min^ T^R_max^ T
f^ T_R(r|V_ T ) · e^-λ_ S∫_h ∈H_ S A_R_ S(h, (P_ S B_ S/P_ T B_ T)^1/β_ S r^β_ T/β_ S) F_H_ S(dh) dr.
Conditioned on that the typical user is associated with the terrestrial network, the nearest terrestrial BS's distance PDF is given by
f_R^ T (r | V_ T, {o^* = T}) = f_R^ T ( r | V_ T) · e^-λ_ S∫_h ∈H_ S A_R_ S(h, (P_ S B_ S/P_ T B_ T)^1/β_ S r^β_ T/β_ S) F_H_ S(dh) /ℙ[V_ T] ·π( T | V_ T),
where ℙ[V_ T] is given in (<ref>) and π( T | V_ T) in (<ref>).
The proof is straightforward from the satellite network case and is omitted due to the space limitation.
Finally, we obtain the Laplace transform of the aggregated interference power under the condition that the distances between the typical user and the interference sources are larger than r. The results are gathered in the following lemma.
Under the condition that the distance between the typical user and the interfering satellites is larger than r, the conditional Laplace transform of the aggregated satellites' interference power is
L_I_ S(s| r) = exp(- 2πλ_ SR_ S^2/R_ E∫_h ∈H_ S f_H_ S(h) /(h + R_ S)·(η(β_ S, m_ S, G̅_ S P_ S s, √((R_ S + h)^2 - R_ E^2)) - . .
. . η(β_ S, m_ S, G̅_ S P_ S s,min(max(r, R_ S - R_ E + h), √((R_ S + h)^2 - R_ E^2)) ) ) dh ),
where
η(β, m, s, x) = x^2/2[1 - _2F_1(-2/β, m; β - 2/β; -s x^-β/m) ].
Note that _2F_1(·, ·; ·; ·) is the Gauss-hypergeometric function defined as
_2F_1(a,b;c;z ) = Γ(c)/Γ(b)Γ(c-b)∫_0^1t^b-1(1-t)^c-b-1/(1-tz)^a d t.
Under the condition that the distance between the typical user and the interfering terrestrial BS is larger than r, the conditional Laplace transform of the terrestrial BSs' interference power is
L_I_ T(s| r) = exp(- 2πλ_ T R_ E∫_h ∈H_ T f_H_ T(h) /(h + R_ E)·(η(β_ T, m_ T, G̅_ T P_ T s, √(h(h + 2R_ E))) - . .
. . η(β_ T, m_ T, G̅_ T P_ T s,min(max(r, h), √(h(h + 2R_ E))) ) ) dh ),
See Appendix <ref>.
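Since η(·) only involves the Gauss hypergeometric function, the conditional Laplace transforms can be evaluated with standard numerical libraries. A minimal sketch (ours; the height density f_h and all numerical parameters are assumptions, and β > 2 is assumed so that the hypergeometric parameters stay regular) is:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

def eta(beta, m, s, x):
    """eta(beta, m, s, x) = x^2/2 * [1 - 2F1(-2/beta, m; (beta-2)/beta; -s*x^-beta/m)]."""
    return 0.5 * x**2 * (1.0 - hyp2f1(-2.0 / beta, m, (beta - 2.0) / beta,
                                      -s * x**(-beta) / m))

def laplace_I_S(s, r, lam_S, beta_S, m_S, Gbar_S, P_S, R_E, R_S, f_h, h_lo, h_hi):
    """Conditional Laplace transform of the satellite interference, exclusion radius r."""
    def integrand(h):
        x_max = np.sqrt((R_S + h)**2 - R_E**2)
        x_min = min(max(r, R_S - R_E + h), x_max)
        return f_h(h) / (h + R_S) * (eta(beta_S, m_S, Gbar_S * P_S * s, x_max)
                                     - eta(beta_S, m_S, Gbar_S * P_S * s, x_min))
    val, _ = quad(integrand, h_lo, h_hi)
    return np.exp(-2.0 * np.pi * lam_S * R_S**2 / R_E * val)
```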
Now we are ready to obtain the rate coverage probability of the STIN.
§ RATE COVERAGE ANALYSIS
Leveraging the presented mathematical preliminaries, we derive the rate coverage probability of the considered STIN in the following theorem.
In the considered STIN, the rate coverage probability is represented as
ℙ[R_STIN > γ] = P^ cov(γ, λ_o, β_o, m_o, f_H_o, P_o, B_o, G_o, G̅_o, R_ S, R_ E) = P^ cov_ S + P^ cov_ T,
where P_ S^ cov is
P^ cov_ S = P_V_ S·π( S | V_ S) ·∫_R_min^ S^R_max^ S f_R^ S (r | V_ S, {o^* = S}) ·
∑_k = 0^m_ S - 1m_ S^k r^k β_ Sγ̃^k/k!(-1)^k
. ∂^k L_I_ S( s/G_ S P_ S | r ) ·L_I_ T( s/G_ S P_ S | (P_ T B_ T/P_ S B_ S)^1/β_ T r^β_ S/β_ T) ·exp(-s σ̃_| S^2 ) /∂ s^k|_s = m_ Sγ̃r^β_ S dr,
where P_V_ S = ℙ[V_ S] is given in (<ref>), π( S | V_ S) in (<ref>), f_R^ S(r | V_ S, {o^* = S}) in (<ref>), and L_I_ S (s|r) and L_I_ T (s|r) are given in (<ref>) and (<ref>), respectively. Meanwhile, P_ T^ cov is
P^ cov_ T = P_V_ T·π( T | V_ T) ·∫_R_min^ T^R_max^ T f_R^ T(r | V_ T, {o^* = T})·
∑_k = 0^m_ T - 1m_ T^k r^k β_ Tγ̃^k/k!(-1)^k . ∂^k L_I_ S ( s/G_ T P_ T | (P_ S B_ S/P_ T B_ T)^1/β_ S r^β_ T/β_ S) ·L_I_ T ( s/G_ T P_ T | r )·exp(-s σ̃_| T^2 ) /∂ s^k|_s = m_ Tγ̃r^β_ T dr,
where P_V_ T = ℙ[V_ T] is given in (<ref>), π( T | V_ T) in (<ref>), f_R^ T( r| V_ T, {o^* = T}) in (<ref>), and L_I_ S (s|r) and L_I_ T (s|r) are given in (<ref>) and (<ref>), respectively.
See Appendix <ref>.
As in Theorem 2 of <cit.>, we can obtain tractable upper/lower bounds on the rate coverage probability (<ref>) using Alzer's inequality <cit.>.
For instance, bounds on (<ref>) are given by
P_ S, B^ cov(κ_ S) = P_V_ S·π(S| V_ S) ·∫_R_min^ S^R_max^ S f_R^ S (r | V_ S, {o^* = S}) ·∑_ℓ = 1^m_ Sm_ Sℓ(-1)^ℓ+1 L_I_ S( . ℓ m_ Sκ_ S r^β_ Sγ̃/G_ S P_ S| r )
L_I_ T( . ℓ m_ Sκ_ S r^β_ Sγ̃/G_ S P_ S| (P_ T B_ T/P_ S B_ S)^1/β_ T r^β_ S/β_ T) exp(- ℓ m_ Sκ_ S r^β_ Sγ̃/G_ S P_ Sσ^2_| S)dr ,
where κ_ S = (m_ S!)^1/m_ S produces a lower bound and κ_ S = 1 produces an upper bound.
We do not include more details in this paper due to the space limitation, yet one can follow the process of Theorem 2 in <cit.> to reach the upper and lower bounds.
We can further simplify the rate coverage expressions in Theorem <ref> by assuming Rayleigh fading for both of the satellite links and the terrestrial links, i.e., m_ S = 1 and m_ T = 1.
We note, however, that the Rayleigh fading assumption does not perfectly match reality, especially for satellite communications. This is because the satellite communication links are likely to be LoS due to the scarcity of scatterers in space. Additionally, the terrestrial communication links can also be LoS depending on the propagation environment and operating bandwidth <cit.>. Nonetheless, the Rayleigh fading assumption has been popularly adopted in the stochastic geometry literature even when studying LoS communications <cit.>, rationalized by the fact that it offers significant tractability while not altering the major trends of the coverage performance, as demonstrated in <cit.>. In the same spirit, we use the Rayleigh fading assumption in the remaining part of the paper by taking m_ S = 1 and m_ T = 1.
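Under m_ S = m_ T = 1, only the k = 0 term of Theorem <ref> survives, so the coverage probability conditioned on the serving satellite distance r reduces to a product of the two conditional Laplace transforms and the noise term. A minimal sketch (ours; L_I_S and L_I_T stand for implementations of Lemma <ref>, e.g., as sketched earlier) is:

```python
import numpy as np

def cond_coverage_sat_rayleigh(r, gamma_t, G_S, P_S, P_T, B_S, B_T,
                               beta_S, beta_T, sigma2, L_I_S, L_I_T):
    """P[SINR_|S > gamma_t | serving satellite at distance r] for m_S = m_T = 1.
    L_I_S(s, r_excl) and L_I_T(s, r_excl) are the conditional Laplace transforms."""
    s = gamma_t * r**beta_S / (G_S * P_S)
    # Exclusion radius for terrestrial interferers implied by the association rule.
    r_T = (P_T * B_T / (P_S * B_S))**(1.0 / beta_T) * r**(beta_S / beta_T)
    return L_I_S(s, r) * L_I_T(s, r_T) * np.exp(-s * sigma2)
```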
Now we verify the obtained rate coverage probability by comparing with the simulation results. This is done in Fig. <ref>, which includes the used system parameters in its caption. As shown in Fig. <ref>, Theorem <ref> and the rate coverage probability obtained by numerical simulations are perfectly matched. For a more thorough validation, we also check the individual rate coverage of the terrestrial network and the satellite network separately, and observe that the simulation results and the analysis also match.
§ STIN BENEFITS STUDY: COVERAGE EXTENSION AND DATA OFFLOADING
Leveraging the derived analytical results, we explore the benefits that STINs provide in two different scenarios: coverage extension in remote rural areas and data offloading in dense urban areas.
§.§ Coverage Extension in Remote Rural Areas
The deployment density of the terrestrial network is often limited in remote rural areas such as a desert or a deep mountain valley, resulting in users being left outside of coverage regions. In contrast, the satellite network provides a relatively consistent density as each satellite continues to move along a given orbit around the Earth.
This is where the STIN is particularly useful, as it can provide coverage to the users experiencing outages in the terrestrial network.
Our analysis quantifies the coverage extension benefit by comparing the rate coverage performance of the terrestrial-only scenario with that of the STIN. We illustrate this comparison in Fig. <ref>, whose caption includes the detailed system parameters.
Fig. <ref> demonstrates significant improvements in the rate coverage by using the STIN.
When λ_ T is low (N̅_ T = 5), the median rate (i.e., the rate that 50% of users can achieve) is 0 bps when only the terrestrial network is deployed, while the STIN with N̅_ S = {4, 16, 64} achieves the median rate {5.8, 4.4, 2.9}× 10^8 bps. This implies that a user experiencing an outage can achieve 0.59 Gbps with 50% chance by integrating the terrestrial network and the satellite network with N̅_ S = 4 (4 visible satellites on average).
It is important to mention that the rate coverage of the STIN decreases as the satellite density increases. This is because when the satellite density increases, the amount of interference also increases. This observation is aligned with the findings of <cit.>. For this reason, to attain the optimum coverage extension benefits from the STIN, it is essential to use a proper number of satellites.
§.§ Data Offloading in Dense Urban Areas
In dense urban areas where the data traffic load is high, the terrestrial network can suffer from scarcity of the available wireless resources. In the STIN situation, the satellite network is capable of relieving the traffic by data offloading. To capture this, adopting the approach used in HetNets <cit.>, we first redefine the rate by incorporating the load as follows:
R_|o = W_ tot/L_olog(1 + SINR_|o),
where L_o is the load associated with network type o, o ∈{T, S}.
Inspired by the mean load characterization in <cit.>, we approximate the load as follows:
L_ S = 1 + λ_ Uπ(S| V_ S)/λ_ S, L_ T = 1 + λ_ Uπ(T| V_ T)/λ_ T.
We clarify that (<ref>) is an approximation, not an exact form. Characterizing the load of the STIN in an exact form is very challenging since it requires obtaining the Poisson-Voronoi cell area distribution <cit.> in 3D finite spherical models. We leave this as interesting future work.
Although (<ref>) is an approximation, it properly captures the offloading effects. For instance, assume B_ S increases. Then, as given by (<ref>), the typical user is more likely to be associated with the satellite network (π(S | V_ S) increases), by which the data traffic is pushed to the satellite network. Accordingly, L_ S increases while L_ T decreases, so that the available resources in the terrestrial network increase. Now, assume λ_ S increases. Then the number of users served per satellite decreases, which relieves L_ S in (<ref>).
Nonetheless, this does not necessarily increase the rate coverage since the SINR of the satellite network sharply degrades as λ_ S increases due to the increased amount of interference as observed in Fig. <ref>.
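A small numerical illustration of this load model (ours; all densities and SINR values are placeholders) is:

```python
import numpy as np

def mean_loads(lam_U, lam_S, lam_T, assoc_S, assoc_T):
    """Approximate mean loads L_o = 1 + lam_U * pi(o|V_o) / lam_o."""
    return 1.0 + lam_U * assoc_S / lam_S, 1.0 + lam_U * assoc_T / lam_T

W_tot = 100e6                                       # total bandwidth in Hz
L_S, L_T = mean_loads(lam_U=1e-8, lam_S=1e-12, lam_T=1e-9,
                      assoc_S=0.3, assoc_T=0.7)
rate_S = W_tot / L_S * np.log2(1.0 + 3.0)           # example SINR_|S = 3
rate_T = W_tot / L_T * np.log2(1.0 + 10.0)          # example SINR_|T = 10
```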
We explore data offloading in the STIN by describing the 10th-percentile rate (the rate that 90% of users can achieve) in Fig. <ref> depending on the bias factor B_ S and the satellite density λ_ S (or N̅_ S).
In Fig. <ref>-(a), we see the optimum bias factor B_ S^⋆ for different satellite density scenarios. For example, if N̅_ S = 10, the optimum bias is B_ S^⋆ = -6.6 dB, while if N̅_ S = 100, the optimum bias is B_ S^⋆ = 0 dB.
Generally, the optimum bias is proportional to the satellite density.
This is reasonable because if sufficiently many satellites are deployed, it is advantageous to push the data traffic to the satellite network.
In Fig. <ref>-(b), we draw the 10th-percentile rate per satellite density. Provided that proper bias is used, the rate coverage performance is proportional to the satellite density, which is in sharp contrast to Fig. <ref> and <cit.>.
This is because, as the satellite density increases, the available resources of the satellite network W_ tot/L_ S increase as L_ S diminishes. Meanwhile, SINR_| S decreases since the amount of interference increases. In turn, by incorporating the load L_ S, we have a trade-off in the rate coverage (<ref>) with regard to λ_ S. In the regime drawn in Fig. <ref>, the load has a dominant effect, so relieving the load even by sacrificing the SINR helps to improve the rate coverage performance.
To further investigate the trade-off of the rate coverage performance, we illustrate the 10th-percentile rate per user density λ_ U in Fig. <ref>. If λ_U is small, then L_o→ 1 so that the offloading effect becomes negligible. In this regime, the SINR is dominant to determine the rate, thereby the rate coverage performance decreases as the satellite density increases.
This is shown in Fig. <ref>, where this regime is referred to the SINR-limited regime.
In contrast, if λ_U is high, the offloading effect becomes significant, so the load L_o is dominant for the rate. Accordingly, the rate coverage performance is proportional to the satellite density. We call this regime the load-limited regime, as presented in Fig. <ref>. We note that even in the SINR-limited regime, the rate coverage does not monotonically increase as the satellite density decreases. Specifically, if λ_ S→ 0, no satellite is visible to the typical user, so that the rate coverage decreases as depicted in Fig. <ref>. For this reason, there exists a threshold λ̌_ S such that ℙ[R_STIN > γ] ∝ 1/λ_ S when λ_ S > λ̌_ S. This is also the case of the load-limited regime, i.e., ℙ[R_STIN > γ] ∝λ_ S when λ_ S < λ̂_ S.
Due to the contrasting behavior of the SINR-limited regime and the load-limited regime, it is essential to identify the operating regime of the STIN in order to achieve optimum performance gains. For instance, in the SINR-limited regime, integrating more satellites into the STIN only degrades the rate coverage performance.
In contrast, in the load-limited regime, it is beneficial to include more satellites into the STIN for more active data offloading.
§ CONCLUSIONS
In this paper, we have developed a unified modeling for the two components of a STIN. Our key idea for capturing both of the satellite network and the terrestrial network into one framework is distributing PPPs on spheres and assigning random heights. Based on this model, we have derived the rate coverage probability as a function of the key system parameters, chiefly the densities, path-loss exponents, height geometries (maximum/minimum altitudes and height distributions), and bias factors of the satellites and the terrestrial BSs.
Leveraging the analysis, we have explored the two benefits of the STIN: coverage extension in remote rural areas and data offloading in dense urban areas. Through this, we have extracted valuable system design insights: 1) In remote rural areas, the STIN significantly increases the rate coverage performance; yet to obtain this, it is necessary to carefully choose the satellite density. 2) In dense urban areas, the STIN helps to offload the data traffic from the terrestrial BSs, provided that the satellite density and the bias are properly selected. 3) The STIN has two contrasting operating regimes; the SINR-limited regime where increasing the satellite density degrades the rate coverage, and the load-limited regime where increasing the satellite density improves the rate coverage. For efficiently operating a STIN, it is needed to carefully identify the appropriate operation regime.
§ PROOF OF LEMMA <REF>
We first recall that the satellite at d̂_i^ S (with height h) is visible at the typical user if and only if d_i^ S∈A^ i_(R_ S, h) (r_max) (or equivalently d̂_i^ S∈A^ o_(R_ S, h) (r_max)). For simplicity, we write A(h) = A^ i_(R_ S, h) (r_max). Let N = ∑_i 1{d_i^ S∈A(h_i)}, where h_i is the height of the satellite at d_i^ S, so that N represents the number of visible satellites. For 0≤ z ≤ 1, the probability generating function (PGF) of N is
𝔼[z^N] = 𝔼[e^∑_i ln z 1{d_i^ S∈A (h_i)}] =^(a)exp(-∫_h ∈H_ S∫_x ∈A(h) (1-z) λ_ S dx F_H_ S(dh) )
= exp(- λ_ S (1-z) ∫_h ∈H_ S |A(h)| F_H_ S(dh))
= exp(- λ_ S (1-z) 2π R_ S^2 ∫_h ∈H_ Sh_ S + h/R_ S + h F_H_ S(dh)),
where (a) comes from the Laplace functional of an independently marked PPP and F_H_ S is the CDF of satellite height H_ S.
Hence, N is Poisson with parameter λ_ S 2π R_ S^2 ∫_h ∈H_ Sh_ S + h/R_ S + h F_H_ S(dh), and we have
ℙ[N = 0] = exp(- λ_ S 2π R_ S^2 ∫_h ∈H_ Sh_ S + h/R_ S + h F_H_ S(dh)).
This completes the proof.
§ PROOF OF LEMMA <REF>
Denoting by R the nearest satellite distance to the typical user, the event R > r is equivalent to the event that there is no satellite with any height h in A^ i_(R_ S, h)(r).
Denoting Φ_ S^h as a set of satellites with height h, we represent the CCDF of R conditioned on the fact that at least 1 satellite is visible to the typical user as
F_R |{V_ S}^c(r) = ℙ [R > r | V_ S] =^(a)( [ ℙ[ ⋂_h ∈H_ S{Φ_ S^h(A_(R_ S, h)^ i(r)) = 0}] ·; (1- ℙ[ ⋂_h ∈H_ SΦ_ S^h(A_(R_ S, h)^ i(r_max) \A_(R_ S, h)^ i(r)) =0 ]) ]) /ℙ[ V_ S]
where (a) follows from the fact that the PPP in A_(R_ S, h)^ i(r) and A_(R_ S, h)^ i(r_max) \A_(R_ S, h)^ i(r) are independent since their sets do not overlap.
Now we compute the first term in the numerator of (<ref>) as
ℙ[⋂_h ∈H_ S{Φ_ S^h(A_(R_ S, h)^ i(r)) = 0}] = ∏_h ∈H_ Sℙ[Φ_ S^h(A_(R_ S, h)^ i(r)) = 0]
= ∏_h ∈H_ Sexp(-λ_ S f_H_ S(h) Δ h A_R_ S(h, r) ) = exp(-λ_ S∫_h ∈H_ S A_R_ S(h, r) F_H_ S(dh) ),
where A_R_ S(h, r) is an area function of A_(R_ S, h)^ i(r) made feasible for the whole region of r, defined as (<ref>).
The second term in the numerator of (<ref>) is
ℙ[ ⋂_h ∈H_ SΦ_ S^h(A_(R_ S, h)^ i(r_max) \A_(R_ S, h)^ i(r)) = 0 ] =exp(-λ_ S∫_h ∈H_ S (A_R_ S(h, r_max) - A_R_ S(h, r)) F_H_ S (dh) ),
where A_R_ S(h, r) is (<ref>). Now we put (<ref>) and (<ref>) together, which leads to
ℙ [R > r | V_ S] = {[ exp(-λ_ S∫_h A_R_ S(h, r) F_H_ S(dh) ) ·; [1- exp(-λ_ S∫_h (A_R_ S(h, r_max) - A_R_ S(h, r)) F_H_ S (dh) ) ] ]}/1- exp(- λ_ S 2π R_ S^2 ∫_h h_ S + h/R_ S + h F_H_ S(dh) ).
We finally derive the conditional PDF. Since the conditional PDF is obtained by taking derivative to the conditional CDF regarding r, we have
f^ S_R| {V_ S}(r) = ∂ F_R |{V_ S}(r)/∂ r = λ_ S∫_hA'_R_ S(h,r) F_H_ S(dh) · e^-λ_ S∫_h A_R_ S(h, r) F_H_ S(dh) /1- exp(- λ_ S 2π R_ S^2 ∫_h h_ S + h/R_ S + h F_H_ S(dh) ),
where A'_R_ S(h,r) is the derivative of A_R_ S(h,r) with regard to r obtained as
A'_R_ S(h,r) = ∂A_R_ S(h,r)/∂ r =2π r R_ S^2/R_ E (h + R_ S),
if R_ S + h - R_ E≤ r ≤√((R_ S + h)^2 - R_ E^2) and A'_R_ S(h, r) = 0 otherwise.
Plugging (<ref>) into (<ref>), we reach
λ_ S∫_h_ S^low^h_ S^high2π r R_ S^2/R_ E (h + R_ S) F_H_ S(dh) /1- exp(- λ_ S 2π R_ S^2 ∫_h h_ S + h/R_ S + h F_H_ S(dh) ) e^-λ_ S∫_h_ S^low^h_ S^high A_R_ S(h, r) F_H_ S(dh) e^-λ_ S∫_h_ S^min^h_ S^low2π R_ S^2 (h_ S + h)/R_ S + h F_H_ S(dh)
where h_ S^low and h_ S^high are defined in (<ref>).
Noting that the distance r can have a feasible range of R_min^ S≤ r ≤ R_max^ S as defined in (<ref>),
we complete the proof.
§ PROOF OF LEMMA <REF>
The typical user is connected to the satellite network when a satellite provides the maximum reference signal power, i.e., P_ S B_ S ‖d̂_1^ S - u_1‖^-β_ S > P_ T B_ T ‖d̂_i^ T - u_1‖^-β_ T, d̂_i^ T∈Φ̂_ T.
Calculating the above probability, we get
ℙ[P_ S B_ S ‖d̂_1^ S - u_1‖^-β_ S > P_ T B_ T ‖d̂_i^ T - u_1‖^-β_ T, d̂_i^ T∈Φ̂_ T]
= 𝔼_R[ ℙ[ (P_ T B_ T/P_ S B_ S)^1/β_ T R^β_ S/β_ T < ‖d̂_i^ T - u_1‖, d̂_i^ T∈Φ̂_ T| R]].
The inner probability of (<ref>) is equivalent to the probability that there is no terrestrial BS whose distance to the typical user is smaller than (P_ T B_ T/P_ S B_ S)^1/β_ T R^β_ S/β_ T.
We compute this as follows. Denoting the set of terrestrial BSs whose heights are in [h, h+Δ h] as Φ̂_ T^h⊂Φ̂_ T and Φ^h_ T⊂Φ_ T, Φ̂_ T^h is a homogeneous PPP with density λ_ T^h = λ_ T (F_H_ T(h + Δ h) - F_H_ T(h)) = λ_ T f_H_ T(h) Δ h, where Δ h → 0.
Further, Φ̂_ T^h_1 and Φ̂_ T^h_2 are statistically independent if h_1 ≠ h_2.
Hence, for given R, the inner probability of (<ref>) is
ℙ[⋂_h ∈H_ T{Φ_ T^h (A_(R_ E, h)^ i((P_ T B_ T/P_ S B_ S)^1/β_ T
R^β_ S/β_ T)) = 0 }]
= ∏_h ∈H_ Texp(-λ_ T f_H_ T(h) Δ h | A_(R_ E, h)^ i((P_ T B_ T/P_ S B_ S)^1/β_ T
R^β_ S/β_ T) | )
= exp(-λ_ T∑_ h ∈H_ T f_H_ T(h) Δ h A_R_ E(h, r) ) =^(a)exp(-λ_ T∫_h A_R_ E(h, (P_ T B_ T/P_ S B_ S)^1/β_ T R^β_ S/β_ T) F_H_ T(dh) ),
where (a) comes from Riemann integration with Δ h → 0 and
A_R_ E(h, r) is an extended area function of A_(R_ E, h)^ i(r) defined in (<ref>).
The final step is marginalizing (<ref>) with regard to R. Since R is distributed as (<ref>), we reach
π( S | V_ S) = ∫_rν(λ_ S, R_ S) 2πλ_ S r
e^-λ_ S∫_h_low^h_high A_R_ S(h, r) F_H_ S(dh) · e^-λ_ T∫_h A_R_ E(h, (P_ T B_ T/P_ S B_ S)^1/β_ T r^β_ S/β_ T) F_H_ T(dh) dr.
This completes the proof.
§ PROOF OF LEMMA <REF>
Conditioned on the distance to the interference sources being larger than r, the aggregated satellite interference is I_ S = ∑_‖d̂_i^ S - u_1‖ > r G̅_ S P_ S X_i^ S ‖d̂_i^ S- u_1‖^-β_ S. We first note that the aggregated satellite interference can be further separated into I_ S = ∑_h ∈H_ S I_ S^h,
where I_ S^h represents the interference coming from the satellites with height h and A_(R_ S, h)^ i, c(r) = A_(R_ S, h)^ i(r_max) \A_(R_ S, h)^ i(r).
We note that I_ S^h_1 and I_ S^h_2 are independent for h_1 ≠ h_2 due to independent thinning of a PPP.
Accordingly, the conditional Laplace transform is written as
L_I_ S(s|r) = 𝔼[ e^-s I_ S | ‖d̂_1^ S - u_1‖ = r]
= ∏_h ∈H_ S𝔼[ e^-s I_ S^h | ‖d̂_1^ S - u_1‖ = r ].
Then we have
𝔼[ . e^-s I_ S^h| d̂_1^ S - u_1 = r ] =^(a)exp(-λ_ Sf_H_ S(h) Δ h ∫_v ∈A_(R_ S, h)^ i, c(r) 1 - 𝔼[ e^-s G̅_ S P_ S X_i^ S v^-β_ S] dv )
=^(b)exp(-λ_ Sf_H_ S(h) Δ h ∫_v ∈A_(R_ S, h)^ i, c(r) 1 - 1/(1 + s G̅_ SP_ S v^-β_ S/m_ S)^m_ S dv )
=^(c)exp(-λ_ S f_H_ S(h) 2π R_ S^2 Δ h/R_ E(h + R_ S)·∫_min(max(r, R_ S - R_ E + h),√((R_ S + h)^2 - R_ E^2)) ^√((R_ S + h)^2 - R_ E^2). . ( 1 - 1/(1 + s G̅_ SP_ S v^-β_ S/m_ S)^m_ S) v dv ),
where (a) comes from the probability generating functional (PGFL) of a PPP, (b) follows that X_i^ S is distributed as the Gamma distribution, and (c) is because
∂ |A_(R_ S, h)^ i(r)|/∂ r =2π r R_ S^2/R_ E (h + R_ S), R_ S - R_ E + h ≤ r ≤√((R_ S + h)^2 - R_ E^2).
Combining all h ∈H_ S, we reach
L_I_ S(s|r) =
exp( - 2πλ_ SR_ S^2/R_ E∫_h ∈H_ S f_H_ S(h) /(h + R_ S)·∫_min(max(r, R_ S - R_ E + h), √((R_ S + h)^2 - R_ E^2))^√((R_ S + h)^2 - R_ E^2). . ( 1 - 1/(1 + s G̅_ SP_ S v^-β_ S/m_ S)^m_ S) v dv dh )
= exp(- 2πλ_ SR_ S^2/R_ E∫_h ∈H_ S f_H_ S(h) /(h + R_ S)·(η(β_ S, m_ S, G̅_ S P_ S s, √((R_ S + h)^2 - R_ E^2)) - . .
. . η(β_ S, m_ S, G̅_ S P_ S s,min(max(r, R_ S - R_ E + h), √((R_ S + h)^2 - R_ E^2)) ) ) dh ),
where η(β, m, s, x) is (<ref>), obtained
by the integration ∫ 1 - 1/(1 + (s v^-β)/m)^m v dv = η(β, m, s, v).
The conditional Laplace transform of the terrestrial interference is derived in the equivalent manner. This completes the proof.
§ PROOF OF THEOREM <REF>
We recall that the rate coverage probability is expressed as
P^ cov(γ, λ_o, β_o, m_o, f_H_o, R_ S, R_ E)
= ℙ[V_ S] π(S| V_ S)ℙ[R_| S > γ | {o^* = S}, V_ S] + ℙ[V_ T] π(T | V_ T)ℙ[R_| T > γ | {o^* = T}, V_ T].
Conditioned on that the typical user is associated with the satellite network, the SINR is
SINR_| S= X_1^ S ‖d̂_1^ S- u_1‖^-β_ S/(Ĩ_ S | S + Ĩ_ T | S + σ̃_| S^2),
where Ĩ_ S| S = ∑_ d_i^ S∈B̃_ S | S G̃_ S/ S X_i^ S ‖d̂_i^ S- u_1‖^-β_ S, Ĩ_ T| S = ∑_ d_i^ T∈B̃_ T | S G̃_ T / S P̃_ T/ S X_i^ T ‖d̂_i^ T- u_1‖^-β_ T, and σ̃_| S^2 = σ^2 /(G_ S P_ S).
We also have G̃_ S/ S = G̅_ S / G_ S, G̃_ T / S = G̅_ T / G_ S, and P̃_ T/ S = P_ T / P_ S. Since the conditional rate coverage probability is
ℙ[ R_| S > γ | {o^* = S}, V_ S] = ℙ[ SINR_| S > γ̃| {o^* = S}, V_ S],
where γ̃= 2^γ/W-1, the conditional probability (<ref>) is derived as
ℙ[ SINR_| S > γ̃| {o^* = S}, V_ S] = 𝔼[ ℙ[ SINR_| S > γ̃| {o^* = S}, V_ S, r] ]
=^(a)𝔼[ ∑_k = 0^m_ S - 1m_ S^k r^k β_ Sγ̃^k/k!(-1)^k . ∂^k L_I_ tot- S(s) /∂ s^k|_s = m_ Sγ̃r^β_ S],
where (a) follows from the fact that √(X_1^ S) follows the Nakagami-m distribution with parameter m_ S and L_I_ tot-S(s) = L_I_ S( s/G_ S P_ S | r ) ·L_I_ T( s/G_ S P_ S | (P_ T B_ T/P_ S B_ S)^1/β_ T r^β_ S/β_ T) ·exp(-s σ̃_| S^2 ). We note that the expectation in (<ref>) is associated with the conditional nearest satellite distance whose PDF is obtained in Lemma <ref>. Plugging (<ref>) into (<ref>), we get (<ref>).
Next, we obtain the rate coverage probability under the condition that the typical user is associated with the terrestrial network. Similar to the satellite network association case, we have
SINR_| T= X_1^ T ‖d̂_1^ T- u_1‖^-β_ T/(Ĩ_ S | T + Ĩ_ T | T + σ̃_| T^2),
where Ĩ_ S| T = ∑_ d_i^ S∈B̃_ S | T G̃_ S/ T P̃_ S/ T X_i^ S ‖d̂_i^ S- u_1‖^-β_ S,
Ĩ_ T| T = ∑_ d_i^ T∈B̃_ T | T G̃_ T / T X_i^ T ‖d̂_i^ T- u_1‖^-β_ T, and
σ̃_| T^2 = σ^2 /(G_ T P_ T), with
G̃_ S/ T = G̅_ S / G_ T, G̃_ T / T = G̅_ T / G_ T, and P̃_ S/ T = P_ S / P_ T.
The rate coverage probability for the terrestrial network association case is ℙ[ R_| T > γ | {o^* = T}, V_ T] = 𝔼[ ℙ[ SINR_| T > γ̃ | {o^* = T}, V_ T, r ] ],
which gives
𝔼[ ℙ[ SINR_| T > γ̃ | {o^* = T}, V_ T, r ] ] = 𝔼[ ∑_k = 0^m_ T - 1m_ T^k r^k β_ Tγ̃^k/k!(-1)^k ·. ∂^k L_I_ tot- T(s) /∂ s^k|_s = m_ Tγ̃r^β_ T],
where L_I_ tot-T(s) = L_I_ S ( s/G_ T P_ T | (P_ S B_ S/P_ T B_ T)^1/β_ S r^β_ T/β_ S) ·L_I_ T ( s/G_ T P_ T | r )·exp(-s σ̃_| T^2 ). The expectation in (<ref>) is associated with the conditional nearest terrestrial BS distance, whose PDF is obtained in Corollary <ref>. Eventually, we reach (<ref>). This completes the proof.
|
http://arxiv.org/abs/2307.02790v1
|
20230706060535
|
Sensor Allocation and Online-Learning-based Path Planning for Maritime Situational Awareness Enhancement: A Multi-Agent Approach
|
[
"Bach Long Nguyen",
"Anh-Dzung Doan",
"Tat-Jun Chin",
"Christophe Guettier",
"Estelle Parra",
"Ian Reid",
"Markus Wagner"
] |
cs.MA
|
[
"cs.MA"
] |
Sensor Allocation and Online-Learning-based Path Planning for Maritime Situational Awareness Enhancement: A Multi-Agent Approach
Bach Long Nguyen, Anh-Dzung Doan, Tat-Jun Chin, Christophe Guettier, Estelle Parra, Ian Reid, and Markus Wagner
Bach Long Nguyen and Markus Wagner are with the Department of Data Science and AI, Monash Univeristy, Clayton VIC 3800, Australia (email: [email protected], [email protected]).
Anh-Dzung Doan, Tat-Jun Chin and Ian Reid are with Australian Institute for Machine Learning, The University of Adelaide, Adelaide SA 5000, Australia (email: [email protected], [email protected], [email protected]).
Christophe Guettier and Estelle Parra are with Safran Electronics and Defense, Massy 91300, France (email: [email protected], [email protected]).
=======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Countries with access to large bodies of water often aim to protect their maritime transport by employing maritime surveillance systems. However, the number of available sensors (e.g., cameras) is typically small compared to the number of targets to be monitored, and their Field of View (FOV) and range are often limited. This makes improving the situational awareness of maritime transports challenging. To this end, we propose a method that not only distributes multiple sensors but also plans paths for them to observe multiple targets, while minimizing the time needed to achieve situational awareness. In particular, we provide a formulation of this sensor allocation and path planning problem which considers the partial awareness of the targets' state, as well as the unawareness of the targets' trajectories. To solve the problem we present two algorithms: 1) a greedy algorithm for assigning sensors to targets, and 2) a distributed multi-agent path planning algorithm based on regret-matching learning. Because quick convergence is a requirement for algorithms developed for high-mobility environments, we employ a forgetting factor to quickly converge to correlated equilibrium solutions. Experimental results show that our combined approach achieves situational awareness more quickly than related work.
Correlated equilibrium, FOV, maritime situational awareness, multi-agent, multiple targets, multiple sensors, regret-matching learning.
§ INTRODUCTION
In Australia, 99% of all exports rely on maritime transport; in recognition of this, the Department of Home Affairs has recently commenced the Future Maritime Surveillance Capability project <cit.>. One of that project's aims is to reduce current and emerging threats to Australian maritime transport. According to <cit.>, threats include unauthorized maritime arrivals, maritime terrorism, and piracy, robbery or violence at sea.
In addition to governmental institutions,
researchers have also increasingly been paying attention to the ocean <cit.>. In particular, achieving maritime situational awareness is
an important task for researchers when they are designing maritime surveillance systems. Such systems need to deal with complex and time-varying environments. Specifically, targets[We use the term “target” to identify any vessel that is of interest. Our work is strictly defensive in nature.], such as fishing boats or sailing boats, can frequently change their directions, and their long-term trajectories are typically unknown. In addition, the number of available sensors, e.g., cameras, may be quite low (compared to the number of vessels present in commercial straits or harbours), while the detection range and FOV of cameras are limited. As a result, resource allocation and sensor path-planning in real time are essential to address these issues <cit.>.
In this paper, we consider scenarios where targets can initially be outside the coverage range of some sensors. Moreover, our sensors are cameras, radars and automatic identification systems (AIS[AIS: <https://en.wikipedia.org/wiki/Automatic_identification_system>]).
However, unlike <cit.> and <cit.>, we place these cameras onboard boats, e.g., guard boats or unmanned surface vehicles (USVs); thus, the sensors can move towards their targets in order to take photos or videos of these targets. Here, we assume that both the sensors and their targets move across the surface of a sea.
We define by 𝒞 and 𝒱 a set of available cameras used and a set of targets, respectively. Thus, |𝒞| is the number of cameras while |𝒱| is the number of targets. Each target v ∈𝒱 moves around in the area that is to be monitored, e.g., Sydney Harbor in Australia (see Fig. <ref>), and the target may initially not reside within the sensing range of any camera c ∈𝒞. Here, the cameras are not only partially aware of targets' states[Partial awareness means that the cameras know that targets exist, but they do not know exactly where the targets are.], but they are also unaware of target trajectories. Thus, we need to estimate the targets' locations over a time horizon. First, we assume that the motion of |𝒱| targets is linear without any changes in moving speeds or directions. This assumption is reasonable because ships typically travel along sea lanes in harbors according to maritime vessel traffic <cit.>. Then, we rely on this assumption and our measurements that are collected by a stationary radar r and the AIS
to estimate future positions of targets.
In practice, radars and cameras can determine a range and bearing to targets on their own, while the AIS broadcasts information about the location and speed of vessels <cit.>. Like <cit.>, we assume that the radar, AIS and cameras provide us with observations of the targets' location.
In addition, we have to allocate the cameras to the targets and plan the cameras' trajectories. Here, we are only partially aware of the target state through measurements from the radar and AIS, while the target trajectories are unknown. This makes both allocating limited resources and planning the cameras' trajectories challenging.
§.§ Background
Due to uncertainties in high mobility environments like maritime situations <cit.>, a number of research projects focus on maximizing sensors' coverage area or on planning paths for sensors to accurately sense targets. Specifically, <cit.> attempts to maximize sensor coverage in wireless sensor networks (WSN) using a local search-based algorithm. In addition, the authors mathematically determine the upper and lower bounds of coverage deployment. Compared to <cit.>, who minimize the
WSN cost, <cit.> uses a meta-heuristic algorithm to optimize the number of sensors and their locations. In contrast to these works, <cit.> considers uncertainties of response time and demand in the problem of locating emergency service facilities. Relying on an uncertainty theory, the authors model the maximal location problem under the uncertainties; then, they transform the model into an equivalent deterministic problem to solve it. However, all these works require knowledge of the target trajectories.
To track targets moving with an unknown trajectory, <cit.> estimate targets' position and velocity, and then determine trajectories for autonomous underwater vehicles (AUVs) or UAVs. For example, based on range measurement, <cit.> predicts a target's future position based on its past trajectory and estimated velocity. Then, a reference path that closely follows the target's predicted path is generated. A controller uses feedback from the AUV's sensors to follow the reference path while maintaining a safe distance from the target.
Similarly, <cit.> and <cit.> employ data collection, target analysis, resource optimization or route optimization, while their goal is to improve the accuracy and effectiveness in tracking multiple targets. Interestingly, the work of <cit.> is effective only when sensors are stationary and when targets always stay inside the sensors' sensing range. As another limitation, <cit.> demonstrate a resource management scheme in the scenario where only one mobile sensor is employed.
To plan paths for multiple mobile sensors or UAVs, <cit.> propose algorithms based on the partially observable Markov decision process (POMDP) framework <cit.>. In <cit.>, UAVs are guided to track multiple ground targets by selecting the best velocity and heading angle for the UAVs. That work also takes the effects of wind into account when controlling UAVs. Instead of designing a path-planning algorithm, <cit.> focuses on determining the positions of UAVs in the future with the target which is to search and track multiple mobile objects. Both <cit.> and <cit.> do not resolve the issue where the targets do not traverse within the coverage range of the sensors initially. Unlike <cit.>, the work of <cit.> introduces an algorithm which plans for multiple sensors to follow multiple targets leaving the sensors’ observation range.
One significant issue with the algorithms in <cit.> is that they are efficient (or demonstrated) in scenarios where the number of sensors and targets is limited to three and five, respectively. Additionally, these algorithms may not converge to an optimal solution. To tackle this issue, <cit.> and <cit.> have utilized regret matching learning, a form of online learning, to create multi-agent system algorithms whose convergence is guaranteed. The effectiveness of regret-matching-learning-based algorithms is shown in various applications such as seasonal forecasting and matching markets without incentives <cit.>. Additionally, the regret matching learning-based algorithms in <cit.> and <cit.>, have been shown to converge to correlated-equilibrium solutions more quickly than reinforcement-learning-based algorithms.
§.§ Contribution
In this paper, we propose a multi-sensor allocation and planning approach to observe multiple targets with unknown trajectories, while minimizing the time required to achieve situational awareness.
Unlike <cit.>, we consider that the number of available mobile cameras can be large, e.g., 30 cameras, and that it is lower than the number of targets. The targets are (at times) outside the limited FOV of the cameras. Due to the partial awareness of the target state, as well as the unawareness of future target trajectories, planning multiple cameras' paths to observe all targets within a short time in this scenario is a non-trivial problem. To solve this problem, we will assign cameras to targets; these cameras then make their own plans to move towards the targets using regret-matching learning. We summarize the contributions of this paper in the following.
* Unlike <cit.>, we formulate a problem to not only assign cameras to targets but also to generate a trajectory for each camera over a time horizon for the given maritime situation. In particular, the partial awareness of target state, and the unawareness of target trajectory are taken into account in our problem formulation.
* Due to the complexity and the large size of our formulated problem, i.e., a large number of sensors and targets, or a long time horizon, we propose two algorithms:
a sensor allocation algorithm and a distributed multi-agent path planning algorithm, to deal with the problem. Using the sensor-target distance and the duration that a target has not been observed, the former allocates cameras to targets to shorten the duration of observing all the targets. Subsequently, the latter employs
regret-matching learning so that the sensors create their trajectory as individual agents in a multi-agent system. In our online-learning process, we use a forgetting factor to make the proposed distributed algorithm converge quickly to a correlated equilibrium,
similar to our previous work <cit.>.
* We evaluate the performance using computational experiments that extend scenarios from the literature by several orders of magnitude. The results demonstrate the efficiency of the proposed approach in situational awareness improvement and observation duration.
The remainder of the paper is organized as follows. Section <ref> presents the system model, including a dynamic model and a measurement model, while Section <ref> describes the problem formulation for sensor allocation and path planning over a time horizon. Then, we describe in Section <ref> our multi-sensor allocation algorithm and our distributed multi-agent path planning algorithm. Section <ref> reports on our experiments to illustrate the efficiency of the proposed approach. Finally, we summarize the paper in Section <ref>.
§ SYSTEM MODEL
To formulate the problem, we need to model the mobility of targets, as well as the observations (measurements) of radars, AIS and cameras. The models of target motion and sensor measurement are described in the following subsections.
§.§ Dynamic Model
Like <cit.>, we assume that each target in our maritime scenario follows a nearly-constant velocity (NCV) model. The NCV model is given by:
x^v_t+1 = F_t ·x^v_t + w^v_t
with
x^v_t =(x^v_t ẋ^v_t y^v_t ẏ^v_t)^T,
F_t = [ 1 Δ t 0 0; 0 1 0 0; 0 0 1 Δ t; 0 0 0 1 ],
where x^v_t and y^v_t are target v's position while ẋ^v_t and ẏ^v_t are target v's velocity at time step t in a 2-D coordinate system. x^T is the transposition of vector x. In addition, Δt is the duration of each time step. w^v_t ∼ N(0,Q^v_t) is zero-mean white Gaussian process noise with the following covariance Q^v_t.
Q^v_t = σ_v^2[ (Δ t)^3/3 (Δ t)^2/2 0 0; (Δ t)^2/2 Δ t 0 0; 0 0 (Δ t)^3/3 (Δ t)^2/2; 0 0 (Δ t)^2/2 Δ t ].
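For concreteness, the following Python sketch (illustrative only; the step length and noise intensity are assumed values, not taken from the cited works) builds F_t and Q^v_t and propagates a single target state.

import numpy as np

def ncv_matrices(dt, sigma_v):
    """F and Q of the nearly-constant-velocity model for [x, x_dot, y, y_dot]."""
    F = np.array([[1.0, dt, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, dt],
                  [0.0, 0.0, 0.0, 1.0]])
    q = np.array([[dt**3 / 3.0, dt**2 / 2.0],
                  [dt**2 / 2.0, dt]])          # per-axis block
    Q = sigma_v**2 * np.block([[q, np.zeros((2, 2))],
                               [np.zeros((2, 2)), q]])
    return F, Q

# Propagate one target for a single step (dt and sigma_v are assumed values).
rng = np.random.default_rng(0)
F, Q = ncv_matrices(dt=10.0, sigma_v=0.05)
x = np.array([1000.0, 2.0, -500.0, 1.5])       # [x, x_dot, y, y_dot]
x_next = F @ x + rng.multivariate_normal(np.zeros(4), Q)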
§.§ Measurement Model
Like <cit.>, we model the observations of the radar r, AIS and each camera c ∈𝒞, respectively, in the following.
z^rv_t = H^r_t ·x^v_t + v^rv_t
z^AISv_t = H^AIS_t ·x^v_t + v^AIS_t
z^cv_t = H^c_t ·x^v_t + v^cv_t
with
H^r_t = H^AIS_t = H^c_t = [ 1 0 0 0; 0 0 1 0 ],
where v^rv_t ∼ N(0,R^rv_t), v^AIS_t ∼ N(0,R^AIS_t) and v^cv_t ∼ N(0,R^cv_t) are additive noise terms. Then, R^rv_t, R^AIS_t and R^cv_t are determined as follows:
R^rv_t = (σ^rv_t)^2 [ 1 0; 0 1 ], R^AIS_t = (σ^AIS)^2 [ 1 0; 0 1 ], R^cv_t = (σ^cv_t)^2 [ 1 0; 0 1 ].
In a maritime environment, only the uncertainties in the observations of radars and cameras depend on the distance between these observers and their targets. Thus, in Eq. (<ref>), σ^rv_t and σ^cv_t are proportional to the distance from radar r or camera c to target v at time step t, while σ^AIS is assumed to be constant (because it is GPS-based).
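A minimal sketch of the measurement model follows, assuming the distance-proportional standard deviations σ^rv_t = d^rv_t·p^r/100 and σ^cv_t = d^cv_t·p^c/100 used later in the experiments; the helper function and its interface are our own illustration.

import numpy as np

H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])          # position-only observation matrix

def measure(x_target, sensor_pos, p_percent=None, sigma_const=None, rng=None):
    """Noisy position measurement: distance-proportional noise for radar/camera
    (sigma = d * p/100), constant noise for AIS (GPS-based)."""
    rng = rng or np.random.default_rng()
    pos = H @ x_target
    if p_percent is not None:                  # radar or camera
        d = np.linalg.norm(pos - sensor_pos)
        sigma = d * p_percent / 100.0
    else:                                      # AIS
        sigma = sigma_const
    R = sigma**2 * np.eye(2)
    return rng.multivariate_normal(pos, R), R

# Example: radar at an assumed position with an assumed percentage p^r = 1.
z, R = measure(np.array([1000.0, 2.0, -500.0, 1.5]),
               sensor_pos=np.array([95000.0, -95000.0]), p_percent=1.0)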
§ PROBLEM FORMULATION
To assign targets to cameras and to plan trajectories for these cameras (possibly over a long time horizon), we have to estimate the targets' positions using the observations collected by radar r, AIS and cameras.
Furthermore, in this paper, we consider that the overall maritime situational awareness is achieved (albeit temporarily) once all targets are observed at least once[This is merely a practical consideration. As we will demonstrate, our approach can handle unbounded scenarios.].
Here, we define that a target is observed by a camera if it resides within the camera's detection range and FOV, and the mean squared error (MSE) between its estimated value and ground truth is equal to or less than a pre-given threshold.
Because a target v could be tracked by radar r, AIS and camera c, by building on <cit.>, the total MSE of |𝒱| targets is calculated over a time horizon H as follows:
J_H ≈∑_t=t+0^t+H-1∑_v∈𝒱Tr(P^v_t+1)
= ∑_t=t+0^t+H-1∑_v∈𝒱Tr( 1/1/P^rv_t+1+1/P^AISv_t+1+1/P^cv_t+1),
where t=t+0 is the current time step; and Tr(P^v_t+1) is the trace of matrix P^v_t+1. To compute P^rv_t+1, P^AISv_t+1 and P^cv_t+1, we have
P^rv_t+1=
1/1/P^rv_t+1|t+S^rv_t+1 if v is observed by r,
P^rv_t+1|t otherwise
P^AISv_t+1=
1/1/P^AISv_t+1|t+S^AISv_t+1 if v is observed by AIS,
P^AISv_t+1|t otherwise
P^cv_t+1=
1/1/P^cv_t+1|t+S^cv_t+1 if v is observed by c,
P^cv_t+1|t otherwise,
where P^rv_t+1|t=F_v ·P^rv_t·F^T_v + Q^v_t, P^AISv_t+1|t=F_v ·P^AISv_t·F^T_v + Q^v_t and P^cv_t+1|t=F_v ·P^cv_t·F^T_v + Q^v_t. Additionally, S^rv_t+1=(H^r_t+1)^T ·(R^rv_t+1)^-1·H^r_t+1, S^AISv_t+1=(H^AIS_t+1)^T ·(R^AIS_t+1)^-1·H^AIS_t+1 and S^cv_t+1=(H^c_t+1)^T ·(R^cv_t+1)^-1·H^c_t+1. We assume that target v will be observed by radar r, AIS and camera c at time step t+1 if target v's nominal mean positions, defined by ξ^rv_t+1, ξ^AISv_t+1 and ξ^cv_t+1, are in the coverage area of radar r and AIS, and the FOV of camera c, respectively. Similar to <cit.>, we calculate ξ^rv_t+1, ξ^AISv_t+1 and ξ^cv_t+1, respectively, as follows:
ξ^rv_t+1 = F_v ·ξ^rv_t,
ξ^AISv_t+1 = F_v ·ξ^AISv_t,
ξ^cv_t+1 = F_v ·ξ^cv_t.
When t=t+0, (ξ^rv_t, P^rv_t), (ξ^AISv_t,P^AISv_t) and (ξ^cv_t,P^cv_t) are the state update and covariance update of target v tracked by radar r, AIS and camera c, respectively. In addition, we calculate (ξ^rv_t, P^rv_t), (ξ^AISv_t,P^AISv_t) and (ξ^cv_t,P^cv_t) using Kalman filter and joint probabilistic data association. Therefore, we formulate the optimization problem of multi-sensor allocation and path planning over a time horizon H in the following.
The work of <cit.> relies on the partially observable Markov decision process framework, which infers the true world state from previous actions and observations.
Thus, we formulate the camera allocation and trajectory planning problem according to this framework.
§.§ Overview of the Framework in <cit.>
A POMDP comprises six different components, namely 𝒳, 𝒜, 𝒪, 𝒯, 𝒵 and 𝒞. Here, 𝒳 is used to represent the possible states of both mobile sensors and targets while 𝒜 is a set of the mobile sensors' actions. In addition, 𝒪 is a set of observations. 𝒯 is the state-transition function that defines how the system moves from one state to another. 𝒵 is the observation likelihood function that relates the observations to the states and actions. Finally, 𝒞 represents the cost function that specifies the benefits of each action. The objective of a POMDP is to identify a policy where the lowest expected cost is achieved. Here, the policy is determined by selecting an action at each time step.
According to <cit.> and <cit.>, we have
* State: The POMDP's state at time step t defined by χ_t includes the states of cameras χ^𝒞_t, the states of target vessels χ^𝒱_t, and the states of filter ℱ_t (χ_t=(χ^𝒞_t,χ^𝒱_t,ℱ_t)). Here, only the camera states and the filter states are fully observable. In addition, the filter states ℱ_t contain tracks of |𝒱| targets; and it is represented by Gaussian distribution with mean ξ_t and covariance P_t (ℱ_t=(ξ_t,P_t)).
* Actions: The cameras' action at time step t, defined by a_t, consists of the speeds and moving angles of the |𝒞| cameras on the sea surface. Similar to <cit.>, the movement of cameras in the maritime environment is modelled as follows:
x^c_t+1 = x^c_t + s^c_t ·cos(ϕ^c_t) ·Δ t
y^c_t+1 = y^c_t + s^c_t ·sin(ϕ^c_t) ·Δ t
,
where x^c_t and y^c_t are positions of camera c in a 2-D coordinate system at time step t. Additionally, s^c_t and ϕ^c_t are the speed and the moving angle of camera c at time step t.
* Observations and Observation Law: The observations of the targets' positions at time step t are given by Eq. (<ref>).
* State Transition: The camera state, first, evolves according to a camera motion model. Then, the evolution of the target state is determined using Eq. (<ref>). Finally, the tracker state follows Kalman filter equations and joint probabilistic data association.
* Cost function: Since the goal is to minimize the mean-squared error (MSE) between the estimated and true positions of targets, the cost function is given by:
C(χ_t,a_t) = 𝔼[‖χ^𝒱_t+1 - ξ_t+1‖ ^2|χ_t,a_t],
where 𝔼[…|χ_t,a_t] is the conditional expectation given that action a_t is performed in state χ_t at time step t.
* Belief state: The belief state at time step t is given by b_t=(b^𝒞_t,b^𝒱_t,b^ξ_t,b^P_t) where b^𝒞_t=δ(χ^𝒞-χ^𝒞_t), b^ξ_t=δ(ξ-ξ_t) and b^P_t=δ(P-P_t). In addition, b^𝒱_t is the posterior distribution of target belief state.
Consequently, we write the objective function over a time horizon H in the belief states as follows:
J_H(b_0) = 𝔼[∑_t=0^H-1c(χ_t,a_t)|b_0],
where c(b_t,a_t) = ∫C(χ,a_t) · b_t(χ) dχ. According to Bellman's principle of optimality, we determine the optimal policy at time step t=0 as follows: π_0^⋆(b_0) = argmin_a{c(b_0,a) + 𝔼[J^⋆_H-1(b_1)|b_0,a]},
where a is an action, J^⋆_H-1(b_1) is the optimal cumulative cost within H-1 time steps, and J^⋆_H-1(b_1) = min_a{c(b_1,a) + 𝔼[J^⋆_H-2(b_2)|b_1,a]}. Therefore, the optimal policy at time step t is expressed as:
π_t^⋆(b_t) = argmin_a{c(b_t,a) + 𝔼[J^⋆_H-1-t(b_t+1)|b_t,a]}.
Due to the complexity of calculating 𝔼[J^⋆_H-1-t(b_t+1)|b_t,a] in Eq. (<ref>), we employ an approximation approach, Nominal Belief-State Optimization (NBO). Specifically, the objective function in Eq. (<ref>) is approximated as
J_H(b_0) ≈∑_t=0^H-1c(b_t,a_t),
where b_t is the nominal belief state at time step t. To calculate the nominal belief states of a target v ∈𝒱 (b^𝒱_v_t+1,…,b^𝒱_v_H-t-1), we employ the target's track ξ^v_t and P^v_t which are obtained using Kalman filter method and joint probabilistic data association. Then, we have b^𝒱_v_t ∼ N(ξ^v_t;P^v_t) and ξ^v_t+1=F_v ·ξ^v_t. Additionally, P^v_t+1 is computed as follows <cit.>:
P^v_t+1=
1/1/P^v_t+1|t+S^v_t+1 if v is observed,
P^v_t+1|t otherwise.
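To make the NBO rollout concrete, the sketch below propagates the nominal mean and covariance of one target over the horizon and accumulates the trace cost; the coverage callback (info_gains) and its interface are assumptions of this illustration, not part of the cited framework.

import numpy as np

def nbo_cost(xi0, P0, F, Q, horizon, info_gains):
    """Nominal belief-state rollout for one target.

    info_gains(t, xi) must return a list of information matrices S (one per
    sensor whose coverage contains the nominal mean xi at step t); an empty
    list means no observation at that step.
    """
    xi, P, cost = xi0.copy(), P0.copy(), 0.0
    for t in range(horizon):
        xi = F @ xi                               # nominal mean: noise-free prediction
        P = F @ P @ F.T + Q                       # predicted covariance
        S_list = info_gains(t, xi)
        if S_list:                                # information-form measurement update
            P = np.linalg.inv(np.linalg.inv(P) + sum(S_list))
        cost += np.trace(P)                       # contribution to J_H
    return cost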
§.§ Problem Formulation
Since target v in this paper is tracked by radar r, AIS and camera c ∈𝒞, we have ξ^rv_t+1=F_v ·ξ^rv_t, ξ^AISv_t+1=F_v ·ξ^AISv_t and ξ^cv_t+1=F_v ·ξ^cv_t, and
P^rv_t+1=
1/1/P^rv_t+1|t+S^rv_t+1 if v is observed by r,
P^rv_t+1|t otherwise
P^AISv_t+1=
1/1/P^AISv_t+1|t+S^AISv_t+1 if v is observed by AIS,
P^AISv_t+1|t otherwise
P^cv_t+1=
1/1/P^cv_t+1|t+S^cv_t+1 if v is observed by c,
P^cv_t+1|t otherwise,
where P^rv_t+1|t=F_v ·P^rv_t·F^T_v + Q^v_t, P^AISv_t+1|t=F_v ·P^AISv_t·F^T_v + Q^v_t and P^cv_t+1|t=F_v ·P^cv_t·F^T_v + Q^v_t. Additionally, S^rv_t+1=(H^r_t+1)^T ·(R^rv_t+1)^-1·H^r_t+1, S^AISv_t+1=(H^AIS_t+1)^T ·(R^AIS_t+1)^-1·H^AIS_t+1 and S^cv_t+1=(H^c_t+1)^T ·(R^cv_t+1)^-1·H^c_t+1. We assume that measurements are available at time step t+1 if the nominal mean positions of target v at time step t+1, i.e., ξ^rv_t+1, ξ^AISv_t+1 and ξ^cv_t+1, are in the coverage area of radar r and AIS, and the FOV of camera c, respectively.
As a result, the cost function in Eq. (<ref>) is rewritten as c(b_t,a_t) = ∑_v∈𝒱Tr(P^v_t+1) with P^v_t+1=1/1/P^rv_t+1+1/P^AISv_t+1+1/P^cv_t+1.
In this paper, we consider that maritime situational awareness is improved when all targets are observed by cameras. Furthermore, we define that a target is observed by a camera if it resides within the camera's detection range and FOV, and the MSE between its estimated value and ground truth is equal to or less than a pre-given threshold. Therefore, from Section <ref>, we rewrite the optimization problem of multi-sensor allocation and path planning over a time horizon H as follows:
min_x^cv_t,s^c_t,ϕ^c_t
∑_t=t+0^t+H-1{∑_v∈𝒱 [ (Tr (P^v_t+1) - ϵ)
+α_1∑_c∈𝒞 x^cv_t(D_min-d^cv_t+1)(d^cv_t+1-D_max)
+α_2 ∑_c∈𝒞(D_safe-d^cv_t+1)
]
+α_3∑_c∈𝒞∑_c'∈𝒞,c'≠c (D_safe-d^cc'_t+1)
}
s.t.
s_min ≤s^c_t ≤s_max
∀c ∈𝒞; t=t+0,…,t+H-1
0 ≤|ϕ^c_t-ϕ^c_t-1| ≤Φ_max
∀c ∈𝒞; t=t+0,…,t+H-1
∑_c ∈𝒞 x^cv_t ≤1
∀v ∈𝒱; t=t+0,…,t+H-1
∑_v ∈𝒱 x^cv_t = 1
∀c ∈𝒞; t=t+0,…,t+H-1
with
P^v_t+1=1/1/𝐏^rv_t+1+1/𝐏^AISv_t+1+∑_c∈𝒞x^cv_t/𝐏^cv_t+1,
where Tr(P^v_t+1) is the trace of matrix P^v_t+1; and ϵ is the MSE threshold. We use x^cv_t for assigning a target v to a camera c. If x^cv_t=1, target v is assigned to camera c. Otherwise, target v is not assigned to camera c. Moreover, s^c_t and ϕ^c_t are speed and moving angle of camera c, respectively, at time step t.
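A short sketch of the covariance fusion in the expression above follows, assuming the per-sensor covariances have already been predicted and updated to time step t+1.

import numpy as np

def fused_trace(P_radar, P_ais, P_cams, assign):
    """Trace of P^v_{t+1} = 1 / (1/P^rv + 1/P^AISv + sum_c x^{cv}/P^cv)."""
    info = np.linalg.inv(P_radar) + np.linalg.inv(P_ais)
    for c, P_c in enumerate(P_cams):
        if assign[c]:                      # x^{cv}_t = 1 for the assigned camera
            info += np.linalg.inv(P_c)
    return np.trace(np.linalg.inv(info))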
Different from <cit.>, the objective function in (<ref>) includes the following four terms: 1) the MSE, 2) the distance between the position of camera c and the nominal mean position of the target v assigned to c, 3) the distance between the nominal mean position of target v and the positions of all cameras, and 4) the distance between a camera and the other cameras at time step t+1. Here, the second term ensures that camera c keeps moving toward its assigned target v until v resides within its FOV. Then, because cameras and targets move on the same surface (the surface of a sea), we use the third and fourth terms to guarantee a safe distance between cameras and targets. In addition, we have
Tr(P^v_t+1) - ϵ =
0 if Tr(P^v_t+1) ≤ϵ,
Tr(P^v_t+1) - ϵ otherwise
D_min-d^cv_t+1 =
1 if d^cv_t+1≥ D_min,
D_min-d^cv_t+1 otherwise
d^cv_t+1 - D_max =
1 if d^cv_t+1≤ D_max,
d^cv_t+1 - D_max otherwise
D_safe - d^cv_t+1 =
0 if d^cv_t+1≥ D_safe,
D_safe - d^cv_t+1 otherwise
D_safe - d^cc'_t+1=
0 if d^cc'_t+1≥ D_safe,
D_safe - d^cc'_t+1 otherwise
where the values of D_min, D_max and D_safe are pre-given; and α_1, α_2 and α_3 are scaling factors (α_1,α_2,α_3 ∈ [0;1]). Additionally, d^cv_t+1 is the distance between the position of camera c and the nominal mean position ξ^cv_t+1 of target v while d^cc'_t+1 is the distance between camera c and camera c' at time step t+1. Given s^c_t and ϕ^c_t, the position of camera c at time step t+1 is determined using the following movement model, which is applicable to maritime situations <cit.>.
x^c_t+1 = x^c_t + s^c_t ·cos(ϕ^c_t) ·Δ t
y^c_t+1 = y^c_t + s^c_t ·sin(ϕ^c_t) ·Δ t
,
where x^c_t and y^c_t are positions of camera c in a 2-D coordinate system (the sea surface) at time step t.
Conditions (<ref>) and (<ref>) describe the admissible ranges of speed and moving angle of each mobile camera c. Condition (<ref>) states that a target must not be assigned to more than one camera at each time step because the number of cameras is insufficient. Moreover, condition (<ref>) requires that each camera is assigned exactly one target, which it approaches until the target resides within the camera's FOV. This constraint mimics a human observer who focuses on one target at a time.
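The clipped terms and the camera motion update can be sketched as follows; the function names and all threshold values are assumed inputs for illustration.

import numpy as np

def camera_step(x, y, speed, heading, dt):
    """Camera motion model on the sea surface (the equations above)."""
    return x + speed * np.cos(heading) * dt, y + speed * np.sin(heading) * dt

def clipped_terms(trP, eps, d_cv, d_cc, D_min, D_max, D_safe):
    """Clipped objective terms, following the piecewise definitions above."""
    mse_term = max(trP - eps, 0.0)
    fov_term = ((1.0 if d_cv >= D_min else D_min - d_cv)
                * (1.0 if d_cv <= D_max else d_cv - D_max))
    safe_cv = max(D_safe - d_cv, 0.0)
    safe_cc = max(D_safe - d_cc, 0.0)
    return mse_term, fov_term, safe_cv, safe_cc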
§ PROPOSED SENSOR ALLOCATION AND PATH PLANNING APPROACH
It is hard to solve the problem (<ref>) because of its complexity and its large search space, i.e., [(|𝒞|×|𝒱|)×(|ℒ_s|×|ℒ_m|)^|𝒞|]^H where |ℒ_s| and |ℒ_m| are the number of speed levels and heading angles of each camera. To this end, we design two separate algorithms: a sensor allocation algorithm and a distributed multi-agent path planning algorithm. Based on a greedy algorithm, the former distributes cameras to targets to reduce the duration of sensing all the targets. Moreover, in the latter, each camera is considered as an agent in a multi-agent system. These agents individually plan their trajectory using an online-learning method called regret-matching-learning (RML). We summarize the proposed camera allocation and path planning approach in Fig. <ref>. Furthermore, we present the pseudo-code of the proposed sensor allocation and RML-based distributed path planning approach in Alg. <ref>.
§.§ Multi-Sensor Allocation Scheme
Due to the limited number of mobile cameras, these cameras have to be distributed to targets at the current time step t=t+0 before planning their trajectory. Unlike <cit.>, we introduce the following metric as a criterion for allocating a camera to a target at time step t=t+0.
β^cv_t=Δτ^v_t/d^cv_t ∀ c ∈𝒞
∀ v ∈𝒱,
where Δτ^v_t describes how long target v has not been observed by any cameras until time step t. If target v is observed at a time step t, Δτ^v_t will be reset to 0. Otherwise, it increases by one time step. Moreover, d^cv_t is the distance between camera c and target v at time step t=t+0. Here, we consider the location of target v at time step t=t+0 as ξ^cv_t, which is computed using Kalman filter and joint probabilistic data association. Consequently, a |𝒞|-by-|𝒱| matrix M_t with β^cv_t as an element is computed as follows:
M_t =
[ β^11_t β^1|𝒱|_t; ⋮ ⋱ ⋮; β^|𝒞|1_t β^|𝒞||𝒱|_t ].
Using the matrix M_t, the 3-step process where we allocate a camera to a target is summarized in the following.
* Step 1.1: We will assign a camera c to a target v (x^cv_t=1) if their β^cv_t is the maximum element in M_t.
* Step 1.2: The other elements β^cv_t relating to this camera c and this target v are removed from M_t.
* Step 1.3: If all |𝒞| cameras are allocated to targets,
we stop the process. Otherwise, we go back to Step 1.1.
In the proposed sensor allocation scheme, the sensor allocation depends not only on the distance between observers and their targets but also on the duration for which the targets have not been looked at by any camera. Therefore, this proposed scheme is helpful for reducing the duration of observing all the targets at least once.
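A minimal sketch of Steps 1.1-1.3, assuming the matrix M_t has already been formed from Δτ^v_t and d^cv_t; the small distance floor is added only to avoid division by zero.

import numpy as np

def allocate(unobserved_steps, distances):
    """Greedy allocation based on beta^{cv} = Delta tau^v / d^{cv}.

    unobserved_steps : length-|V| array, steps each target has gone unobserved
    distances        : |C| x |V| array of camera-target distances
    Returns a dict camera index -> target index.
    """
    M = unobserved_steps[None, :] / np.maximum(distances, 1e-9)
    n_cam, n_tar = M.shape
    assignment = {}
    for _ in range(min(n_cam, n_tar)):
        c, v = np.unravel_index(np.argmax(M), M.shape)   # Step 1.1: largest beta
        assignment[c] = v
        M[c, :] = -np.inf                                # Step 1.2: remove camera c
        M[:, v] = -np.inf                                #           and target v
    return assignment                                    # Step 1.3: all cameras used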
§.§ RML-based Distributed Multi-Agent Path Planning Scheme
After targets are assigned to cameras, we need to plan trajectories for the cameras to move forward to their targets. This trajectory planning problem is reformulated as a multi-agent distributed problem. Here, we consider each mobile camera as an agent which is an independent decision maker. As soon as these agents perform an action, they will update the other agents with their decisions[Assume that there is a network where the agents are able to communicate or exchange their information. Here, the communication protocol is out of the paper's scope.].
Relying on the updated actions, the agents will learn to achieve an acceptable solution together.
In this paper, planning paths for multiple cameras at time step t is modelled as an iterative game 𝒢_t where the players play actions to gain their own long-run average benefit. The iterative game 𝒢_t consists of a finite set of players 𝒞 (the cameras), a set of actions 𝒜_t and a set of utility functions of players 𝒰_t (𝒢_t=(𝒞,𝒜_t,𝒰_t)). We denote by 𝒜_t=𝒜_t,1×𝒜_t,2×⋯×𝒜_t,T the set of all players' actions over T iterations. Additionally, 𝒜_t,τ=𝒜^1_t,τ×𝒜^2_t,τ×⋯×𝒜^|𝒞|_t,τ with τ=1,2,…,T. Each player c ∈𝒞 at iteration τ of time step t has a finite set of actions 𝒜^c_t,τ which consists of several levels of speed and moving angle. The number of speed levels and moving angles is the same for all |𝒞| cameras and does not vary over time. We denote by 𝒰_t=𝒰_t,1×𝒰_t,2×⋯×𝒰_t,T the set of utility functions at time step t. Here, let 𝒰_t,τ ={u^1_t,τ,u^2_t,τ,…,u^|𝒞|_t,τ} define the set of utility functions of the |𝒞| cameras at iteration τ of time step t.
We denote by a^c_t,τ the action which camera c performs at iteration τ of time step t (a^c_t,τ∈𝒜^c_t,τ). Here, a speed level s^c_t,τ and a moving angle ϕ^c_t,τ of camera c are included in an action a^c_t,τ (a^c_t,τ=(s^c_t,τ,ϕ^c_t,τ)). According to the objective function in problem (<ref>), when each camera c plays action a^c_t,τ, the utility function of camera c is given by:
u^c_t,τ(a^c_t,τ,a^-c_t,τ) =
-[(Tr(P^v_t+1) - ϵ)
+α_1 (D_min-d̂^cv_t+1)(d̂^cv_t+1-D_max)
+α_2 ∑_v'∈𝒱(D_safe-d̂^cv'_t+1)
+α_3∑_c∈𝒞∑_c'∈𝒞,c'≠ c (D_safe - d^cc'_t+1 )],
where v is the target assigned to camera c using the sensor allocation process in Section <ref>. Moreover, we denote by a^-c_t,τ the actions which the other cameras 𝒞∖{c} play at iteration τ of time step t. At each iteration τ, given the other agents' actions a^-c_t,τ, agent c picks an action a^c_t,τ that maximizes its utility. By maximizing the sum of the |𝒞| agents' utilities, we minimize the objective function in problem (<ref>).
According to <cit.>, a game-based solution will converge at a set of equilibria where each player cannot
improve its utility by changing its decision. Moreover, the equilibrium of our iterative game 𝒢_t is a correlated equilibrium (CE). We define by π^c_t,τ a probability distribution from which an agent c selects an action a^c_t,τ in an action set 𝒜^c_t,τ. The probability distribution π^c_t,τ will be a CE if it holds true that
∑_a^-c_t,τ∈𝒜^-c_t,τπ^c_t,τ(a^c_t,τ,a^-c_t,τ)
[u^c_t,τ(a'^c_t,τ,a^-c_t,τ) - u^c_t,τ(a^c_t,τ,a^-c_t,τ) ] ≤ 0,
for every camera c∈𝒞 and every action pair a^c_t,τ,a'^c_t,τ∈𝒜^c_t,τ. In a CE, each agent does not gain any benefits by choosing the other actions according to the probability distribution π^c_t,τ if the other agents do not change their decisions.
In the proposed distributed multi-agent path planning algorithm, we employ a learning process, regret-matching procedure. According to this procedure, each agent is allowed to individually compute the probability of its action which is proportional to the regrets for not selecting the other actions. Specifically, at an iteration τ of time step t, for any two actions j,k ∈𝒜^c_t,τ and j ≠ k, the cumulative regret of an agent c for not selecting action k instead of action j up to iteration τ is given by:
D^c_t,τ(j,k)=1/τ∑_τ̂=1^τ1(a^c_t,τ̂=j){
u^c_t,τ̂(k,a^-c_t,τ̂)-u^c_t,τ̂(j,a^-c_t,τ̂)},
where 1(.) is an indicator function. If the value of the regret is negative, the agent c will not regret when playing action j instead of action k. Here, we can express the cumulative regret recursively as follows:
D^c_t,τ(j,k)=(1-1/τ)D^c_t,τ-1(j,k)+1/τD̂^c_t,τ(j,k),
where D̂^c_t,τ(j,k)= 1(a^c_t,τ=j){
u^c_t,τ(k,a^-c_t,τ)-u^c_t,τ(j,a^-c_t,τ)} defines the instantaneous regret when agent c has not played action k instead of action j. Through Eq. (<ref>), it is noteworthy that the outdated regret D^c_t,τ-1(j,k) has a significant influence on the updated cumulative regret D^c_t,τ(j,k). This causes a slow convergence speed for our proposed path planning algorithm. Meanwhile, in a highly dynamic environment such as the maritime environment, our proposed path planning algorithm is required to converge to a CE quickly. Therefore, like <cit.>, we employ a forgetting factor for updating D^c_t,τ(j,k) in the following.
D^c_t,τ(j,k)=λ·D^c_t,τ-1(j,k)+(1-λ)D̂^c_t,τ(j,k),
where λ∈ [0;1] is the forgetting factor. Here, the influence of outdated regret values on the updated regret is regulated by λ. In addition, the forgetting factor's advantages have been demonstrated in our previous work <cit.>. As a result, the probability that agent c plays an action k in the next iteration at time step t is computed by:
π^c_t,τ+1(k) = 1/μmax{D^c_t,τ(j,k),0} ∀ k ≠ j,
where μ is a normalizing factor to ensure that the probability π^c_t,τ+1 sums to 1. Then, the probability that action j is chosen by agent c in the next iteration is given by:
π^c_t,τ+1(j)=1-∑_k≠ j, k ∈𝒜^c_t,τπ^c_t,τ+1(k).
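One iteration of the regret update with forgetting factor and the resulting mixed strategy can be sketched as follows; the utilities interface (the value u(k, a^-c) for every own action k given the other agents' current actions) is an assumption of this illustration.

import numpy as np

def rml_step(D, j_played, utilities, lam, mu):
    """One regret-matching update with forgetting factor lam.

    D         : |A| x |A| cumulative regret matrix of one agent (entry (j, k))
    j_played  : index of the action played at this iteration
    utilities : length-|A| array of u(k, a^{-c}) for every own action k
    mu        : normalizing factor, chosen large enough that the switching
                probabilities sum to at most one
    Returns the updated regrets and the mixed strategy for the next iteration.
    """
    inst = np.zeros_like(D)
    inst[j_played, :] = utilities - utilities[j_played]     # instantaneous regrets
    D = lam * D + (1.0 - lam) * inst                        # forgetting-factor update
    pi = np.maximum(D[j_played, :], 0.0) / mu               # prob. of switching to k
    pi[j_played] = 0.0
    pi[j_played] = 1.0 - pi.sum()                           # stay with j otherwise
    return D, pi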
§ COMPUTATIONAL STUDY
§.§ Experiment settings
To evaluate the performance and efficiency of our proposed sensor allocation and path planning approach, we conduct computer-based experiments with practical settings. In the experiments, one static radar and AIS always update us with their observations of target positions at the end of each time step. The radar is installed at the position of (95 km,-95 km). Unlike the radar, cameras are able to move around to observe targets, and they are required to observe these targets once. The camera positions are initially set as (0,0) while the positions of targets are randomly distributed between 15 km and 30 km away from the origin. In addition, the targets do not reside within any cameras' FOV. The camera FOV is illustrated in Fig. <ref>. Similar to <cit.>, the uncertainty in measurements of radar r and camera c is given by σ^rv_t=d^rv_t ·p^r/100 and σ^cv_t=d^cv_t ·p^c/100, respectively. Here, d^rv_t and d^cv_t are the distance between the sensors and target v at time step t, respectively. The values of all parameters in our experiments are based on <cit.>, and they are listed in Table <ref>.
To demonstrate how our proposed approach works, we compare our approach with the most relevant work in literature. Specifically, the related work <cit.> has been recently published in 2022, and it aims to minimize the total MSE. In that
work, the targets leave the limited FOV of sensors mounted on UAVs at the end of each time step. Thus, when the targets are outside the sensors' FOV, a target whose MSE is the greatest will be assigned to the nearest UAV. Then, using a greedy algorithm, the work of <cit.> generates the trajectories for multiple UAVs sequentially in a centralized manner to guarantee that each UAV covers the nominal mean position of the assigned target. Here, we re-implement the work of <cit.> in our maritime scenario without any changes.
Our implementations of both our algorithm and <cit.> use only one CPU core. The experiments are run on a MacBook Pro with a 3.1 GHz Dual-Core Intel Core i5.
We evaluate the performance of the two approaches in terms of achieving situational awareness as quickly as possible, and using the
following two metrics.
* Mean squared error (MSE): which represents situational awareness improvement, and is determined through Tr(P^v_t+1) in the objective function (<ref>),
* Observation duration: which is computed from when we start conducting an experiment to when the final target in the experiment is observed.
Furthermore, to fairly compare the two works, we always set up the same scenario with the same parameter settings (including the same random number generator seed) when running experiments.
§.§ Results
In Figs. <ref> and <ref>, we show for two scenarios the trajectories produced by our approach and contrast the results with those achieved by the related work <cit.>: 2 cameras aim to observe 3 targets in the first scenario, and 3 cameras aim to observe 5 targets in the second scenario. We can see that our approach achieves situational awareness (as measured by the time needed to observe all targets at least once) much quicker: 6,270 s and 6,403 s versus 9,034 s and 7,387 s.
That is because our camera allocation depends not only on the distance between the cameras and the targets but also on how long the targets stay unobserved. In addition, our distributed path planning algorithm guarantees a correlated-equilibrium solution
for all the cameras. Figs. <ref> and <ref> complement the observation by focusing on the MSE: they show that both the approaches are able to enhance situational awareness using the cameras. Initially, due to a high uncertainty in observations from radar r and AIS, the MSE of the two approaches remains high when we only rely on these observations. By contrast, the MSE is reduced to a pre-given threshold level when the targets reside within the cameras' FOV and are perceived by the cameras. That is because the cameras are able to provide us measurements with a lower uncertainty.
Exemplarily, we show in Fig. <ref> the “continuous observation”, i.e. we do not stop the experiment once all targets are observed at least once. As a result, the MSE of each target is reduced to the threshold level many times, as shown in Fig. <ref>b.
Table <ref> presents how our proposed approach and <cit.> perform in different scenarios where we vary the camera number |𝒞|, the target number |𝒱| and the time horizon length H. While <cit.> considered the six smallest scenarios, i.e. from
<|𝒞|=2,|𝒱|=3,H=1> to <|𝒞|=3,|𝒱|=5,H=5>, we extend the scenarios by several orders of magnitude.
In all these scenarios, our proposed work observes all targets more quickly than <cit.>. In our proposed approach, the
trajectories for all cameras are generated as soon as the proposed RML-based distributed path planning algorithm converges to an equilibrium solution. This results in a per-step execution time that is longer than that of <cit.>. It is noteworthy that the execution time grows when we plan trajectories over a longer time horizon, i.e., 5 time steps. Nevertheless, the proposed work's running time is not too long, e.g., 1.421 s when we plan paths for 30 cameras over a 1-time-step horizon to observe 50 targets. This demonstrates that our proposed work is not computationally expensive and is applicable to real scenarios. In particular, although the number of cameras and the number of targets increase to 30 and 100, respectively, the observation duration of our proposed approach is still lower than that of <cit.> while its running time is 1.863 s and 9.387 s.
§ CONCLUSION
In this paper, we have presented a sensor allocation and path planning approach to enhance the awareness of multiple targets in maritime situations. Here, the number of available sensors is lower than the number of targets, while their FOV and detection range are limited. An optimization problem is formulated to distribute multiple sensors and plan paths for these sensors over a time horizon. Due to the complexity and large search space of this problem, we cope with it by developing two separate algorithms: a sensor allocation algorithm and a distributed path planning algorithm. The former relies on the sensor-target distance and the duration for which targets have not been observed to allocate the sensors. The latter allows each camera to individually plan its trajectory to observe the targets. Our experimental results show that our proposed work is efficient in minimizing the time needed to achieve situational awareness improvement under several different scenarios.
§ ACKNOWLEDGEMENT
This project is supported by the Australian Research Council (ARC) project number LP200200881, Safran Electronics and Defense Australasia, and Safran Electronics and Defense.
arXiv:2307.00650v1 [math.DS] (also math.PR; MSC 39A50, 37H10, 39A30, 37H30), 2 July 2023
Noisy Prediction-Based Control Leading to Stability Switch
E. Braverman^3 (University of Calgary, Calgary, T2N 1N4, Alberta, Canada)
A. Rodkina (The University of the West Indies, Mona Campus, Kingston, Jamaica)
Applying Prediction-Based Control (PBC) x_n+1=(1-α_n)f(x_n)+α_n x_n
with stochastically perturbed control coefficient α_n=α+ℓξ_n+1, n∈ℕ, where ξ are bounded identically distributed independent random variables, we globally stabilize the unique equilibrium K of the equation
x_n+1=f(x_n)
in a certain domain.
In our results, the noisy control α+ℓξ provides both local and global stability,
while the mean value α of the control does not guarantee global stability; for example, the deterministic controlled system can have a stable two-cycle, while the non-controlled map can be chaotic. In the case of unimodal f with a negative Schwarzian derivative, we get sharp
stability results generalizing Singer's famous statement `local stability implies global' to the case of the stochastic control.
New global stability results are also obtained in the deterministic settings for variable α_n and, generally, continuous but not differentiable at K map f.
Stochastic difference equations Prediction-Based Control global stability sharp stability conditions negative Schwarzian derivative noise-induced stability
[2008] 39A50 37H10 39A30 37H30
§ INTRODUCTION
[3]Corresponding author, e-mails [email protected]; [email protected]
We consider the map
x_n+1=f(x_n), x_0>0,
where f:[0, ∞)→ [0, ∞) is a continuous function with one positive unstable equilibrium K,
f(x)>x for x∈ (0, K), 0<f(x)<x for x∈ (K, ∞).
Non-negativity of x_n is assumed following a long tradition of population dynamics models, and the equilibrium K of f is unstable; moreover, f can exhibit chaotic behaviour for maps such as Ricker, logistic, and others.
Various methods have been applied to alleviate chaotic behaviour;
some of them combine the current and past values of x_n.
In contrast to this approach, Prediction-Based Control (PBC) proposed by Ushio and Yamamoto <cit.> computed the weighted average between the state variable x and some iterate of the map f^k(x)
(a predicted, or a potential
future value)
x_n+1=(1-α)f^k(x_n)+α x_n,
which in the simplest case k=1 is
x_n+1=(1-α)f(x_n)+α x_n, x_0>0, α∈ [0,1).
PBC was proved to be an efficient stabilization tool <cit.>. Moreover, some modification was considered recently in <cit.>.
While, generally, for parameter-based stabilization increasing α does not lead to stability of the unique positive equilibrium point K (i.e. for a stabilizing β_*, instability can be observed for some α∈ (β_*,1)), for PBC there is a critical β_* such that for any α∈ (β_*,1),
K is a globally stable equilibrium of (<ref>).
Consider the case when f is a three times differentiable unimodal function with two equilibrium points zero and K, a unique critical point c∈ (0,K) (maximum), f'(0)>1, f”(x)<0 for all x ∈ (0,c) and a Schwarzian derivative
(Sf)(x) = f”'(x)/f'(x) - 3/2( f”(x)/f'(x))^2,
which is negative on (0,∞) excluding the unique critical point c. Under these conditions, local stability of K implies global stability, and once f'(K) < -1, the sharp stabilizing constant α_0 := -f'(K) - 1/-f'(K)+1 determines the minimal value leading to stabilization <cit.>. The idea goes back to <cit.>. Since local stability is easily established (the derivative of the right-hand side of (<ref>) at K, being a weighted average of f'(K) and one, exceeds -1), it follows that for any α∈ (α_0,1), K is a globally stable equilibrium of (<ref>), see <cit.>.
The main result of the paper is that this constant is no longer sharp in the stochastic case and can be improved when the control is perturbed by noise, which is rigorously justified for a symmetric continuous or discrete distribution.
Our main goal is to stabilize globally the equilibrium K applying PBC with stochastically perturbed variable control coefficient α_n=α+ℓξ_n+1, n∈ℕ, where ξ are bounded identically distributed independent random variables,
x_n+1=(1-α- ℓξ_n+1)f(x_n)+(α+ℓξ_n+1)x_n, x_0>0.
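A minimal numerical sketch of the stochastically controlled iteration above follows; the Ricker map and all parameter values are illustrative assumptions, and whether a particular pair (α, ℓ) actually stabilizes K is governed by the conditions established below.

import numpy as np

def ricker(x, r=3.0, K=1.0):
    """Illustrative chaotic map with a unique positive equilibrium K (assumed example)."""
    return x * np.exp(r * (1.0 - x / K))

def pbc_noisy(f, x0, alpha, ell, n_steps, rng=None):
    """Iterate x_{n+1} = (1 - a_n) f(x_n) + a_n x_n with a_n = alpha + ell*xi_{n+1},
    where xi takes the values +1 and -1 with probability 1/2 each."""
    rng = rng or np.random.default_rng(0)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        a = alpha + ell * rng.choice([-1.0, 1.0])
        x[n + 1] = (1.0 - a) * f(x[n]) + a * x[n]
    return x

# Assumed parameters: inspect the trajectory to see whether it settles near K.
traj = pbc_noisy(ricker, x0=0.3, alpha=0.3, ell=0.2, n_steps=200)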
Once f satisfies some smoothness criterion, being at least one-sided Lipschitz continuous at K, such a control always exists, even with ℓ=0. However, our purpose is to find the smallest possible value of the parameter α and the range for the noise level ℓ which provide global stability of K with probability one (almost surely).
This is a two-parameter (α,ℓ)-problem, also dependent on the choice of the distribution for ξ.
A bound is constructed for a couple (α,ℓ) guaranteeing convergence of a solution to
the unique positive equilibrium. For a map with negative Schwarzian derivative (<ref>), we get a sharp stabilization criterion which is, unlike most stochastic results, global. It also clearly illustrates that the range of α includes smaller control values than in the deterministic case <cit.>.
Let us notice that local stabilization of an unstable equilibrium is possible with noise only.
However, without any other control, for a chaotic map the attracting neighbourhood of K can be very small, less than 10^-9 <cit.>.
The results on local stability of stochastic difference equations most relevant to the present paper can be found in <cit.>, where unbounded noise ξ and continuously differentiable f were considered (see also <cit.>). This approach is due to Khasminskii <cit.> and H. Kesten <cit.>;
it was used later in many publications. Our main theorems are based on the Kolmogorov's Law of Large Numbers which is used to compute the values of ℓ and α ensuring local stability with any given probability,
and on a corollary of the Borell-Cantelli Lemma, guaranteeing that
a solution eventually enters however small, but prescribed in advance,
neighbourhood of K, leading to global stability with the probability one.
The stabilizing effect of noise has attracted a lot of attention recently due to its significant role in sustaining
healthy neuronal activities and avoiding sustained oscillations <cit.>. In contrast to <cit.>, we consider bounded, not Gaussian noise, which is assumed to be more realistic in biological and health-related systems.
The main result of the paper proves global asymptotic stability of the solution to (<ref>) under the assumption that there is local stochastic asymptotic stability of K with probability not less than 1-γ in the interval (K-δ, K+δ) for the initial value, where δ=δ(γ) depends on γ∈ (0, 1) chosen arbitrarily. The control parameters α and ℓ have also to satisfy α+ℓ>β_*, α<β_*, where β_* is a control level ensuring global asymptotic stability for deterministic equation (<ref>).
The control α+ℓξ provides both local and global stability,
while the mean value α does not necessarily guarantee global (and maybe even local) stability of K.
Continuity of f and the fact that (f(x)-K)(x-K)<0 on (0,K) ∪ (K,∞) allow us to deal with a smaller interval [f^2_m, f_m] around K, where f_m is a maximum of f on [0,K], and f^2_m is a minimum of f on [K, f_m]. Then,
the solution to (<ref>) reaches [f^2_m, f_m] in a finite number S_0 of steps and remains there. Keeping a control level greater than β_* for some prescribed finite number of steps allows a solution x to get from [f^2_m, f_m] into the initial interval (K-δ, K+δ), from where x converges to K. Applying Borel-Cantelli lemma (see Lemma <ref>), we conclude that there exists a random moment 𝒩
such that the required control intensity occurs for a certain prescribed number of steps in a row, which implies global stability.
Generally speaking, the function f is not assumed to be continuously differentiable at K, which allows us to distinguish between the left-side Lipschitz-type constant L^- and the right-side L^+ at K, and find a control parameter for local, as well as global stability, using L^- and L^+, instead of their maximum. The simplest result about local stability in the deterministic setting is obtained when
α∈(α_0, 1), α_0:=L^+L^- -1/(L^++1)(L^-+1), which, to the best of our knowledge, is a new result. For L^-=L^+=L, the well-known condition α> (L-1)/(L+1) follows, see e.g. <cit.>.
When local stability is due to the stochastic control, the proof applies the Kolmogorov's Law of Large Numbers (see Lemma <ref>), once
𝔼ln([L^- -(α+ℓξ)(L^-+1)] [L^+ -(α+ℓξ)(L^++1)] )<0
is satisfied. For some distributions of ξ, we show that when ℓ>0, condition (<ref>) gives smaller value of α than α_0. Notice that if f is a unimodal function satisfying the conditions of <cit.> elaborated above, any parameter α> α_0:=-f'(K)-1/-f'(K)+1 provides local, as well as global stability of deterministic equation (<ref>). We calculate the values of ℓ for certain types of noise ξ such that the global stability for (<ref>) holds with some
α < α_0.
Even though there is always β_* which guarantees global stability for (<ref>), we are interested in the smallest possible one. In many population biology models, like chaotic Ricker and logistic, using just the left-hand-side L^- and the right-hand-side L^+ global Lipschitz-type constants gives a much bigger value than
α_0:=-f'(K)-1/-f'(K)+1, as L^+L^- -1/(L^++1)(L^-+1)> α_0. It appeared advantageous to split [f^2_m, K] into several subintervals and pair each one with its image under the map
(1- α_0)f(x)+ α_0 x, calculating L^-_i and L^+_i, the Lipschitz constants for the left and the right intervals.
The point is that, while the left constants are very large, low right constants compensate for them.
For Ricker and bobwhite quail <cit.> models, L^-_i and L^+_i compensate each others: when the left one is getting bigger, the right one is getting smaller, so the expression L^-_iL^+_i-1/(L^+_i+1)(L^-_i+1) which can be used for calculating β_*, remains less than a control value α_0 computed without this splitting.
Our approach to the proof of global stability for the deterministic equation is based on
the results of <cit.>, see also <cit.>.
As discovered in <cit.>, a unique equilibrium of f is globally stable if and only if f^2 has no two-cycles. We cite this statement as in <cit.>.
Let g:[a,b]→ [a,b] be continuous; then its fixed point x^* is globally asymptotically stable relative to [a,b] if and only if g^2(x)>x for x<x^* and g^2(x)<x for x>x^*, for all x∈ (a, b)∖{x^*}, and either g(a)<b or g(b)>a.
When the conditions of <cit.> are satisfied, local stability implies global stability
in the deterministic case. We extend this sharp result to the stochastic case: some version of local condition (<ref>) implies global stability with probability 1. While the main results of the paper refer to global stability of stochastically perturbed maps, there are new findings for deterministic equations with variable control, or for continuous non-smooth f.
In <cit.>, PBC was used to stabilize simultaneously multiple equilibrium points of (<ref>). It was supposed that f(x)-x changes its sign at each K_j, j=0, … j_0, and at each K_2i+1, f satisfies a one-side Lipschitz condition. The control was defined, based on the minimum L of the left and right-side Lipschitz constants, whenever available.
There was a total of j_0≥ 4 equilibrium points K_j, and every second one, K_2i+1, was stabilized. The minimum number of stabilized equilibria was 2, when j_0=4, so the case of the unique positive equilibrium was not considered in <cit.>.
Compared to <cit.>, the present paper has the following common features: sharp results are achieved by careful computation of the minimal control constant allowing to avoid a two-cycle; also, common tools are used, in particular, the proofs in the case when the control is stochastically perturbed are based on the Borel-Cantelli Lemma.
However, the models are different: roughly speaking, <cit.> is focused on pulse stabilization with PBC
at each 2^kth step, while we focus on the original map and classical PBC. While conditions in <cit.> are not easy to verify, we obtain sharp results for smooth unimodal functions with a negative Schwarzian derivative when global stability can be established by checking easily verifiable local conditions, and our method of finding the best control in deterministic settings is different in the present paper from <cit.>.
The rest of the paper is structured as follows. In Section <ref>, we formulate properties of the noises ξ_n and state two important results applied in the proof of the main theorems: the Kolmogorov's Law of Large Numbers and a corollary of the Borel-Cantelli Lemma.
We also define properties of auxiliary functions that are later used in the paper.
Section <ref> discusses global stability for deterministic equation (<ref>) when the control parameter is either constant or variable.
In Section <ref> we establish conditions under which local stability implies global stability when
the control is stochastically perturbed as in (<ref>): in Section <ref> local stability holds for the deterministic value of the control, while in Section <ref> stochastic local stability is obtained by applying the Kolmogorov Law of Large Numbers, which is the main result of the paper.
In the case when f is unimodal with a negative Schwarzian derivative <cit.>, in Section <ref> we show that stochastic perturbation of the control can improve the sharp deterministic constant for the average control.
In Section <ref>, we calculate the control parameter which provides global stability based on the left and the right global Lipschitz constants and generalize this to several intervals with different Lipschitz constants.
Section <ref> contains examples and simulations which illustrate our results.
All proofs are deferred to the Appendix.
§ ASSUMPTIONS AND AUXILIARY STATEMENTS
Denote by [x] the largest integer not exceeding x, ℕ_0:=ℕ∪{0}; “s.t.” stands for “such that”.
§.§ Assumptions on the noise
Introduce a complete filtered probability space (Ω, ℱ, {ℱ_n}_n ∈ℕ, ℙ), where the filtration (ℱ_n)_n ∈ℕ is naturally generated by
the sequence of independent identically distributed random variables (ξ_n)_n∈ℕ, i.e.
ℱ_n = σ{ξ_1, …, ξ_n}.
The standard abbreviation “a.s.” is used for either “almost sure" or “almost surely"
with respect to the probability measure ℙ, and
“i.i.d.” for “independent identically distributed”, to describe random variables.
For a detailed introduction to stochastic concepts and notations, we refer the reader to <cit.>.
In this paper we consider bounded noises and control perturbations, which is a natural assumption in population dynamics.
(ξ_n)_n∈ℕ is a sequence of i.i.d. random variables such that |ξ_n|≤ 1, ∀ n∈ℕ, and, for each ε>0, ℙ{ξ∈ (1-ε, 1] }>0.
The following lemma was proved in <cit.> and is a corollary of the Borel-Cantelli Lemma.
Let sequence (ξ_n)_n∈ℕ satisfy Assumption <ref>. Then, for each nonrandom S∈ℕ, ε∈ (0, 1) and a random moment ℳ we have
ℙ{∃ 𝒩>ℳ: ξ_𝒩+i∈ (1-ε, 1), i=0, 1, …, S }=1.
For simulations in the present paper, we consider discrete, as well as continuous random variables ξ_n with a symmetric distribution.
As an example of discrete distribution, we use Bernoulli random variable ξ, which takes the values of 1 and -1 with probability 1/2 each, has the zero mean and the second moment μ_2=1. As an example of continuous random variable we use continuous uniformly distributed on [-1, 1] random variable ξ, which has the mean zero and the second moment μ_2=1/3.
The Kolmogorov Law of Large Numbers is cited below, see Shiryaev <cit.>.
Let (v_n)_n∈ ℕ be a sequence of i.i.d. random
variables with μ:=𝔼 v_n, 𝔼|v_n|<∞, n∈ℕ. Then, a.s., 1/n∑_k=1^n v_k →μ as n →∞.
Under the assumptions of Lemma <ref>, for each ε>0 there exists a random 𝒩(ε) such that
(μ-ε)n≤∑_k=1^n v_k≤ (μ+ε)n, n≥𝒩(ε),
Also, for each γ∈ (0, 1), there exist a nonrandom N= N(γ, ε) and Ω_γ⊂Ω with ℙ(Ω_γ)>1-γ, such that (<ref>) holds on Ω_γ for n≥ N.
§.§ Assumptions on f and properties of auxiliary functions
Let f: [0, ∞)→ [0, ∞) be a continuous function with one positive locally unstable equilibrium K>0, f(x)>x for x∈ (0, K), 0<f(x)<x for x∈ (K, ∞), f(0)=0.
For f satisfying Assumption <ref>, we introduce the values
f_m:=max{f(x), x∈ [0, K] }=f(x_max), where x_max is a point at which this maximum is attained, and
f^2_m:=inf{f(x), x∈ (K, f_m) }.
The constant f_m is well defined for any continuous f and, since our purpose is to stabilize the unstable equilibrium K, we have 0<f^2_m<K<f_m.
The function f satisfies Assumption <ref>, and for some L^-≥ L^+>1,
f(x)-K≤ L^- (K-x), x∈ (x_max, K), K-f(x)≤ L^+ (x-K), x∈ (K, f_m).
Everywhere in the paper we assume L^-≥ L^+>1.
Assumptions <ref>-<ref> are satisfied for many common population dynamics maps, such as unstable Ricker, logistic and Beverton-Holt models, where L_-≥ L_+>1. Because of this, we concentrate on functions f which are steeper to the left than to the right of K. The case L^+ ≥ L^->1 is similar, so we omit discussing it.
Define
G(β, x):=(1-β)f(x)+β x, β∈ [0, 1), x≥ 0.
The properties of G, which are widely used in this paper, are stated in the next lemma, partially they were
justified in <cit.>.
Let f satisfy Assumption <ref> and G be defined as in (<ref>). Then
(i) G(1,x)=x, G(0,x)=f(x), x∈ [0, ∞), G:[0,1]×ℝ→ℝ is a continuous function.
(ii) f(x)>G(b, x)>G(a, x)>x, if 1>a>b>0 and x<K, while
f(x)<G(b, x)<G(a, x)<x, if 1>a>b>0 and K<x.
(iii) For β∈(β_0, 1)⊂ (0, 1), and β̂:=β-β_0/1-β_0, we have G(β, x)=(1-β̂)G(β_0, x)+β̂x.
(iv) G(β, ·): [f^2_m, f_m] →[f^2_m , f_m] for all β∈ [0, 1).
(v) If x_Gmax is the largest point of maximum of G(β, x) on [0, K], β∈ [0, 1) then x_max≤ x_Gmax.
Set, for β∈ (0, 1), L^± from (<ref>) and L:=max{L^+, L^-},
ℒ^±(β):=(1-β) L^±-β, ℒ(β):=(1-β) L-β.
Define also
Ψ(u,v):=uv-1/(u+1)(v+1)=1-1/u+1-1/v+1, (u,v)∈ (-1, ∞)× (-1, ∞).
The next lemma states some useful properties of functions ℒ^±(·) and Ψ(·, ·).
Let Assumptions <ref>-<ref> hold, and L^->L^+. Using notations from (<ref>) and (<ref>), we have
(i) The functions ℒ^+(β) and ℒ^-(β) are monotone decreasing for β∈ [0, 1].
(ii) ℒ^+(β)< ℒ^-(β) for β∈ [0, 1), ℒ^±(1)=-1, ℒ^±( L^±/L^±+1)=0, ℒ^±( L^±-1/L^±+1)=1.
(iii) ℒ^+(β)∈ [0,1], ℒ^-(β)∈ [1, ∞) when β∈[L^+-1/L^++1, min{L^- -1/L^-+1, L^+/L^++1}].
(iv) ℒ^-(β)≤ 1 if β≥L^- -1/L^-+1, ℒ^±(β)≤ 0 if β≥L^±/L^±+1.
(v) The function Ψ(·, ·) increases in each argument, does not exceed one and is positive for uv>1.
(vi) The quadratic equation ℒ^+(β)ℒ^-(β)-1=0 in β has a positive discriminant, its smallest solution
β_0=Ψ(L^- , L^+)=
L^- L^+-1/(L^-+1)(L^++1),
while the largest solution is equal to 1. Moreover, for L^- L^+>1, we have β_0∈ (0, 1),
β_0∈[L^+-1/L^++1, min{L^- -1/L^-+1, L^+/L^++1}], and β_0=L^+-1/L^++1 if L^+=L^-.
(vii)
The inequality ℒ^+(β)ℒ^-(β)<1 holds when β∈ (β_0, 1). Moreover,
ℒ^+(β)ℒ^-(β)<1 if and only if Ψ(L^- , L^+)<β.
In terms of G, defined as in (<ref>), equations (<ref>) and (<ref>) can be written as
x_n+1=G(α_n, x_n)=(1-α_n)f(x_n)+α_n x, x_0>0, n∈ℕ,
where α_n∈ [0,1] is a variable parameter, which can be deterministic or stochastic in the form α_n=α+ℓξ_n+1.
The next lemma shows that [f^2_m, f_m] is a trap which can be reached after a finite, explicitly computable number of steps.
Let Assumption <ref> hold, β_*∈ (0, 1), β^*∈ (β_*, 1) and β_*≤α_n≤β^*. Let x be a solution to (<ref>) with x_0>0. Then there exists a finite number S_0=S_0(β_*, β^*, x_0)∈ℕ such that x_n∈[f^2_m, f_m], for n≥ S_0.
§ GLOBAL STABILITY OF THE DETERMINISTIC EQUATION
In this section we discuss global stability of deterministic equations (<ref>) and (<ref>). Note that global stability of K for α∈( L^-/L^-+1 , 1) was proved in
<cit.>, and generalized to variable α in <cit.>.
The results of this section are based on Lemma <ref> and <cit.>.
Let Assumption <ref> hold, L^->L^+ and G(β, x) be defined as in (<ref>) and
β_*:= inf𝒮, 𝒮:={β∈ (0, 1): G^2(β, x)<x, x∈ (f^2_m, K), G^2(β, x)>x, x∈ (K, f_m)}.
Then 𝒮 is non-empty, and for any α∈ (β_*,1),
any solution x to (<ref>) with x_0>0 satisfies lim_n→∞x_n=K.
Lemma <ref> and the definition of 𝒮 in (<ref>) yield that the parameter β_* is sharp for deterministic equation (<ref>) to guarantee global asymptotic stability. In general, finding the best control β_* is not trivial.
However, stochastic perturbations still can decrease the mean value β_* of the stabilizing control, see
Sections <ref>, <ref> and relevant examples.
Note that definition (<ref>) does not exclude existence of the point x_*∈ [f^2_m, f_m] such that G^2( β_*, x_*)=x_*, so when α=β_* the solution to (<ref>) is not globally asymptotically stable and there is a two-cycle (x_*, G( β_*, x_*)). For each x∈ [f^2_m, K], we have G^2( β_*, x)≥ x, while x∈ [K, f_m] implies G^2( β_*, x)≤ x.
The following theorem is the main result of this section.
Let Assumption <ref> hold, β_* be defined as in (<ref>), β_*∈ (β_*, 1),
β^*∈ (β_*, 1) and α_n∈ [β_*, β^*].
Then
(i) The solution to (<ref>) with any x_0>0 converges to K.
(ii) For any x_0>0 and δ>0 there is a finite number of steps S_1=S_1(x_0, β_*, δ) s.t. x_n∈ (K-δ, K+δ) for n≥ S_1.
The proof of Theorem <ref> is based on Lemma <ref> and Lemma <ref> below.
Let Assumption <ref> hold, β_* be defined as in (<ref>), β_*∈ (β_*, 1), and β^*∈ (β_*, 1). Let x be a solution to (<ref>) with α_n∈ [β_*, β^*].
If x_n<K for some n∈ℕ and s>n then x_s>x_n, while if x_n>K for some n ∈ℕ and s>n then x_s<x_n.
§ GLOBAL STABILITY INDUCED BY NOISE
Now we proceed to the case when the control is stochastically perturbed as in (<ref>) and establish global stability conditions. Let β_* be defined as in (<ref>), so that deterministic equation (<ref>) is globally stable for any α>β_*.
We concentrate on the case when deterministic equation (<ref>) is locally stable for α>α_0, where α_0<β_*.
We show, under some restrictions, that introduction of noise into the control allows to get the stability result for (<ref>) for an intermediate value of the parameter α_0<α <β_* and a corresponding noise level ℓ>0. In other words, stochastically perturbed control α+ℓξ provides both local and global stability, while control with the mean value α does not lead to global stability of K.
The shape of function f guarantees that any solution gets into the interval [f^2_m, f_m] after a finite number of steps S_1, which depends on the initial value x_0. This interval is the first trap: a solution necessarily gets into this interval and stays there forever. If the deterministic control satisfies β>β_*, there is a finite number of steps S_2 after which a solution of (<ref>) gets into the second trap (K-δ_0, K+δ_0), from where it converges to K. Thus, if α+ℓ>β_*, by applying the Borel-Cantelli Lemma <ref>, we conclude that starting from some random moment 𝒩, control α+ℓξ_n remains greater than β_* for at least S_2 number of steps in a row. This allows a solution of (<ref>) to get into the second trap.
Therefore x_n∈ (K-δ_0, K+δ_0) for n>S_1+𝒩+S_2, and lim_n→∞x_n=K.
We have to stress that, in stochastic settings, even though local stability is established with any given probability 1-γ, γ∈ (0, 1), on the corresponding interval of initial values, global attractivity holds a.s.
In Section <ref>, we state global stability when local stability is provided by the deterministic control α_0. In this case, the coefficients of the stochastically perturbed control α+ℓξ are easily calculated, independently of the distribution of the noise ξ, see (<ref>) below: the average part α which provides the global control satisfies α > α_0+β_*/2.
By application of Lemma <ref>, in Section <ref> we prove a theorem on global stability when local stability is ensured by a stochastically perturbed control α̅_0. Note that α̅_0≤α_0, where α_0 provides deterministic local stability.
In this case it is not so easy to estimate the average part α of the global control, it just should satisfy local condition (<ref>) and some more restrictions.
In Section <ref>, we consider a unimodal function f with a negative Schwarzian derivative.
For such f in deterministic setting, local stability with a parameter α_0 implies the global one <cit.>. However, with noise, the minimum deterministic constant α_0 providing stability is no longer sharp in the sense that the control α+ℓξ stabilizes for some α<α_0 and ℓ>0, which is illustrated in the cases of Bernoulli (taking the values of ± 1 with the probability of 0.5 each) and continuous uniformly distributed on [-1,1] types of noise.
§.§ Deterministic local stability implies global stability
Let Assumptions <ref>, <ref> hold, the value of β_* be defined as in (<ref>), δ_0∈ (0, K), and α_0∈ (0, 1). Let a solution x to (<ref>) with any α>α_0 and x_0∈ (K-δ_0, K+δ_0) satisfy lim_n→∞ x_n=K, and
α∈(α_0+β_*/2, β_* ), ℓ∈ (β_*-α, min{α-α_0, 1-α}).
Then a solution x to (<ref>) with (α, ℓ) from (<ref>) and any x_0>0 satisfies lim_n→∞ x_n=K a.s.
Note that α_0 can be found as α_0=-f'(K)-1/-f'(K)+1 if f is differentiable at K,
and as α_0=Ψ(L̃^-, L̃^+) if it is not.
The latter statement is confirmed in the next Lemma <ref>.
Let, instead of Assumption <ref>, local Lipschitz-type conditions hold:
f(x)-K≤L̃^-[K-x] x∈ [K-θ, K], K-f(x)≤L̃^+[x-K] x∈ [K, K+θ].
Let conditions (<ref>) hold, Ψ be defined in (<ref>), α>Ψ(L̃^+, L̃^-) and ℒ̃^-(α) be as in (<ref>). Then lim_n→∞x_n=K for each solution x to (<ref>) for any x_0∈ (K-θ/ ℒ̃^-(α), K+θ).
§.§ Stochastic local stability implies global stability
Now we proceed to more elaborate situations, where local stability is provided by the noise perturbations of control, which was proved by the application of the Kolmogorov's Law of Large Numbers, Lemma <ref>.
We follow the ideas of <cit.> and
also <cit.>.
The proof of the main result, Theorem <ref>, consists of two main steps. On the first step, we show that, when (α, ℓ) are chosen appropriately, a solution to (<ref>) changes sides of K at each step. Then we prove modified local stability: for each ν∈ (0, 1) we find a δ_0=δ_0(ν)>0 s.t. as soon as x_t∈ (K-δ_0, K+δ_0), t∈ℕ is arbitrary, we get lim_n→∞ x_n=K on the set which probability is not less than 1-ν.
The second step uses Borel-Cantelli Lemma <ref> and is similar to the one in the proof of
Theorem <ref>.
We assume that (<ref>) holds with some θ>0. In this section we concentrate on f that changes the side of K at each consecutive step, i.e.
(f(x)-K)(K-x)>0 in (K-θ, K+θ), which implies that the solution of equation (<ref>) alternates its position relative to K at each step. To guarantee this, we assume that, for some a_1, a_2>0,
f(x)>a_1(K-x)+K, x∈ (K-θ, K), f(x)<a_2(K-x)+K, x∈ (K, K+θ).
We use constants a_1 and a_2 to impose assumptions on a control parameter to guarantee that the solution of equation (<ref>) also changes its position relative to K at each step.
Instead of assuming α>Ψ(L̃^-, L̃^+), where Ψ was defined in (<ref>), as in the case of deterministic local stability, see Lemma <ref>, we introduce the condition
𝔼ln|ℒ^-(α+ℓξ) ℒ^+(α+ℓξ)| =
-λ_0<0.
Here λ_0>0 is a positive number, and ℒ^±(β)=(1-β) L̃^±-β.
Lemma <ref> below states that conditions (<ref>), (<ref>),
(<ref>) and
α+ℓ≤min{a_1/a_1+1, a_2/a_2+1}
guarantee local stochastic stability with any a priori given probability 1-γ, γ∈ (0,1). However, the smaller γ is, the smaller δ_0 in the local stability interval (K-δ_0, K+δ_0) is required.
Let Assumptions <ref>, <ref> and conditions (<ref>), (<ref>) hold,
and (α, ℓ) satisfy (<ref>) and (<ref>). Then, for each γ∈ (0, 1) and δ_0>0, there exists Ω_γ⊆Ω, ℙ(Ω_γ)≥ 1-γ, s.t. for any solution x to (<ref>) with x_0∈ (K-δ_0, K+δ_0), we have lim_n→∞ x_n=K on Ω_γ.
Since Lemma <ref> is a particular case of Step (ii) in the proof of Theorem <ref>, its proof is omitted.
Let Assumptions <ref>,<ref> and conditions (<ref>), (<ref>) hold,
and β_* be defined in (<ref>).
Then lim_n→∞ x_n=K a.s., for any solution x to (<ref>) with x_0>0 and (α, ℓ) satisfying (<ref>), (<ref>) and
α∈ (0, β_*), ℓ∈( β_*-α, min{α, 1-α}).
It is straightforward to check that (<ref>) is satisfied if α>β_0=Ψ(L̃^-, L̃^+) and ℓ=0. For Bernoulli distributed noises ξ_n, it can easily be shown that there are (α, ℓ) with α<β_0 and ℓ>0 s.t. (<ref>) holds.
Indeed, in the case of Bernoulli distributed ξ,
𝔼ln[ℒ^-(α+ℓξ) ℒ^+(α+ℓξ)]=1/4ln(𝒱(α, ℓ)),
𝒱(α, ℓ):=[(L^–α(L^-+1))^2-ℓ^2(L^-+1)^2 ][(L^+-α(L^++1))^2-ℓ^2(L^++1)^2 ].
By Lemma <ref> (vi), ℒ^-(β_0) ℒ^+(β_0)=1, so
𝒱(β_0, 0)=1. Then there exists ℓ>0 s.t. each bracket on the second line of (<ref>) becomes smaller but remains positive, so 𝒱(β_0, ℓ)<1. Now we can choose β_1∈ (0, β_0) s.t. 𝒱(β_1, ℓ)<1. Thus by introducing noise into control, we can allow smaller average control values, see more details for continuously differentiable unimodal f
in Section <ref>.
§.§ Unimodal continuously differentiable f: when local stability implies global stability
To ensure equivalence of local and global deterministic stability, we impose additional restrictions on f.
Let f satisfy Assumption <ref>, be unimodal and three times differentiable with a unique critical point c∈ (0,K) (a maximum), f'(0)>1, f''(x)<0 for all x ∈ (0,c), and have a negative Schwarzian derivative (<ref>) everywhere except at c.
If f satisfies Assumption <ref>, local stability of K implies the global one.
Moreover, this is also true for PBC of f <cit.>.
Under Assumption <ref>, any value of the parameter α > α_0 := -f'(K)-1/-f'(K)+1 provides local as well as global stability of (<ref>), see <cit.>, so we can choose β_*=α_0.
If α < α_0, we get d/dx G(α,K)<-1, which means, due to smoothness, that in some neighbourhood of K we have (x-K)(G(α,x)-K)<0 and |G(α,x)-K| > |x-K|; thus K repels solutions in a certain neighbourhood.
In this section we show that, for each f satisfying Assumption <ref>, in the cases of Bernoulli and uniformly distributed noises ξ, we can decrease the mean value to α<α_0, so that the stochastic control α+ℓξ_n with a specially chosen ℓ< min{α, 1-α}, provides global stability of the solution x to (<ref>) with probability one.
By <cit.>, we can take β_*=α_0. We only consider α<-f'(K)-1/-f'(K)+1, which implies ℒ(α)>1 (otherwise, we get stability without noise). Set
L_0:=-f'(K), α_0 := -f'(K)-1/-f'(K)+1.
Further, we use the expression
𝔼ln[(1-α-ℓξ) L_0-α-ℓξ] which should be negative for local stability.
If ℓ=0, we get 0<(1-α)L_0-α < 1, or α∈ (α_0,1),
where α_0 is from (<ref>), which is the well-known sharp stability condition in the deterministic case <cit.>.
Let Assumptions <ref>, <ref> hold, L_0 and α_0 be defined as in (<ref>). Assume that, for some α∈ (0,α_0), ℓ∈( α_0-α, min{α, 1-α}), λ_1>0, either
α+ℓ< L_0/L_0+1, 𝔼ln[(1-α-ℓξ)L_0 -α-ℓξ]=-λ_1<0
or
𝔼max{ln[(1-α-ℓξ)L_0], ln[α+ℓξ]}=-λ_1<0
holds. Then lim_n→∞ x_n=K a.s., for any solution x to (<ref>) with x_0>0.
The next theorem demonstrates that when f satisfies Assumption <ref> and ξ has a symmetric distribution, we can decrease the average value of the control, compared to the minimal deterministic one α_0.
Let f satisfy Assumption <ref>, L_0 and α_0 be defined as in (<ref>),
Assumption <ref> hold, random variables ξ_n have a symmetric (around zero) distribution, and be either continuous or discrete with a countable number of states. Then there exist α∈ (0,α_0) and ℓ∈( α_0-α, min{α, 1-α}) s.t. α_0<α+ℓ<L_0/L_0+1 and (<ref>) holds.
Note that the values of α<α_0 and ℓ established in Theorem <ref> are not supposed to be optimal (for example, the minimal α), we just show that they exist for either continuous or discrete distribution of ξ. For each particular distribution we can find smaller values of α by calculating 𝔼ln [ℒ_0(α+ℓξ)]=𝔼ln [L_0-(α+ℓξ)(L_0+1)].
When ξ has a Bernoulli distribution, we get 𝔼ln [ℒ_0(α+ℓξ)]=1/2ln [[L_0-α(L_0+1)]^2-ℓ^2(L_0+1)^2 ], so (<ref>) holds when
[L_0/L_0+1 -α]^2-1/(L_0+1)^2 <ℓ^2<min{[L_0/L_0+1 -α]^2, α}.
For example, for L_0>2, the values α=L_0-1-L_0-2/2/L_0+1, ℓ=L_0-2/2+ 0.5/L_0+1 satisfy α∈ (0,α_0) and (<ref>), a similar example can be found for L_0∈ (1,2].
When ξ has the uniform continuous on [-1,1] distribution,
we obtain 𝔼ln [ℒ_0(α+ℓξ)]=1/2∫_-1^1ln [ℒ_0(α+ℓ u)]du, so (<ref>) holds when
ln [ℒ_0(α)]<1/6[ℓ(L_0+1)/ℒ_0(α)]^2.
Using estimation of the integral, example pairs (α,ℓ) can be found with α∈ (0,α_0) for which
(<ref>) is valid.
Example <ref> considers the Ricker model with a control perturbed by the Bernoulli
or the continuous noise.
The simulation results for the Bernoulli perturbations illustrate that parameters computed using (<ref>) are quite sharp.
Similar calculations can be implemented for the uniform distribution illustrating sharpness of (<ref>).
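These conditions are straightforward to verify numerically. The following sketch (an illustrative script of ours, not part of the original argument) evaluates 𝔼ln[ℒ_0(α+ℓξ)]=𝔼ln[L_0-(α+ℓξ)(L_0+1)] for Bernoulli and for uniform noise; negativity of this expectation is the requirement discussed above. The numerical values correspond to the Ricker map with r=3.5, i.e. L_0=2.5, treated in the example mentioned above.

```python
import numpy as np

def expected_log_bernoulli(L0, alpha, l):
    # E ln|L0 - (alpha + l*xi)(L0 + 1)| for xi = +-1 with probability 1/2 each
    return 0.5 * sum(np.log(abs(L0 - (alpha + l * s) * (L0 + 1))) for s in (-1.0, 1.0))

def expected_log_uniform(L0, alpha, l, n=200001):
    # the same expectation for xi uniform on [-1, 1], approximated on a fine grid
    u = np.linspace(-1.0, 1.0, n)
    return float(np.mean(np.log(np.abs(L0 - (alpha + l * u) * (L0 + 1)))))

L0 = 2.5                                        # Ricker map with r = 3.5: L0 = -f'(K)
print(expected_log_bernoulli(L0, 0.368, 0.2))   # negative: Bernoulli-perturbed control stabilizes
print(expected_log_uniform(L0, 0.405, 0.2))     # negative: uniformly perturbed control stabilizes
```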
§ DETERMINING CONTROL OF THE DETERMINISTIC EQUATION
In this section we discuss situations when we are able to find a parameter β_* which guarantees global stability of the solution to the deterministic equation (<ref>). In all results, β_* might not be optimal (the Ricker model demonstrates this, see Example <ref>), even though it is much better than the controls found in <cit.>, which are based on global constants.
We are going to use the method of envelope functions suggested by Cull <cit.>.
The next result from <cit.> is slightly adapted to our needs.
Let ϕ(x) be a monotone decreasing function which is positive on (d_1, d_2) and ϕ(ϕ(x))=x, K ∈ (d_1,d_2).
Assume that f(x) is a continuous function s.t. ϕ(x)>f(x) for x∈ (d_1, K),
ϕ(x)<f(x) for x∈ (K, d_2), f(x)<x for x>K, f(x)>x for x∈ (d_1, K),
f(x)>0 on (d_1, d_2). Then, for all x∈ (d_1, d_2), lim_k→∞f^(k)(x)=K.
When ϕ is such as in Lemma <ref>, we say that ϕ envelopes f, see <cit.>.
We start with the case when the function f satisfies only Assumption <ref>.
Let Assumption <ref> hold, 1<L^+≤ L^-, β_0 be defined as in (<ref>). Then lim_n→∞x_n=K for each solution x to (<ref>) with α∈ (β_0, 1) and x_0>0.
Now we generalize Proposition <ref> to the case when the interval [x_m, f_m] is split into several subintervals and f satisfies a Lipschitz condition with different constants on each of them. In many population dynamics models, such as Ricker's and the logistic one, using just the left-hand-side L^- and the right-hand-side L^+ Lipschitz-type constants does not give the best possible value of the control parameter. Models like Ricker's have the property that, to the left of K, the Lipschitz-type constants are much larger than the derivative at K, while to the right of K they are quite small. In the following, we exploit this situation and find a better lower bound for the control than β_0.
First, we construct a piecewise function which can be used as ϕ in Lemma <ref>
for corresponding function G(α, x). Consider finite sequences (a_i)_i=0, 1, …, m, (C^-_i)_i=0,…, m-1, (C^+_i)_i=0, …, m-1, (b_i)_i=0, …, m-1,
0<a_m<⋯<a_2<a_1<a_0 := K,
C_i^-, C_i^+>0, C_i^-C_i^+=1,
b_i+1=-C_i^-(a_i+1-a_i)+b_i, i=0, …, m-1, b_0 := K,
and a piecewise function ϕ:(0, ∞)→ (0, ∞), ϕ(K)=K, is defined for i=0,… m-1 by
ϕ(x)=-C_0^-(x-K)+K, a_1≤ x≤ K, ϕ(x)=-C_0^+(x-K)+K, K≤ x≤ b_1,
ϕ(x)=-C_1^-(x-a_1)+b_1, a_2≤ x < a_1, ϕ(x)=-C_1^+(x-b_1)+a_1, b_1 < x≤ b_2,
ϕ(x)=-C_i^-(x-a_i)+b_i, a_i+1≤ x < a_i, ϕ(x)=-C_i^+(x-b_i)+a_i, b_i <x≤ b_i+1.
For each i=0, …, m-1, we have b_i+1=ϕ(a_i+1). Also, when a_i+1≤ x≤ a_i,
ϕ(ϕ(x))=-C_i^+(C_i^-(x-a_i)+b_i-b_i)+a_i=x,
and, when b_i≤ x≤ b_i+1,
ϕ(ϕ(x))=-C_i^-(C_i^+(x-b_i)+a_i-a_i)+b_i=x.
So ϕ is a monotone decreasing function which is positive on (0, b_m), since ϕ(b_m)=a_m and ϕ(ϕ(x))=x.
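For illustration, the construction of ϕ can be coded directly. The sketch below (our own illustration; the breakpoints and slopes are test values only, loosely inspired by the piecewise-linear example later in the paper) builds ϕ from the sequences (a_i) and (C_i^-) and checks the involution property ϕ(ϕ(x))=x.

```python
import numpy as np

def build_phi(K, a, Cm):
    # a = [a_0 = K, a_1, ..., a_m] decreasing, Cm = [C_0^-, ..., C_{m-1}^-], C_i^+ = 1 / C_i^-
    Cp = [1.0 / c for c in Cm]
    b = [K]
    for i in range(len(a) - 1):                    # b_{i+1} = -C_i^-(a_{i+1} - a_i) + b_i
        b.append(-Cm[i] * (a[i + 1] - a[i]) + b[i])
    def phi(x):
        if x <= K:                                 # branches to the left of K
            for i in range(len(a) - 1):
                if a[i + 1] <= x <= a[i]:
                    return -Cm[i] * (x - a[i]) + b[i]
        for i in range(len(b) - 1):                # branches to the right of K
            if b[i] <= x <= b[i + 1]:
                return -Cp[i] * (x - b[i]) + a[i]
        raise ValueError("x outside the covered range")
    return phi

phi = build_phi(K=32.0, a=[32.0, 31.0, 29.0, 28.0], Cm=[3.0, 5.0, 6.0])
for x in (28.5, 30.0, 31.5, 33.0, 36.0):
    assert abs(phi(phi(x)) - x) < 1e-12            # phi(phi(x)) = x
print(phi(28.0))
```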
Now we proceed to equation (<ref>). Let (<ref>) hold, L_i^-, L_i^+>0, i=0, …, m-1, L_0^-L_0^+>1,
f(x)-K≤ L_0^-(K-x), a_1≤ x≤ K, K-f(x)≤ L_0^+(K-x), x>K,
α_0 =Ψ(L_0^-, L_0^+ ),
and, for i=1, …, m, ℒ^±_i(α):=L_i^±-α(L_i^±+1),
b_i(α_0)=ℒ_i-1^-(α_0)(a_i-1-a_i)+b_i-1(α_0), b_0:=K,
f(x)-f(a_i)≤ L_i^-(a_i-x), a_i+1≤ x≤ a_i, f(b_i(α_0))-f(x)≤ L_i^+(x-b_i(α_0)), b_i(α_0)≤ x.
We assume L_0^-L_0^+>1 since otherwise the equilibrium K is locally stable. We also assume without loss of generality that
ℒ_i-1^-(α_0)>0 for all i=1, …, m. Indeed, if, for some i, we have ℒ_i-1^-(α_0)<0, it means that b_i(α_0)<b_i-1(α_0), so we exclude a_i from the sequence in (<ref>).
Let Assumption <ref> and conditions (<ref>), (<ref>)-(<ref>) hold and also
α_0 ≥max_i=1,…, m{L_i^-L_i^+-1/(L_i^-+1)(L_i^++1)}.
If x_n is a solution to (<ref>) with any α >α_0 and x_0>0 then lim_n→∞x_n=K.
Proposition <ref> is illustrated in Example <ref> (a).
By Lemma <ref>, conditions (<ref>)-(<ref>) provide stability of the solution to (<ref>) with α >α_0 on the interval (a_1, ℒ_0^-(α_0)(a_1-K)+K). Therefore, Proposition <ref> proves that when conditions (<ref>)-(<ref>) hold, the local stability implies the global one.
We also can deal with the case when there is i_0<m s.t. α_0<Ψ(L_i_0^-, L_i_0^+ ) and find a bigger parameter α̅ which guarantees global stability of the solution to (<ref>). It is discussed in Remark <ref> in the Appendix and illustrated in Example <ref> (b).
When f is continuously differentiable, we can get an explicit result computing α_0 which ensures global stability.
Let f satisfy Assumption <ref>, be continuously differentiable with f'(x)<1 for x∈ [x_max, f_m], and for α_0 defined in (<ref>),
α_0>Ψ(f'(G(α_0, x)), f'(x) ) x∈ [x_max, K).
Note that since Assumption <ref> holds, there is no local stability at K, and therefore f'(K)<-1,
so α_0 is well defined. Also, any α∈ (α_0, 1) leads to local stability of K for (<ref>).
Let Assumption <ref> hold, and α∈ (α_0, 1). Then lim_n→∞x_n=K, where x is a solution to (<ref>) with x_0∈ (0, ∞).
To illustrate Proposition <ref>, we consider the bobwhite quail map <cit.>
g(x)=x(0.55+3.45/1+x^9), x>0,
where K ≈ 1.2347, f'(K) ≈ -2.521, α̃≈1.521/3.521≈ 0.4319, x_max≈ 0.811, Ψ(a,b)=ab-1/(a+1)(b+1).
Fig. <ref> shows that α̃≥max_x∈ [x_max, K]Ψ(g'(x), g'(G( α̃, x)) ) = max_x∈ [x_max, K] envel(x).
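The inequality can also be checked on a grid. The sketch below (an illustrative Python script of ours) evaluates envel(x)=Ψ(f'(G(α̃,x)), f'(x)), written out explicitly as in the proof of Proposition <ref> in the Appendix, using central-difference derivatives; the grid resolution is an arbitrary choice.

```python
import numpy as np

def g(x):
    return x * (0.55 + 3.45 / (1.0 + x**9))        # bobwhite quail map

def dg(x, h=1e-6):
    return (g(x + h) - g(x - h)) / (2.0 * h)       # central-difference derivative

def G(a, x):
    return (1.0 - a) * g(x) + a * x                # controlled map

def envel(x, a):
    p, q = dg(x), dg(G(a, x))
    return (p * q - 1.0) / ((1.0 - p) * (1.0 - q))

K, x_max, alpha_t = 1.2347, 0.811, 0.4319
xs = np.linspace(x_max, K, 2000)
print(envel(xs, alpha_t).max(), alpha_t)           # the maximum stays at (or just below) alpha_tilde
```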
§ EXAMPLES AND SIMULATIONS
Consider Ricker's function f(x)=xe^r(1-x), x≥ 0, which satisfies
β_*=α_0=-f'(1)-1/-f'(1)+1.
(a) Let r=3.5, then -L_0=f'(1)=1-r=-2.5, α_0=β_*=3/7 ≈ 0.4285. In the case of Bernoulli distributed ξ
we apply formula (<ref>) from Remark <ref> and get that ℓ should satisfy √((0.71-α)^2-0.0811)<ℓ<min{(0.71-α), α}, see the domain in Fig. <ref>. Taking α=0.368 we should have ℓ∈ (0.1877, 0.342).
This case is illustrated by the bifurcation diagram in Fig. <ref>, left. The runs for ℓ =0.2 and α = 0.3,0.36,0.37 in Fig. <ref> also confirm this.
For uniformly distributed ξ we apply inequality (<ref>) from Remark <ref>, which is satisfied when α=0.405 and ℓ=0.2, which coincides with what we observe on the bifurcation diagram in Fig. <ref>, right.
(b) Let now r=3, then -L_0=f'(1)=1-r=-2, α_0=1/3=0.33(3). For Bernoulli distributed ξ,
the domain for (α,ℓ) based on condition (<ref>) from Remark <ref>, is described in Fig. <ref>. We conclude that for α>0.283, ℓ=0.2, the equilibrium K is globally stable.
The bifurcation diagram for ℓ = 0.2 on Fig. <ref> confirms it.
Simulations illustrate sharpness of our theoretical computations: the final bifurcation leading to stability in Fig. <ref> is quite close to theoretically computed α=0.283.
Applying Figs. <ref> and <ref>, or calculating directly by formulas (<ref>), (<ref>), we can find smaller average stabilizing value α of the stochastic control, but it might involve larger noise intensity ℓ.
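A direct simulation of the controlled equation illustrates these parameter choices. The sketch below (our own script; the seed, the initial value and the run length are arbitrary) iterates x_{n+1}=(1-α_n)f(x_n)+α_n x_n with α_n=α+ℓξ_n for the Ricker map with r=3.5, so that K=1 and α_0=3/7.

```python
import numpy as np

rng = np.random.default_rng(0)

def ricker(x, r=3.5):
    return x * np.exp(r * (1.0 - x))

def run(alpha, l, x0=0.3, n=5000, bernoulli=True):
    # iterate the noise-perturbed prediction-based control x_{n+1} = (1-a_n) f(x_n) + a_n x_n
    x = x0
    for _ in range(n):
        xi = rng.choice([-1.0, 1.0]) if bernoulli else rng.uniform(-1.0, 1.0)
        a = alpha + l * xi
        x = (1.0 - a) * ricker(x) + a * x
    return x

print(run(0.45, 0.0))    # deterministic control above alpha_0 = 3/7: expected to settle near K = 1
print(run(0.368, 0.2))   # smaller average control, stabilized by Bernoulli noise (Example (a))
```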
(a) Consider a piece-wise linear function with a unique positive fixed point K=32
f(x):={[ 51x/28 , x∈ [0, 28),; -6x+219, x∈ [28, 29)=[a_3, a_2),; -5x+190, x∈ [29, 31)=[a_2, a_1),; -3x+128, x∈ [31, 32)=[a_1, K),; -2x+96, x∈ [32, 33) =[K, b_1),; -1.4x+76.2, x∈ [33, 38)=[b_1, b_2),; -1.2x+68.6, x∈ [38, 50)=[b_2, b_3),; 8.6, x≥ 50. ].
For function f defined by (<ref>), Assumptions <ref> and conditions (<ref>),(<ref>)-(<ref>) hold with K=32, x_max=28, f_m=51,
a_1=31, a_2= 29, a_3= 28=x_max, b_1=33, b_2= 38, b_3= 39,
L^-_0=3, L_0^+=2, α_0=Ψ(2,3)≈ 0.416(6),
L^-_1=5, L_1^+=1.4, Ψ(5, 1.4)≈ 0.416(6), L^-_2=6, L_2^+=1.2, Ψ(6, 1.2)≈ 0.4025.
Since α_0=Ψ(2,3)≥max{Ψ(5, 1.4), Ψ(6, 1.2)}, condition (<ref>) holds, so we can apply Proposition <ref> and conclude that lim_n→∞x_n=K for any α >α_0 and x_0>0. If, for calculating the global stability control, we used the maximum of the left and right Lipschitz-type constants with respect to the equilibrium K, L^-=(51-32)/(32-28)=4.75 and L^+=2, we would arrive at
Ψ(4.75, 2)=0.49275>0.4166=α_0, which shows the advantage of Proposition <ref> compared to Proposition <ref>.
Now we perturb the control with the noise ℓξ, where ξ has a Bernoulli distribution, and apply Theorem <ref>, which allows to decrease the control average α. Based on the above, we need only to show that α+ℓ>α_0, and check condition (<ref>) with
ℒ^-(α+ℓξ)=L^-_0-(α+ℓξ)(L^-_0+1)=3-4(α+ℓξ),
ℒ^+(α+ℓξ)=L^+_0-(α+ℓξ)(L^+_0+1)=2-3(α+ℓξ). Condition (<ref>) holds if
𝒱(α, ℓ):=[(L^-_0-α(L^-_0+1))^2-ℓ^2(L^-_0+1)^2 ][(L^+_0-α(L^+_0+1))^2-ℓ^2(L^+_0+1)^2 ]
=
[(3-4α)^2-16ℓ^2 ][(2-3α)^2-9ℓ^2] <1.
By direct calculations we show that, when α=0.36, ℓ=0.2 we have α+ℓ=0.56>0.417>α_0, and 𝒱(0.36, 0.2)≈ 0.8724<1.
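For reference, these numbers can be reproduced in a few lines (illustrative only):

```python
# check alpha + l > alpha_0 and V(alpha, l) < 1 for L0^- = 3, L0^+ = 2
alpha, l = 0.36, 0.2
Lm, Lp = 3.0, 2.0
alpha0 = (Lm * Lp - 1.0) / ((Lm + 1.0) * (Lp + 1.0))      # Psi(3, 2) = 5/12
V = ((Lm - alpha * (Lm + 1))**2 - l**2 * (Lm + 1)**2) * \
    ((Lp - alpha * (Lp + 1))**2 - l**2 * (Lp + 1)**2)
print(alpha + l > alpha0, round(V, 4))                    # expected: True 0.8724
```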
Fig. <ref> (left) illustrates that the non-controlled map (<ref>) is chaotic;
as the theory predicts, there is convergence to the equilibrium without noise for α=0.417 (second panel); while the control value α=0.35 yields a stable two-cycle (third panel), adding Bernoulli noise with ℓ=0.2 leads to stabilization (right panel). Note that for ℓ=0.2 the stabilizing control value α=0.35 is smaller than the value 0.36 predicted theoretically above.
(b) Consider a continuous function with a unique positive fixed point K=32
f_2(x):={[ 50x/28 , x∈ [0, 28),; -5x+190, x∈ [28, 31),; -3x+128, x∈ [31, 32),; -2x+96, x∈ [32, 33),; -1.7x+86.1, x∈ [33, 45),; 9.6, x≥ 45. ].
We have the same α_0=5/12≈ 0.416 as in (a), however α_0<Ψ(L_2^+, L_2^-) = Ψ(5, 1.7) = 0.4629. Note that α_0 does not provide global stability: we apply Lemma <ref>, and for x=28 we get G^2(α_0, 28)≈ 26.7455<28.
Fig. <ref> (left) illustrates multi-stability and chaotic features of map (<ref>): the control α = 0.417, which provides local stability, still leads to a two-cycle (middle), while the value α = 0.463 (right) leads to global stability. Fig. <ref> illustrates that, generally, local stabilization does not make the equilibrium a global attractor, since
there may also be a stable two-cycle, see also <cit.>.
Consider the function
f(x)={[ π+2/π-2 x, x∈ (0, 1-2/π],; (1-x)(1.5+0.5sin1/x-1)+1, x∈ (1-2/π, 1),; 1, x=1,; (1-x)(1.25+0.25sin1/x-1)+1, x∈ (1, 1+2/π),; 1-3/π, x≥ 1+2/π. ].
which is not differentiable at the unique positive equilibrium 1, and there is no monotonicity in any neighbourhood of 1. For its graph see Fig. <ref>.
We have L^-=2, L^+=1.5, so, by Proposition <ref>, α_0=β_0=(L̃^-L̃^+-1)/((L̃^-+1)(L̃^++1))=2/(3× 2.5)≈ 0.266.
In condition (<ref>) we have a_1= a_2=1, so in order to apply Theorem <ref>,
we need to have
α+ℓ<0.5, α+ℓ> β_0=0.266, α<β_0=0.266,
and for the Bernoulli noises, it should be
𝒱(α, ℓ) := [(L^–α(L^-+1))^2-ℓ^2(L^-+1)^2 ][(L^+-α(L^++1))^2-ℓ^2(L^++1)^2 ]
= [(2-3α)^2-9ℓ^2 ][(1.5-2.5α)^2-6.25ℓ^2]<1.
We can check by straightforward calculations that the values
( α, ℓ)=(0.23, 0.2) and (0.22, 0.19) satisfy the above conditions.
Thus, introduction of noise decreases the average of the control.
While the bifurcation diagram in Fig. <ref> (left) illustrates chaotic behaviour for α>0.2, for ℓ=0.05 stabilization is observed for smaller α > 0.142 in the case of the Bernoulli noise, see Fig. <ref> (middle) and for α > 0.142
in the case of the uniform continuous noise in Fig. <ref> (right).
These control values are lower than theoretically predicted.
Possible reasons for this phenomenon are discussed in Section <ref>.
§ CONCLUSIONS AND DISCUSSIONS
Discrete maps are a handy way to describe the abundance of semelparous populations. However, simple one-dimensional maps, such as the Ricker map, can exhibit chaotic behavior for larger parameter values, which is not frequently observed in nature. This is sometimes attributed to a stabilizing influence of random perturbations, in particular those associated with environmental noise. In the current paper, we considered a control with a prescribed average and randomness. Can this average be reduced by incorporating noise, and what are the conditions on this average and the admissible noise amplitudes? We give, generally, a positive answer to this question, establishing the relevant estimates. This is consistent with experimental observations that introducing noise can either stabilize population size or at least reduce its variation.
The main results of the present paper can be summarized as follows:
* A strict bound for the control parameter leading to global stability is outlined
in Theorem <ref>, and a smaller average control is allowed in the stochastic settings, provided that α+ ℓ is above the critical deterministic bound, see Theorem <ref>.
* In the case of a unimodal map, we extend the results of <cit.> to the stochastic case
in Theorem <ref>, which is quite a challenging task, taking into account the generally local character of convergence in the stochastic case.
To the best of our knowledge, this is the first result of this type. In addition to the general statement, for symmetric distributions, we justify that the average control level ensuring stability is lower in the presence of noise (Theorem <ref>), qualitatively confirming the stabilizing effect of noise.
* In the deterministic case, some improvement for global stabilization can be achieved when a series of local Lipschitz constants is taken into account (Proposition <ref>) in the sense that the control intensity α can be smaller.
The results are verified with numerical simulations illustrating sharpness of the constants when local and global stability are equivalent, and the fact that in the case of sufficient conditions, the required control in examples can be lower than theoretically predicted.
For instance, in Example <ref> the bifurcation diagram in Fig. <ref> corresponding to the noise perturbed control demonstrates that stabilization is achieved for lower than theoretically predicted average values α of the control.
Note that when there is no noise, the chaotic behavior stops exactly as predicted theoretically, at α_0 ≈ 0.266. However, as soon as noise is present, the necessary average control value drops significantly, to ≈ 0.142. We suggest several possible reasons for this. One of them is that, in the simulations, the function f in (<ref>) is treated as equal to 1 in a small neighbourhood of 1 which is nevertheless significant
for stochastic stability, so a slightly different function is effectively simulated. For that function we have local stability already with α=0, and to calculate parameters guaranteeing global stability we can apply Theorem <ref>, which gives α>0.133 and ℓ>0.266-α. Taking α=0.142 gives ℓ>0.124. However, the simulation demonstrates that global stability is achieved for the smaller value ℓ=0.05. We conjecture that the reason is the oscillating nature of the function (<ref>) and the noise lingering on the intervals where the function takes smaller values.
The present paper leads to several open questions and lines of research:
* For specific types of symmetric distributions, construct sharp stabilization criteria from Theorem <ref> and also deduce easily verifiable sufficient conditions. How does the situation change for non-symmetric distributions with the zero mean?
In addition, can the method and the results be generalized to the case of unbounded, for example, normal distributions?
* The deterministic results are significantly based on some monotonicity properties of f in some neighbourhood of the unique positive equilibrium. Can the results be adapted to a strongly oscillatory case?
Example <ref> sheds some light on the possibility to extend the results of the present paper to the case when f is oscillatory near K.
* Everywhere in the current paper we considered multiplicative noise in the control term. However, it is also interesting to study the additive `environmental' noise
x_n+1=(1-α)f(x_n)+α x_n + ℓξ_n+1, x_0>0, α∈ [0,1),
for which only a blurred equilibrium can be stabilized in some sense.
§ ACKNOWLEDGMENT
The first author acknowledges the support of NSERC, the grant RGPIN-2020-03934.
The authors are grateful to the anonymous referees for their valuable comments.
99
BR0
G. Berkolaiko and A. Rodkina,
Asymptotic behavior of solutions to linear discrete stochastic equation, in
Proceedings of the International Conference “2004-Dynamical Systems and Applications",
Antalya, Turkey, 5-10 July 2004, 614–623.
BR1
G. Berkolaiko and A. Rodkina,
Almost sure convergence of solutions to non-homogeneous stochastic difference equation,
J. Difference Equ. Appl. 12 (2006),
535–553.
BKR2016
E. Braverman, C. Kelly and A. Rodkina,
Stabilization of difference equations with noisy prediction-based control,
Physica D, 326 (2016), 21–31.
BKR2020
E. Braverman, C. Kelly and A. Rodkina,
Stabilization of cycles with stochastic
prediction-based and target-oriented control,
Chaos 30 (2020), 15pp.
BL2012
E. Braverman and E. Liz,
On stabilization of equilibria using predictive control with and
without pulses,
Comput. Math. Appl. 64 (2012),
2192–2201.
BRAllee
E. Braverman and A. Rodkina,
Stochastic difference equations with the Allee effect,
Discrete Contin. Dyn. Syst. Ser. A 36(11) (2016), 5929–5949.
BR2019
E. Braverman and A. Rodkina,
Stochastic control stabilizing unstable or chaotic maps,
J. Difference Equ. Appl. 25 (2019), 151–178.
BR2022
E. Braverman and A. Rodkina,
Stabilizing multiple equilibria and cycles with noisy prediction-based control,
Discrete Contin. Dyn. Syst. Ser. B 27
(2022), 5419–5446.
Chagas
T. P. Chagas, P.-A. Bliman, and K. H. Kienitz,
Stabilization of periodic orbits of discrete-time dynamical systems using the prediction-based control: new control law and practical aspects, J. Franklin Inst. 355 (2018),
4771–4793.
Elaydi
S. Elaydi, An Introduction to Difference Equations,
Undergraduate Texts in Mathematics, Springer, New York, 2005.
ElaydiSacker
S. Elaydi and R. Sacker,
Basin of attraction of periodic orbits of maps on real line,
J. Difference Equ. Appl. 10
(2004), 881–888.
Coppel
W. A. Coppel, The solution of equations by
iteration, Proc. Cambridge Philos. Soc. 51 (1955), 41–43.
Gull
P. Cull,
Global stability of population models,
Bull. Math. Biol. 43 (1981), 47–58.
Cull2
P. Cull,
Population models: stability in one dimension,
Bull. Math. Biol. 69 (2007), 989–1017.
Kh
R. Has’minski, Stability of Systems of Differential Equations
under Random Perturbations of Their Parameters, Nauka,
Moscow, 1969, 367 pp.
Medv
P. Hitczenko and G. Medvedev,
Stability of equilibria of randomly perturbed maps,
Discrete Contin. Dyn. Syst. Ser. B 22(2) (2017), 269–281.
K
H. Kesten, Random difference equations and renewal theory for the
product of random matrices, Acta Math. 131 (1973), 207–248.
FL2010
E. Liz and D. Franco,
Global stabilization of fixed points using predictive control,
Chaos 20 (2010), 023124, 9 pages.
mb
J. G. Milton, J. Bélair,
Chaos, noise, and extinction in models of population growth,
Theor. Popul. Biol. 37 (1990) 273–290.
polyak
B. T. Polyak, Chaos stabilization by predictive control,
Autom. Remote Control 66 (2005), 1791-1804.
Rich
S. Rich, A. Hutt, F. K. Skinner, T. A. Valiante, and J. Lefebvre,
Neurostimulation stabilizes spiking neural networks by disrupting seizure-like oscillatory transitions, Sci Rep 10 (2020), 15408.
https://doi.org/10.1038/s41598-020-72335-6
Shark
A. N. Sharkovsky, Yu. L. Maistrenko and E. Yu. Romanenko,
Difference Equations and Their Applications, Kluwer Academic, Dordrecht, 1993.
Shiryaev96
A. N. Shiryaev.
Probability. 2nd edition,
Springer, Berlin, 1996.
Singer
D. Singer,
Stable orbits and bifurcation of maps of the interval,
SIAM J. Appl. Math. 35(2) (1978), 260–267.
sousa
M. de Sousa Vieira and A. J. Lichtenberg, Controlling chaos using nonlinear feedback with delay,
Phys. Rev. E 54 (1996), 1200–1207.
uy99
T. Ushio and S. Yamamoto,
Prediction-based control of chaos,
Phys. Lett. A 264 (1999), 30–35.
§ APPENDIX
§.§ Proof of Lemma <ref>
The statements of Parts (i)-(v) are straightforward. By (ii), ℒ^+(1)ℒ^-(1)=1. Part (v) implies
ℒ^+(β_0)ℒ^-(β_0) = [L^+ - (L^+ +1) ( 1- 1/1+ L^+
- 1/1+ L^-) ] [L^- - (L^- +1) ( 1- 1/1+ L^+
- 1/1+ L^-) ] =1,
so (vi) holds. The quadratic polynomial ℒ^+(β)ℒ^-(β) has a positive coefficient of β^2 and two roots: one and β_0<1. Thus,
ℒ^+(β)ℒ^-(β)<1 holds when β∈ (β_0, 1), which concludes the proof of Part (vii).
§.§ Proof of Lemma <ref>
If x_0∈ [f^2_m, f_m] the result follows from Lemma <ref> (iv) with S_0=1.
Assume that x_0∈ (0, f^2_m) and
let Δ_1:=inf{f(x)-x : x∈ [x_0, f^2_m) }>0 as f^2_m<K,
G(α_n, x)-x=(1-α_n)(f(x)-x), for each α_n∈ (β_*, β^*) and 1-α_n>1-β^*. Then
G(α_n, x)-x>(1-β^*)Δ_1, x∈ [x_0, f^2_m], N^-:=[f^2_m-x_0/Δ_1(1-β^*)] +1.
Reasoning inductively and assuming that x_i=G(β_i, x_i-1)∈ (x_i-1, f^2_m], i=1, …, k, k≤ N^-, we get
x_k+1=G(α_k+1, x_k)-x_k+x_k≥ G(β^*, x_k)-x_k+x_k≥(1-β^*)Δ_1+x_k≥…≥ (k+1)(1-β^*)Δ_1+x_0.
So after at most N^- steps the solution reaches [f^2_m, f_m]. Recall that by definition of f_m the solution cannot jump over f_m.
In the case x_0>f_m we find N^+ in a similar way and let S_0:=max{N^-, N^+}.
§.§ Proof of Lemma <ref>
Note that it is enough to consider only x∈ [f_m^2, f_m].
Fix some α>L^-/L^-+1, then, by Lemma <ref> (i),(iv) we have ℒ^+(α)<ℒ^-(α)<0, and
G(α, x)-K=(1-α)(f(x)-K)+α(x-K)<[(1-α)L^–α](K-x)=ℒ^-(α)(K-x)<0,
x∈ (f^2_m, K),
K-G(α, x)<[(1-α)L^+-α](x-K)=ℒ^+(α)(K-x)<0, x∈ (K, f_m).
This implies x<G(α, x)<G^2(α, x) for x∈ (f^2_m, K) and x>G(α, x)>G^2(α, x) for x∈ (K, f_m), so α∈𝒮.
Further, we notice that, since 𝒮≠∅,
for each α>β_*= inf𝒮 there is β_1∈𝒮 s.t. α>β_1, which, by Lemma <ref>(ii), implies that G(α, x)>G(β_1, x) for x>K and G(α, x)<G(β_1, x) for x<K. If x∈ (f^2_m, K) and G(α, x)>K, there is x̂∈ (x, K), s.t. G(α, x)=G(β_1, x̂). Since β_1∈𝒮 we have G^2(α, x)=G(α, G(β_1, x̂))>G^2(β_1, x̂)>x̂>x, which, by Lemma <ref>, yields the result. Other cases are either similar or have been considered above.
Note that the proof of the second part of Lemma <ref> follows the scheme of <cit.>, even though we do not impose any monotonicity restriction on f.
§.§ Proof of Lemma <ref>
If x_n-K does not change sign, or changes sign only once, the statement is obvious. Assume x_n<K; the case x_n>K is similar.
For the proof it is enough to show that when a solution moves to the right of K and then back to the left of K, at the moment of its first return to (0, K) it is to the right of x_n.
Let s_1=min{s>n: x_s>K}, s_2=min{s>s_1: x_s<K}, and both sets be not-empty. Let s_1>n+1, s_2>s_1+1. Then,
x_n<x_n+1≤ x_s_1-1<K<x_s_1, x_s_1>x_s_1+1≥ x_s_2-1>K>x_s_2.
Since x_s_1>x_s_2-1>K>x_s_1-1 and α_s_2>β_*, we have x_s_1=G(α_s_1, x_s_1-1)≤ G(β_*, x_s_1-1). Due to the continuity of G(β_*, ·),
there is x̅∈ (x_s_1-1, K) s.t. G(β_*, x̅)=x_s_2-1. Since β_*> β_* we have G^2(β_*, x̅) >x̅ and since x_s_2-1>K we have G(α_s_2, x_s_2-1)≥ G(β_*, x_s_2-1). Then
x_s_2=G(α_s_2, x_s_2-1)≥ G(β_*, x_s_2-1)=G(β_*, G(β_*, x̅))=G^2(β_*, x̅) >x̅>x_s_1-1>x_n,
so x_s_2>x_n, i.e. the first return of the solution to (0, K) is to the right of x_n.
If s_1=n+1, s_2>s_1+1, we have x_n<K<x_n+1, and we can start the reasoning as in the first case for x_n+1 instead of x_n. If s_1=n+1, s_2=s_1+1, we have x_n<K<x_n+1, x_n+2<K<x_n+1. So x_n+1=G(α_n+1, x_n)≤ G(β_*, x_n) and for some x̅∈ (x_n, K) we have G(β_*, x̅)=x_n+1. Thus
x_n+2= G(α_n+2, x_n+1)≥ G(β^*, x_n+1)=G(β^*,G(β_*, x̅))> x̅ >x_n,
and again the first return of solution to (0, K) is to the right of x_n.
§.§ Proof of Theorem <ref>
(i) Consider first x_0∈ [f^2_m, f_m]. Assume, on the contrary, that the solution does not converge to K.
By Assumption <ref>, there is an infinite number of points x_n on both sides of K, (x_n)_n∈ℕ_0=(x_n_i)_i∈ℕ_0∪ (x_n_j)_j∈ℕ_0, x_n_i∈ (0,K), x_n_j∈ (K, ∞).
Lemma <ref> implies that a solution
to (<ref>) is a union of two monotone subsequences: the first one is in (0,K) and increasing, the second one is in (K, ∞) and decreasing.
If (x_n)_n∈𝐍 does not converge, there are non-negative a and b and N̅∈ℕ, s.t.
x_n∉ [K-a, K+b] for all n≥N̅. If one of a, b were equal to zero, then one of the subsequences, say x_n_i, would converge to K, and then, by continuity of f, the other subsequence, consisting of the points f^m(x_n_i),
would also converge to K.
Define
Δ_*:=min{inf_u∈ [f^2_m, K-a]∪ [K+b, f_m]| G(β_*, u)-u |, inf_u∈ [f^2_m, K-b/L^-]∪ [K+a/L^+, f_m]| G^2(β_*, u)-u | },
and note that Δ_*>0 by Assumptions <ref>, Lemma <ref>, definition (<ref>), and by the choice of β_*.
Let x_i∈ [f^2_m, K-a] for some i> N̅.
Applying Lemma <ref> (iii), we get
x_i+1-x_i = G(α_i+1, x_i)-x_i =(1-α_i+1-β_*/1-β_*)G(β_*, x_i)+α_i+1-β_*/1-β_* x_i-x_i
=
1-α_i+1/1-β_*[G(β_*, x_i)-x_i]>1-β^*/1-β_*Δ_*=Δ_*.
If in addition, G(α_i+1,x_i)>K+b, there is x̅_i∈ (x_i, K) s.t. G(α_i+1,x_i)=G(β_*, x̅_i). By Assumption <ref> we have
G(β_*, x̅_i)≤ K+L^-(K-x̅_i), so K+b< G(α_i+1,x_i)=G(β_*, x̅_i)≤ K+L^-(K-x̅_i),
which implies
b<L^-(K-x̅_i), K-x̅_i>b/L^-, x̅_i∈(x_i, K-b/L^-)⊂(f^2_m, K-b/L^-).
Recalling that G(α_i, x)>G(β_*, x) for x>K, we get
G(α_i+1, G(α_i, x_i))>G(β_*, G(α_i, x_i))=G(β_*, G(β_*,x̅_i))=G^2(β_*,x̅_i)>x̅_i>x_i,
which, by (<ref>), implies
G(α_i+1, G(α_i, x_i))-x_i>G^2(β_*,x̅_i)-x̅_i>Δ_*. So, if the solution changes the side of K at two successive steps, returning to [f^2_m, K-a], we have x_i+2>x_i+Δ_*. Following the same argument as in the proof of Lemma <ref>, we can actually get x_i+m>x_i+Δ_*, where i+m is the first moment after returning to [f^2_m, K-a].
Therefore the solution x_i, which starts in [f^2_m, K-a], moves to the right with steps of length bounded below by Δ_*. If it remains to the left of K, it eventually moves to the right of K-a in a finite number of steps, which contradicts our assumption. If, at some moment, it jumps over K+b and remains to the right of K, it eventually moves below K+b in a finite number of steps, which again contradicts our assumption. If it returns to [f^2_m, K-a], its new position there is at least Δ_* to the right of the position before the jump. Summarizing all the above, we conclude that the solution cannot stay in [f^2_m, K-a] for more than N_1:=(K-a-f^2_m)/Δ_* steps, cannot stay in [K+b, f_m] for more than N_2:=(f_m-K-b)/Δ_* steps, and cannot remain in [f^2_m, K-a]∪ [K+b, f_m] for more than N_1+N_2+(f_m+a-f^2_m-b)/Δ_* steps.
Similarly, when x_i∈ [K+b, f_m] we get x_i-x_i+1>Δ_*. If in addition, G(α_i+1,x_i)<K-a, there is x̅_i∈ (K, x_i) s.t. G(α_i+1,x_i)=G(β_*, x̅_i). By Assumption <ref>, we get
K-a> G(α_i+1,x_i)=G(β_*, x̅_i)≥ K-L^+(x̅_i-K), so x̅_i∈( K+a/L^+, x_i)⊂(K+a/L^+, f_m ), and
x_i- G(α_i+1, G(α_i, x_i))>x̅_i-G^2(β_*,x̅_i)>Δ_*.
The case x_0∈ (0, f^2_m)∪ (f_m, ∞) follows from Lemma <ref>.
(ii) Basically we repeat the proof of Part (i) for a=b=δ. The necessary number of steps in this case is max{N̅_1, N̅_2}, where N̅_1 is the first moment when the solution gets into [K-δ, K), and N̅_2 is the first moment when the solution gets into (K, K+δ), if both numbers N̅_1, N̅_2 are finite. If one of them is infinite, say N̅_2=∞, we put S_1=N̅_1. In other words,
S_1:=max{N̅_1, N̅_2} if N̅_2, N̅_1<∞, S_1=N̅_1 if N̅_2=∞, and S_1=N̅_2 if N̅_1=∞.
§.§ Proof of Theorem <ref>
For any (α, ℓ) satisfying (<ref>) we have 0<α_0<α-ℓ<α_n<α+ℓ<1, for all n∈ℕ, and β_*-α/ℓ<1.
We consider only α<β_*, since otherwise the solution to (<ref>) is globally asymptotically stable for ℓ=0. Then, by Assumption <ref>,
ℙ(Ω̂)>0, where Ω̂={ω∈Ω:ξ(ω)∈( β_*-α/ℓ, 1]},
and by Lemma <ref> there is a random moment 𝒩 s.t.
ξ_𝒩+S_0, ξ_𝒩+1+S_0, …, ξ_𝒩+S_1+S_0∈( β_*-α/ℓ, 1],
where S_0 and S_1 are from Lemma <ref> and Theorem <ref> (ii), respectively, S_0=S_0(α-ℓ, α+ℓ, x_0), S_1=S_1(x_0, α-ℓ, δ_0).
Fix some k∈ℕ, set
Ω_k={ω∈Ω: 𝒩=k }={ω∈Ω: ξ_k+i+S_0∈( β_*-α/ℓ, 1], i=0, …, S_1},
and note that Ω_k is defined by ξ_k+S_0, ξ_k+1+S_0, …, ξ_k+S_1+S_0.
By Lemma <ref>, x_S_0+k∈[f^2_m, f_m] on all Ω.
Let y be a solution to
y_n+1=G(α̅_n, y_n), y_0=x_S_0+k, α̅_n=α+ℓξ_S_0+k+n,
considered path-wise on Ω_k.
Since α_n=α+ℓξ_n>β_* for n=S_0+k+1, S_0+k+2, …,S_0+k+S_1,
Theorem <ref> (ii) implies x_S_0+k+S_1=y_S_1∈ (K-θ, K+θ) on Ω_k. Since α>α_0, as soon as x gets into (K-θ, K+θ), it tends to K.
So, for each ω∈Ω_k we get lim_n→∞ x_n=K. Since Ω=∪ _k=1^∞Ω_k, this completes the proof.
§.§ Proof of Lemma <ref>
As we have assumed everywhere, L̃^-≥L̃^+ and, by Lemma <ref>, ℒ̃^+(α)<1, ℒ̃^-(α)ℒ̃^+(α)<1. In order to keep the solution inside (K-θ, K+θ), where inequalities (<ref>) can be applied, we need to decrease the left part of the interval. So, if x∈ (K-θ/ ℒ̃^-(α), K) and G(α, x)>K, we have
G(α, x)-K≤ℒ̃^-(α)(K-x)<ℒ̃^-(α) θ/ℒ̃^-(α)
=θ,
i.e. G(α, x)≤ K+θ.
§.§ Proof of Theorem <ref>
Suppose that the statement of the theorem does not hold, i.e. there exists a pair (α, ℓ) satisfying (<ref>), (<ref>), and (<ref>) such that the set Ω_κ:={ω∈Ω: x_n(ω)↛ K, as n→∞} has probability ℙ(Ω_κ)=κ>0.
Without loss of generality we can assume that κ∈ (0, 2/3).
In the proof below we consider a solution to (<ref>) with (α, ℓ)
and x_0 satisfying (<ref>).
Note that, once x_n=K, all x_n+j=K, j ∈ℕ, so we only have to consider the case x_n ≠ K,
n ∈ℕ.
(i) We start with the proof that a solution to (<ref>) changes sides of K at each step.
We have
G(β, x)-K=(1-β)(f(x)-K)+β(x-K)≥ [(1-β)a_1-β](K-x), x∈ [K-θ, K],
K-G(β, x)=(1-β)(K-f(x))+β(x-K)≥ [(1-β)a_2-β](x-K), x∈ [K, K+θ].
For β=α+ℓξ we get α-ℓ≤β≤α+ℓ, so
(1-β)a_1-β=a_1-β(a_1+1)≥ a_1-(α+ℓ)(a_1+1), (1-β)a_2-β≥ a_2-(α+ℓ)(a_2+1),
which, along with (<ref>), implies (G(α+ℓξ_n, x)-K)(K-x)>0, x ∈ (K-θ, K)∪(K, K+θ), n∈ℕ. Therefore, as soon as a solution remains in (K-θ, K+θ), it changes position relative to K at each step.
Since
G(α+ℓξ_n, x)-K≤ℒ^-(α+ℓξ_n)(K-x), x∈ (K-θ, K),
K-G(α+ℓξ_n, x)≤ℒ^+(α+ℓξ_n)(x-K), x∈ (K, K+θ),
we conclude that ℒ^-(α+ℓξ_i)>0 and ℒ^+(α+ℓξ_i)>0,
so we can omit the absolute value sign in (<ref>).
(ii) Now, let us prove local stability.
Consider the sequence (u_i)_i∈ℕ of i.i.d. variables
u_i:=ln[ℒ^-(α+ℓξ_i) ℒ^+(α+ℓξ_i+1)], i∈ℕ.
By monotonicity of ℒ, see Lemma <ref> (i), we have ℒ^-(α+ℓξ_t+i)≤ℒ^-(α-ℓ), ℒ^+(α+ℓξ_t+i)≤ℒ^+(α-ℓ),
for any i∈ℕ.
By (<ref>) we have 𝔼 u_i=-λ_0. Based on Corollary <ref>, for each ν>0 we can find a nonrandom number N̅=N̅(ν) such that, ℙ{Ω_ν, 0}> 1-ν, where
Ω_ν, 0:={ω∈Ω: ∑_i=1^n u_i≤ -λ_0 n/2 for all n≥N̅}.
For t∈ℕ, set
Ω_ν, t:={ω∈Ω: ∑_i=t^t+n-1 u_i≤ -λ_0 n/2 for all n≥N̅}.
In general, Ω_ν, 0≠Ω_ν, t for t≠ 0, but since u_i are identically distributed, we have ℙ(Ω_ν, 0)= ℙ(Ω_ν, t)>1-ν for each t∈ℕ. Also,
e^∑_i=t^t+n-1 u_i≤ e^-λ_0 n/2, for all n≥N̅, on Ω_ν, t.
For κ from (<ref>), we set
ν:=κ/4, δ_0:=θℬ^-N̅, ℬ:=max{ℒ^-(α-ℓ), ℒ^+(α-ℓ)}.
Let us demonstrate that, as soon as x_t∈ (K-δ_0, K+δ_0), we get lim_n→∞ x_n=K on Ω_ν, t.
Assume for simplicity that N̅ and s are even, N̅=2M̅, s=2d.
We have Ω=Ω^+(t)∪Ω^-(t), where
Ω^+(t)={ω∈Ω:x_t(ω)∈ (K, K+δ_0) }, Ω^-(t)={ω∈Ω:x_t(ω)∈ (K-δ_0, K) }.
On Ω^-(t)∩Ω_ν, t we have x_t+1>K, x_t+2<K, etc, if the solution remains in (K-δ_0, K+δ_0), so,
inductively,
x_t+1-K ≤ℒ^-(α+ℓξ_t+1)[K-x_t]≤ℒ^-(α-ℓ)[K-x_t]<ℬδ_0<θ,
K-x_t+2 ≤ℒ^+(α+ℓξ_t+2)|x_t+1-K|< ℒ^+(α+ℓξ_t+2) ℒ
^-(α+ℓξ_t+1)|x_t-K|
=e^u_t|x_t-K| <ℬ^2δ_0<θ,
………………
|x_t+N̅-K| ≤ e^∑_i=t^t+M̅-1u_i|x_t-K|≤ℬ^N̅δ_0=θ.
Using (<ref>) and continuing estimations, we arrive at
|x_t+N̅+s-K|= |x_t+2(M̅+d)-K|
≤exp{∑_i=t^t+M̅+d-1u_i} |x_t-K|<e^-λ_0 (M̅+d)/2δ_0<θ.
Similar inequalities can be obtained on Ω^+(t)∩Ω_ν, t. So, |x_t+n-K|≤ e^-λ_0 (n)/2δ_0→ 0, as n→∞, on Ω_ν, t.
(iii) Now proceed to the proof of global attractivity.
Let S_0=S_0(α-ℓ, α+ℓ, x_0), S_1=S_1(x_0, α-ℓ, δ_0) be from Lemma <ref> and Theorem <ref> (ii), respectively. Note that δ_0 was chosen as in (<ref>), so δ_0 and S_1(x_0, α-ℓ, δ_0) depend on κ>0, which is the lower estimate for the probability of the set Ω_κ, where x_n ↛K.
Recall that 0<α-ℓ<α_n<α+ℓ<1, for all n∈ℕ, and β_*-α/ℓ<1.
Reasoning as in the proof of Theorem <ref> and using the same notations (<ref>) and (<ref>) for the random moment 𝒩 and the sets Ω_k, respectively, we conclude that x_S_0+k+S_1∈ (K-δ_0, K+δ_0) on Ω_k. Denoting t=t(k):=S_0+k+S_1,
and considering ν and Ω_ν, t defined as in (<ref>) and (<ref>), respectively, with ℙ(Ω_ν, t)>1-ν, we arrive at
lim_n→∞ x_n=K, on Ω_ν, t∩Ω_k.
Since Ω_ν, t is defined by {ξ_i, i > t(k)=S_0+k+S_1}, while Ω_k is defined by {ξ_i, i≤ t(k)=S_0+k+S_1}, by independence of ξ_i and by definition (<ref>), we have
ℙ( Ω_ν, t(k)∩Ω_k)= ℙ( Ω_ν, t(k)) ℙ(Ω_k)≥
(1-ν) ℙ(Ω_k)=(1-κ/4)ℙ(Ω_k).
Since ∪_k=1^∞Ω_k=Ω, we can choose k_κ s.t. ∑_k=1^k_κℙ(Ω_k )>1-κ/2/1-κ/4. Letting Ω̃:=∪_k=1^k_κ[ Ω_ν, t(k)∩Ω_k], we arrive at
ℙ(Ω̃)= ℙ(∪_k=1^k_κ[ Ω_ν, t(k)∩Ω_k])=∑_k=1^k_κℙ(Ω_ν, t(k)∩Ω_k)≥ (1-κ/4)∑_k=1^k_κℙ(Ω_k)>1-κ/2
and lim_n→∞ x_n=K, on Ω̃. However, by our assumption in (<ref>), we should have
Ω̃⊂Ω∖Ω_κ, so 1-κ/2≤ℙ(Ω̃)≤ℙ(Ω∖Ω_κ)=1-κ. The contradiction proves that lim_n→∞ x_n=K a.s.
§.§ Proof of Theorem <ref>
Since f is continuously differentiable at K and L_0>1, for each ε∈ (0, 1)
there exists θ=θ(ε)>0 such that
(L_0-ε)(K-x)<f(x)-K<(L_0+ε)(K-x), x∈ (K-θ, K),
(L_0-ε)(x-K)< K-f(x)<(L_0+ε)(x-K), x∈ (K, K+θ).
Relations (<ref>) imply that
(f(x)-K)(K-x)>(L-ε)(K-x)^2>0, x∈ (K-θ, K)∪ (K, K+θ),
and that conditions (<ref>) hold with a_1=L_0-ε=a_2.
Assume that (<ref>) holds, set
θ_1:=L_0/L_0+1-α-ℓ>0
and find ε_1∈ (0, 1) s.t., for each ε∈ (0, ε_1),
L_0+ε/L_0+ε+1-α-ℓ>θ_1/2 .
Denote, for simplicity of calculations,
ℳ:=(1-α-ℓξ)L_0-α-ℓξ, ℳ(ε):=(1-α-ℓξ)(L_0+ε)-α-ℓξ,
then, for ε∈ [0, ε_1),
ℳ(ε) >(L_0+ε)- (α+ℓ)(L_0+ε+1)
>(L_0+ε)- [ L_0+ε/L_0+ε+1-θ_1/2](L_0+ε+1)
=θ_1/2(L_0+ε+1)>0.
Acting as in the proof of Theorem <ref>, (i), we obtain
(G(α+ℓξ, x)-K)(K-x)>0, x∈ (K-θ, K)∪ (K, K+θ).
Using (<ref>), for x∈ (K-θ, K) we get G(α+ℓξ, x)-K>0 and
G(α+ℓξ, x)-K=(1-α-ℓξ)(f(x)-K)+(α+ℓξ)(x-K)≤ℳ(ε) (K-x),
while for x∈ (K, K+θ) we obtain G(α+ℓξ, x)-K<0 and
K-G(α+ℓξ, x)=(1-α-ℓξ)(K-f(x))-(α+ℓξ)(x-K)≤ℳ(ε) (x-K),
which leads to |G(α+ℓξ, x)-K|<ℳ(ε) |K-x|. Now,
lnℳ(ε)=ln[ ℳ×(1+(1-α-ℓξ)ε/ℳ)]=lnℳ+
ln(1+(1-α-ℓξ)ε/ℳ).
Choosing ε<θ_1λ_1/4(L_0+1), where λ_1 is from (<ref>), applying the inequality ln (1+x)<x, |x|<1 and (<ref>), we arrive at
(1-α-ℓξ)ε/ℳ≤ε/θ_1/2(L_0+1) <λ_1/2<1, ln(1+(1-α-ℓξ)ε/ℳ)<(1-α-ℓξ)ε/ℳ <λ_1/2,
and then, using (<ref>), we get
𝔼lnℳ(ε)<𝔼lnℳ+
ln(1+(1-α-ℓξ)ε/ℳ)≤ -λ_1/2.
When we do not assume that α+ℓ< L_0/L_0+1 and only (<ref>) holds, we cannot guarantee that ℳ(ε)>0. In this case, by (<ref>), for x∈ (K-θ, K) and G(α+ℓξ, x)-K>0, we get
|G(α+ℓξ, x)-K|=(1-α-ℓξ)(f(x)-K)+(α+ℓξ)(x-K)≤ (1-α-ℓξ)(L_0+ε)(K-x),
while, for x∈ (K-θ, K) and G(α+ℓξ, x)-K<0, by (<ref>) we get f(x)>K, and then
|G(α+ℓξ, x)-K|=-(1-α-ℓξ)(f(x)-K)-(α+ℓξ)(x-K)≤ (α+ℓξ)(K-x).
Similar estimates are applied to two other cases: x∈ (K, K+θ), G(α+ℓξ, x)-K>0 and G(α+ℓξ, x)-K<0. All the above gives us
|G(α+ℓξ, x)-K|<max{(1-α-ℓξ)(L_0+ε), (α+ℓξ)}|K-x|.
Assuming that ε<λ_1 L_0/2 and using ln(1+x)<x for x ∈ (0,1), we obtain
ln[(1-α-ℓξ)(L_0+ε)]=ln[(1-α-ℓξ) L_0]+ln[1+ε/L_0]<ln[(1-α-ℓξ)L_0]+λ_1/2.
Applying the inequality max{a+ϵ, b}≤max{a, b}+ϵ, ϵ>0, and (<ref>), we conclude
𝔼lnmax{(1-α-ℓξ)(L_0+ε), (α+ℓξ)}≤𝔼max{ln[(1-α-ℓξ)L_0]+λ_1/2, ln (α+ℓξ)}
≤ 𝔼max{ln[(1-α-ℓξ)L_0], ln (α+ℓξ)} +λ_1/2<-λ_1/2 .
In both cases the rest of the proof is the same as in Theorem <ref>.
§.§ Proof of Theorem <ref>
Denote by ψ the probability density function (or the probability mass function in a discrete case) of the random variable ξ and let μ_2 be its second moment. Since the distribution is symmetric, we have ψ(u)=ψ(-u), u∈ [-1,1],
μ_2=∫_-1^1u^2ψ(u)du in the continuous case and μ_2=∑_i=1^∞ u_i^2ψ(u_i) in the discrete case.
Choose
0<ℓ_0<min{2/μ_2, 1/L_0+1, L_0-1/(1+μ_2/2)(L_0+1)},
α∈(α_0-ℓ_0^2μ_2/2, α_0 ),
ℓ∈(ℓ_0, min{α, 1/L_0+1}).
Note that the second interval in (<ref>) is not empty. Indeed,
α_0-ℓ_0^2μ_2/2>ℓ_0, ℓ_0[1+ ℓ_0μ_2/2]<ℓ_0[1+ μ_2/2]< L_0-1/(1+μ_2/2)(L_0+1)[1+ μ_2/2]=α_0,
α>α_0-ℓ_0^2μ_2/2>ℓ_0 and ℓ_0<1/L_0+1. Also,
α+ℓ<α_0+ℓ<L_0/L_0+1, α+ℓ>α_0-ℓ_0^2μ_2/2+ℓ_0>α_0,
where the second inequality is true since ℓ_0<2/μ_2.
So we need to prove only the second relation in (<ref>).
By Lemma <ref> (vi), we have ℒ_0(α_0)
=1, so lnℒ_0(α_0)=0. For any α, ℓ satisfying (<ref>), we get α+ℓ<L_0/L_0+1,
ℒ_0(α+ℓξ)
>0,
ℒ_0(α+ℓξ)=ℒ_0(α)-ℓ (L_0+1)ξ,
ℒ_0(α_0+ℓξ)=1-ℓ (L_0+1)ξ, ℒ_0^2(α)>ℒ_0^2(α_0)=1,
0 <ℒ_0^2(α)-1=[L_0-α(1+L_0)+1][L_0-α(1+L_0)-1]=[1-α](1+L_0)[L_0-1/(1+L_0)-α](1+L_0)
=(1+L_0)^2(1-α)(α_0-α)
<(1+L_0)^2(α_0-α).
Applying the inequality ln (1-x)<-x, x∈ (0, 1), we get, for u∈ [-1, 1],
ln [1-ℓ^2 (L_0+1)^2u^2] <- ℓ^2 (L_0+1)^2u^2,
ln [ℒ_0^2(α)-ℓ^2 (L_0+1)^2u^2] <ln [1+(L_0+1)^2(α_0-α)-ℓ^2 (L_0+1)^2u^2]
<(L_0+1)^2(α_0-α)-ℓ^2 (L_0+1)^2u^2.
Let now ξ have a continuous distribution, then μ_2=∫_-1^1u^2ψ(u)du. Applying (<ref>), we get
𝔼ln [ℒ_0(α+ℓξ)]=𝔼ln [L_0-(α+ℓξ)(L_0+1)]
=∫_-1^1ln [ℒ_0(α)-ℓ (L_0+1)u]ψ(u)du
=∫_0^1ln [ℒ_0(α)-ℓ (L_0+1)u]ψ(u)du-∫_1^0ln [ℒ_0(α)+ℓ (L_0+1)y]ψ(-y)dy
= ∫_0^1ln [ℒ_0^2(α)-ℓ^2 (L_0+1)^2u^2]ψ(u)du< ∫_0^1ln [1+(L_0+1)^2(α_0-α)-ℓ^2 (L_0+1)^2u^2]ψ(u)du
<(L_0+1)^2(α_0-α)∫_0^1ψ(u)du-ℓ^2_0 (L_0+1)^2∫_0^1u^2ψ(u)du=(L_0+1)^2(α_0-α)/2- ℓ^2_0 (L_0+1)^2μ_2/2
<(L_0+1)^2ℓ_0^2μ_2/2/2- ℓ^2_0 (L_0+1)^2μ_2/2= - ℓ^2_0 (L_0+1)^2μ_2/4<0,
which proves the second inequality in (<ref>).
Let now ξ be a discrete random variable with an at most countable number of states
{u_1, -u_1, …, u_m, -u_m, …}, u_i∈ [-1,1].
Recall that its probability mass function ψ(u), u∈ℝ, is defined as follows: ψ(u)=0 when u≠ u_i, ψ(± u_i)=P{ξ=± u_i}, where ∑_m=1^∞ψ(u_m)= 1/2.
Choose α, ℓ as in (<ref>) and (<ref>) and
denote H:=max_u∈ [-1, 1]|ln [ℒ_0(α)-ℓ (L_0+1)u]|.
Since ∑_i=1^∞ψ(u_i) is convergent, we can find N_1∈ℕ s.t.
∑_i=N_1+1^∞ψ(u_i)<ℓ_0^2(L_0+1)^2μ_2/(16H), ∑_i=1^N_1 u_i^2ψ(u_i)>7μ_2/16.
The series ∑_i=1^∞ln [ℒ_0(α)±ℓ (L_0+1)u_i]ψ (u_i) is absolutely
convergent, so we can estimate
∑_i=N_1+1^∞ln [ℒ_0(α) ±ℓ (L_0+1)u_i]ψ (u_i)
≤ H∑_i=N_1+1^∞ψ (u_i)<ℓ_0^2(L_0+1)^2μ_2/16.
Further,
𝔼ln [ℒ_0(α+ℓξ)] = ∑_i=1^N_1ln [ℒ_0(α)-ℓ (L_0+1)u_i]ψ (u_i)+∑_i=1^N_1ln [ℒ_0(α)+ℓ (L_0+1)u_i]ψ (u_i)
+∑_i=N_1+1^∞ln [ℒ_0(α)-ℓ (L_0+1)u_i]ψ (u_i)+∑_N_1+1^∞ln [ℒ_0(α)+ℓ (L_0+1)u_i]ψ (u_i)
< ∑_i=1^N_1ln [ℒ_0^2(α)-ℓ^2(L_0+1)^2u_i^2]ψ (u_i)+ℓ_0^2(L_0+1)^2μ_2/8.
Now, applying (<ref>), (<ref>), (<ref>) and acting as in (<ref>), we arrive at
𝔼ln [ℒ_0(α+ℓξ)]
<∑_i=1^N_1ln [1+(L_0+1)^2(α_0-α)-ℓ^2 (L_0+1)^2 u_i^2]ψ (u_i)+
ℓ_0^2(L_0+1)^2μ_2/8
<(L_0+1)^2(α_0-α)∑_i=1^N_1ψ (u_i) -ℓ^2_0 (L_0+1)^2∑_i=1^N_1 u_i^2ψ (u_i)+ℓ_0^2(L_0+1)^2μ_2/8
≤(L_0+1)^2/2[ℓ_0^2μ_2/2-ℓ^2_0 7μ_2/8 +ℓ^2_0 μ_2/4 ]= -ℓ_0^2(L_0+1)^2μ_2/8 <0,
which is the second inequality in (<ref>). The reference to Theorem <ref> concludes the proof.
§.§ Proof of Proposition <ref>
Let ℒ^± be defined by (<ref>) and Ψ(·, ·) by (<ref>).
Since β_0=Ψ (L^-, L^+), see (<ref>), by Lemma <ref> (vi), we have ℒ^-(β_0)ℒ^+(β_0)=1, and also ℒ^±(α)<ℒ^±(β_0). Define
ϕ(x)=ℒ^-(β_0)(K-x)+K, x_m ≤ x≤ K, ϕ(x)=-ℒ^+(β_0)(x-K)+K, K≤ x≤ℒ^-(β_0)(K-x_m)+K,
which is decreasing, ϕ(x)>K, x_m ≤ x < K, ϕ(x)<K, K < x ≤ℒ^-(β_0)(K-x_m)+K
and ϕ(ϕ(x))=x.
Since ϕ(ℒ^-(β_0)(K-x_m)+K)=x_m>0, the function ϕ is also positive.
By (<ref>) and since ℒ^±(α)<ℒ^±(β_0), we get, for α>β_0,
G(α, x)≤ℒ^-(α) (K-x)+K<ϕ(x), x∈ (0, K), G(α, x)≥- ℒ^+(α) (x-K)+K>ϕ(x), x∈ (K, f_m),
and the result follows from Lemma <ref>.
§.§ Proof of Proposition <ref>
To construct an envelope ϕ for G(α, x), α > α_0, we estimate
G(α_0, x)-G(α_0, a_i)≤ L_i^-(1-α_0)(a_i-x)-α_0(a_i-x)=ℒ^-_i(α_0)(a_i-x), a_i+1≤ x≤ a_i,
G(α_0, b_i(α_0))-G(α_0, x)≤ L_i^+(1-α_0)(x-b_i(α_0))-α_0(x-b_i(α_0))=ℒ^+_i(α_0)(x-b_i(α_0)), b_i(α_0)≤ x.
Set α_i :=Ψ(L_i^-, L_i^+), then (<ref>) implies that α_i≤α_0.
Since ℒ^±_i(α)≥ℒ^±_i(α_0) when α≤α_0, and ℒ^-_i(α_i) ℒ^+_i(α_i)=1, see Lemma <ref> (vi),
we get that ℒ^-_i(α_0) ℒ^+_i(α_0)≤ 1. Setting
C_i^-:=ℒ^-_i(α_0), C_i^+:=[ℒ^-_i(α_0)]^-1,
we get C_i^-C_i^+=1, C_i^+≥ℒ^+_i(α_0) and
G(α_0, x)-G(α_0, a_i)≤ C_i^- (a_i-x), a_i+1≤ x≤ a_i, G(α_0, b_i(α_0))-G(α_0, x)≤ C_i^+(x-b_i(α_0)), b_i(α_0)≤ x.
Define ϕ, as in (<ref>), and by straightforward calculations, show that
G(α_0, x)≤ϕ(x), x∈ (0, K), G(α_0, x)≥ϕ(x), x>K.
Since for α > α_0,
G(α, x)< G(α_0, x), x∈ (0, K), G(α_0, x) <G(α, x), x>K,
we conclude that ϕ is an envelope for G(α, x), therefore lim_n→∞x_n=K,
by Lemma <ref>.
§.§ Remark to Proposition <ref>
If there is i_0<m s.t. α_0<Ψ(L_i_0^-, L_i_0^+ ), we can find a bigger parameter α̅ which guarantees global stability of the solution to (<ref>). To show this, we denote
f(x)-f(y)≤ L_i^-(y-x), a_i+1≤ x<y≤ a_i, i=0, 1, …, m-1,
and L^+(z), z>K, such that f(y)-f(x)<L^+(z)(x-y), ∀ x>y≥ z,
b_0=K, b_1=b_1(α_0):=max_x∈ [a_1, K]{G(α_0, x)},
b_i=b_i(α_i-1)=max_x∈ [a_i, K]{G(α_i, x ) }, where α_i are defined inductively:
α_1:=max{α_0, Ψ(L_1^-, L^+(b_1))}, …, α_k:=max{α_0, Ψ(L_i^-, L^+(b_i)), i=1, …, k}.
Set α̅:= max_ i=0, 1, …, m-1{α_i}, ℒ̃^+_i(α):=(1-α)L^+(b_i)-α and note that lim_n→∞ x_n=K, for any α>α̅ and x_0∈ (a_1, b_1).
We want to get the same for each x_0>0. Fix some α>α̅ and assume the contrary: for some k<m we have stability on (a_k, b_k) but x̅:=inf{x∈ (a_k+1, a_k): G^2(α, x)>x }≥ a_k+1. This implies that
G^2(α, x̅)=x̅. By the inductive assumption for (a_k, b_k) we get G(α, x̅) >b_k . Also, G(α, x̅) <b_k+1 since α>α̅> α_k+1 and therefore G(α, x̅)<G(α_k+1, x̅)≤ b_k+1, where the last inequality holds by the definition of b_k+1.
Choose x̂∈ (x̅, a_k) s.t. G(α, x̂)∈ (b_k, b_k+1). Then x̂>x̅, G^2(α, x̂)>x̂, and
G(α, x̅)-G(α, x̂)≤ℒ^-_k(α)(x̂-x̅).
Assuming G(α, x̂)≤ G(α, x̅) we get
x̂- G^2(α, x̅) <
G^2(α, x̂)-G^2(α, x̅)=G(α, G(α, x̂))-G(α, G(α, x̅))
≤ℒ̃_k^+(α)[G(α, x̅)-G(α, x̂)]≤ℒ̃_k^+(α)ℒ^-_k(α)(x̂-x̅)<x̂-x̅, so that G^2(α, x̅)> x̅,
contradicting the definition of x̅.
If, however, G(α, x̂)> G(α, x̅), we can find x̃∈ (x̂, K)⊂ (x̅, K) s.t. G(α, x̅)=G(α, x̃). But then x̅=G^2(α, x̅)=G^2(α, x̃)>x̃,
which contradicts the choice of x̃.
§.§ Proof of Proposition <ref>
Define
𝒰(x, α_0):=G^2(α_0, x)-x, x∈ [x_m, K].
Note that 𝒰(K, α_0)=0 and 𝒰_x'(x, α_0)=G'(α_0, G(α_0, x))G'(α_0, x)-1=[(1-α_0)f'( G(α_0, x)) +α_0][(1-α_0)f'(x) +α_0]-1.
Fix some x∈ (x_max, K) and note that the equation V(u)=0, with
V(u)=V(x, u) := [f'( G(α_0, x))+u(1-f'( G(α_0, x)))][f'(x) +u(1-f'(x))]-1
has two real roots, 1 and Ψ(f'(G(α_0, x)), f'(x)) = (f'(G(α_0, x)) f'(x)-1) /((1-f'(G(α_0, x)))(1-f'(x))) ≤ 1,
by Lemma <ref> (v).
Also, V(u)<0 when u∈(Ψ(f'(G(α_0, x)), f'(x)), 1). By (<ref>), we have
α_0∈(Ψ(f'(G(α_0, x)), f'(x)), 1), which implies that
𝒰'_x(x, α_0)=V(α_0)< 0,
so 𝒰(x, α_0) decreases in x. Therefore, for each x∈ (x_max, K),
G^2(α_0, x)-x=𝒰(x, α_0)> 𝒰(K, α_0)=0.
Set
G_max(α_0)=max_x∈ [0, K]G(α_0,x), and let x_Gmax∈ (0, K) denote a point at which this maximum is attained.
For each y∈ (K, G_max(α_0)) there is x∈ (0, K) s.t. y=G(α_0, x). Due to continuity we can choose x∈ (x_Gmax, K)⊆ (x_max, K). Since G^2(α_0, x)-x=𝒰(x, α_0)>0, we conclude that G^2(α_0, x)>x and therefore x<G(α_0, y). If G^2(α_0, y)>y, there is a point x̂∈ (G(α_0, y), K) s.t. y=G(α_0, x)=G(α_0, x̂), so x<G^2(α_0, x)=G^2(α_0, x̂)=G(α_0, y)<x̂, or G^2(α_0, x̂)<x̂, which is a contradiction to the case proved above.
When x∈ (0, x_max) there exists x̅∈ (x_max, K) s.t. G(α_0, x)=G(α_0, x̅) and we are in the first case. The case x>G_max(α_0) is treated as in Lemma <ref>.
Application of Lemma <ref> proves that any control α>α_0 guarantees global stability.
http://arxiv.org/abs/2307.02960v1 | 20230706124948
Parametric 3D Convolutional Autoencoder for the Prediction of Flow Fields in a Bed Configuration of Hot Particles
Ali Mjalled, Reza Namdar, Lucas Reineking, Mohammad Norouzi, Fathollah Varnik, Martin Mönnigmann
physics.flu-dyn | math.DS
1 Automatic Control and Systems Theory, Ruhr-Universität Bochum, Universitätstraße 150, Bochum, 44801, Germany
2 Interdisciplinary Centre for Advanced Materials Simulation, Ruhr-Universität Bochum, Universitätstraße 150, Bochum, 44801, Germany
The use of deep learning methods for modeling fluid flow has drawn a lot of attention in the past few years.
In situations where conventional numerical approaches can be computationally expensive, these techniques have shown promise in offering accurate, rapid, and practical solutions for modeling complex fluid flow problems.
The success of deep learning is often due to its ability to extract hidden patterns and features from the data, enabling the creation of data-driven reduced models that can capture the underlying physics of the domain.
We present a data-driven reduced model for predicting flow fields in a bed configuration of hot particles.
The reduced model consists of a parametric 3D convolutional autoencoder.
The neural network architecture comprises two main components.
The first part resolves the spatial and temporal dependencies present in the input sequence, while the second part of the architecture is responsible for predicting the solution at the subsequent timestep based on the information gathered from the preceding part.
We also propose the utilization of a post-processing non-trainable output layer following the decoding path to incorporate the physical knowledge, e.g., no-slip condition, into the prediction.
The evaluation of the reduced model for a bed configuration with variable particle temperature showed accurate results at a fraction of the computational cost required by traditional numerical simulation methods.
Keywords: Reduced model, Lattice Boltzmann, Neural networks, Autoencoder
§ INTRODUCTION
Over the past few decades, numerous experimental and numerical studies have been conducted to understand the flow behavior and heat transfer of fluids around solid bodies.
Within this broad spectrum of research, great emphasis has been given to flow fields in packed beds, where the transfer of heat, mass and momentum are increased by providing an extended interfacial surface between the fluid and solid bodies.
Packed beds play a crucial role in a variety of areas, ranging from chemical processing such as separation <cit.>, absorption <cit.> and catalytic processes <cit.> to energy storage applications like sensible heat storage systems <cit.>, and advanced adiabatic compressed air energy storage <cit.>.
This wide range of applications as well as the effect of transfer phenomena on process efficiency, product quality and environmental sustainability <cit.>, motivate investigating the key features of packed beds such as flow field, pressure drop, packing material, bed height, and packing density to ensure optimal performance.
Numerical methods are one of the most effective tools for analyzing the behavior of packed beds and optimizing their design parameters.
The discrete element method (DEM), computational fluid dynamics (CFD) and the lattice Boltzmann method (LBM) are some of the common approaches that have been used to simulate packed bed configurations.
DEM simulations have been applied to examine both the geometrical features of the packed beds and the movement of particles within them.
For example, <cit.> used DEM to examine the effect of shape, restitution coefficient, sliding coefficient, rolling coefficient, and diameter ratio of cubical particles on the behavior of the packed beds.
CFD simulations, on the other hand, can capture the detailed flow field within the bed and the interaction between the fluid and particles.
<cit.> investigated flow field and heat transfer for a slender packed bed under various conditions.
Moreover, a combination of different methods can be adopted.
<cit.> used a coupled DEM-CFD model to study flow field and pressure drop in fixed bed reactors.
LBM, which is a relatively new numerical method, has been utilized to simulate the interactions between fluid and particles, as well as the chemical reactions occurring within packed beds.
<cit.>, for example, used LBM to investigate non-Newtonian fluid flow through a packed bed of uniform spheres.
The aforementioned numerical methods offer a more efficient and cost-effective alternative to experimental analysis for modeling diverse, packed bed configurations with varying properties.
However, simulating large-scale packed beds using these methods requires substantial computational resources, making it a time-consuming and expensive process.
Even with highly parallelizable techniques such as LBM, the simulation of large and complex packed beds can be limited, particularly when it comes to optimization problems.
In this context, building reduced models for packed bed simulations is an essential step toward reducing the computational cost and allowing for sensitivity analysis.
The majority of model reduction approaches rely on mapping the high dimensional space into a smaller one on which the reduced model is defined.
One of the most commonly employed methods in this context is based on proper orthogonal decomposition (POD) followed by a Galerkin projection (see, e.g., <cit.>, for a survey, and <cit.> for an engineering application).
This method uses POD to extract a set of dominant modes that serve as a basis for the reduced space. Subsequently, Galerkin projection is employed to extract a small number (order of ten) of ordinary differential equations by projecting the governing equations into the reduced space.
However, the POD-Galerkin method requires access to the governing equations of the model, which may not be available in many cases.
Therefore, significant work has been dedicated to mitigate this dependency by developing complete data-driven reduced modeling frameworks using artificial intelligence (e.g., <cit.>).
Many of the methods developed rely on replacing the Galerkin projection step with a deep neural network that predicts the temporal evolution of the coefficients associated with the POD modes.
In light of this, various architectures have been used, including deep feedforward (e.g., <cit.>), convolution (e.g., <cit.>) and recurrent (e.g., <cit.>) neural networks, to name just a few.
Despite the advantage of POD as an effective tool to reduce the dimensionality of data, its linear basis approximation makes it incompatible with highly nonlinear systems where a large number of modes is required to capture the most dominant energy content of the data.
Large numbers of modes are required because nonlinear phenomena are approximated as a linear combination of the POD modes.
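For concreteness, the POD compression step amounts to a truncated singular value decomposition of the snapshot matrix. The sketch below is only an illustration of the mechanics (random data stands in for flow snapshots; the dimensions and the truncation rank are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((3000, 100))        # columns are flattened flow-field snapshots
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

r = 10                                              # number of retained POD modes
modes = U[:, :r]                                    # spatial basis (dominant POD modes)
coeffs = modes.T @ (snapshots - mean)               # temporal coefficients of each mode
reconstruction = mean + modes @ coeffs              # rank-r approximation of the snapshots
captured = (s[:r]**2).sum() / (s**2).sum()          # fraction of variance captured by r modes
print(captured)
```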
On the other hand, there are several available approaches to learning nonlinear manifolds, such as Kernel PCA <cit.> and Laplacian eigenmaps <cit.>.
These techniques, however, do not provide an analytical relationship to reconstruct the original data from the compressed representation.
An alternative approach consists of using autoencoders (AEs) as a tool for nonlinear dimensionality reduction (see, e.g., <cit.>, for a survey).
AEs are a special type of unsupervised neural network that are trained to learn hidden features from an input.
Their architecture consists of two parts, namely, the encoder and the decoder.
The former compresses the high-dimensional input into a low-dimensional representation, while the latter learns how to reconstruct the input.
AEs have been extensively used to build data-driven reduced models of flow fields around solid bodies.
<cit.> investigated the ability of AEs for dimensionality reduction of flow field data in the wake of a two-dimensional cylinder.
However, his work focused only on data compression and did not predict temporal evolution.
In contrast, <cit.> developed a multi-level AE reduced model that addresses all aspects of the flow, including dimensionality reduction, temporal evolution, and parameter variation.
<cit.> developed a hybrid AE-LSTM reduced model for the flow field prediction around submarines, where the LSTM (long short-term memory) layer has been utilized to resolve the dynamics of the compressed space.
<cit.> proposed a method to construct AE-based reduced models for unsteady flows around bluff bodies of various shapes.
Apart from AEs, other neural network architectures have been used to approximate flow fields around different shapes, such as airfoils <cit.> and automobiles <cit.>.
Different variations of AEs have been used for reduced flow field modeling.
In their pioneering work, <cit.> developed a reduced model framework based on 3D convolutional AE for the temporal evolution of fluid simulation.
The model demonstrated accurate results across various simulations, including the flow field prediction around a 2D circular cylinder.
In this work, we build upon their findings and extend the framework by making it parametric.
This modification empowers the model to not only reproduce known simulations but also to predict the flow field corresponding to a new simulation parameter.
We apply the extended framework to a bed configuration of hot particles, wherein we propose the use of a non-trainable post-processing output layer to incorporate physical knowledge of the system into the prediction, e.g., the no-slip condition at the fluid-solid interface.
The remainder of the paper is organized as follows. Sec. <ref> presents the numerical model for generating the simulation database. A comprehensive overview of AEs is given in Sec. <ref>. Sec. <ref> explains the framework used in this work to build a parametric data-driven reduced model based on 3D convolutional AE. The results are presented and discussed in Sec. <ref>. Finally, conclusions are provided in Sec. <ref>.
§ NUMERICAL MODEL
We present in this section the governing equations together with a brief explanation of the numerical model used in this study.
The model is validated by comparing the numerical results with benchmark data.
The results of the numerical model will serve as training data for the reduced model.
§.§ Governing equations and lattice Boltzmann-finite difference model
A variation of the Navier-Stokes equations is employed that accounts for the compressibility arising from the variation of temperature.
It should be noted that these equations hold true only in low Mach number flows.
The governing equations that ensure the conservation of mass, momentum and energy for this model are:
∂_t ρ + ∇· (ρu) = 0
∂_t (ρu) + ∇· (ρu⊗u) = - ∇P_h+ ∇·τ + F
∂_t (ρ h) + ∇·(ρu h) = -∇·q + ∂_t P_t
ρ = M_w P_t/(R T),
where ρ, u, τ and F represent the density, velocity, viscous stress tensor and body force per unit volume of the gas, respectively.
Moreover, h, q, R, M_w and T stand for enthalpy, heat flux, universal gas constant (as the gas is assumed ideal in our simulation), molecular weight and temperature of the gas, respectively.
An important point to remark in this model is that pressure is divided into two parts, namely, hydrodynamic pressure (P_h) and thermodynamic pressure (P_t).
P_h considers the spatial and temporal variation of the pressure within the flow.
On the other side, P_t is uniform spatially and depends only on time.
To solve these equations, a combined lattice Boltzmann-finite difference (LB-FD) method was utilized.
In particular, the lattice Boltzmann method was employed to model the flow field, which takes into consideration heat expansion, while the energy equation was solved using the finite difference method.
The kinetic equations described below represent a modified version of the conventional lattice Boltzmann method that has the capability of recovering the thermal compressible form of Navier-Stokes equations beyond the Boussinesq approximation (see, e.g., <cit.>, for more details):
∂ g_α/∂ t + c_α·∇g_α = -1/τ_r(g_α-g_α^ eq) + Ξ_α,
Ξ_α = (c_α - u) · [∇(ρ c_s^2) (f^eq/ρ - w_α) + f^eq/ρ] + ρ c_s^2 w_α ∇·u,
f_α^eq = ρ w_α(1 + (c_α·u)/c_s^2 + (c_α·u)^2/(2c_s^4) - u^2/(2c_s^2)) ,
g_α^eq = c_s^2 f_α^eq + w_α (P_h - ρ c_s^2) ,
where g_α, c_α, τ_r, w_α and f_α are the modified distribution function, discrete velocities, relaxation coefficient, weights, and the standard distribution function in the lattice Boltzmann model, respectively, and c_s denotes the lattice speed of sound.
Sutherland's law is used to determine the dynamic viscosity μ, which is then used to compute the relaxation coefficient τ_r according to
μ = ρ c_s^2 (τ_r - 0.5).
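As an illustration of this step, the following short Python sketch (ours, not part of the original solver) evaluates Sutherland's law and converts the resulting viscosity into the relaxation coefficient; the reference constants are standard air values and all quantities are assumed to already be expressed in consistent lattice units with c_s^2 = 1/3.

```python
def relaxation_coefficient(T, rho, cs2=1.0 / 3.0,
                           mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Sutherland's law followed by tau_r = mu / (rho * cs^2) + 0.5.

    The reference constants are standard values for air (an assumption here);
    T, rho and the returned tau_r are taken to be in consistent lattice units.
    """
    mu = mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)   # Sutherland's law
    return mu / (rho * cs2) + 0.5                               # invert mu = rho*cs^2*(tau_r - 0.5)
```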
The reader is referred to <cit.> for more details about the numerical model.
§.§ Validation of the numerical model
To ensure the validity of the numerical model, two tests were conducted and their corresponding results are compared against literature data.
These tests involved simulating gas flow around a single 2D square or circular body at a Reynolds number of 100.
Known velocity and pressure conditions were applied at the inlet and outlet boundaries, respectively. The no-slip boundary condition was applied at the fluid-solid interface, while the upper and lower boundaries were subjected to periodic boundary conditions.
To evaluate the momentum exchange, the drag coefficient C_D was calculated for the isothermal case employing different grid resolutions.
We present the obtained results in Tab. <ref> and compare them to benchmark data. The results demonstrate mesh convergence starting from Δx = 3×10^-4 m, and agreement with the benchmark data is obtained within a deviation of 9%.
To validate the heat transfer rate, a similar analysis is performed to compute the Nusselt number Nu for a flow passing over square/circular body with 100K temperature difference.
The calculation of the local Nu and average Nu is according to the definitions given by <cit.>:
Nu = D/[λ_0 (T_particle - T_inlet)] · (λ ∂T/∂n)_surf.,
Nu = (1/A) ∫_A Nu dA,
where λ and λ_0 are the heat conductivities of the gas evaluated at the particle surface temperature and at the mean temperature T_0=(T_particle+T_inlet)/2, respectively.
As the Prandtl number Pr is assumed to be constant, the heat conductivity is computed as λ=(μ c_p)/Pr.
The results depicted in Tab. <ref> show an agreement between the obtained Nu and the benchmark results within an accuracy of 2%.
Based on the results of Tabs. <ref> and <ref>, the grid defined with Δx = 3×10^-4 m will be used to generate the database of simulations required to build the reduced model.
§.§ Simulation setup
The setup of the simulation is shown in Fig. <ref>.
The spatial dimension is 2D, wherein three rows of circular particles are arranged at a specific distance from each other.
The corresponding simulation parameters are depicted in Tab. <ref>.
This simple configuration is chosen as a first step to investigate the ability of deep learning to build reduced models for flow fields in packed bed configurations.
Despite the relatively sparse arrangement of particles, the implementation of periodic boundary conditions at the side boundaries of the bed helps to minimize surface effects and thus mimic bulk behavior to a certain degree.
§ A BRIEF INTRODUCTION TO AUTOENCODERS
AEs were first introduced by <cit.> as neural networks trained to reproduce their inputs.
Unlike supervised learning, which relies on input-output pairs with explicit labels, training an AE is considered unsupervised learning because no labeled data are required. The objective is to learn an identity mapping 𝒜.
In general, 𝒜 can be identified using a neural network composed of two sub-networks, an encoder and a decoder.
The encoder function g_θ(𝐱):ℝ^N →ℝ^p maps the input 𝐱 into a lower dimensional representation 𝐳, referred to as the latent vector, such that p ≪ N.
The latent vector 𝐳 is then passed to the decoder function h_ϕ(𝐳):ℝ^p →ℝ^N which generates an approximated solution 𝐱̂ to the input.
The indices θ and ϕ refer to the training parameters of the encoder and decoder networks, respectively.
Training an AE consists of finding the optimal training parameters θ and ϕ that minimize a loss function ℒ(𝐱,𝐱̂) between the original and reconstructed inputs, i.e.,
θ_opt, ϕ_opt = argmin_θ, ϕℒ(𝐱,h_ϕ(g_θ(𝐱))).
Through its encoder function, the AE compresses the input into a condensed representation that captures its most important features and patterns, reducing the dimensionality while maintaining critical information.
In this sense, AEs can be considered as a nonlinear generalization of POD.
The AE architecture can be varied depending on the data type and its input format.
Several types of AEs have been developed <cit.>, each tailored to address specific problems or exploit particular data features.
In this paper, we will provide an overview of those relevant to our work.
§.§ Fully-connected autoencoder
This variant of AE uses fully-connected layers to find a compressed representation 𝐳 for the input 𝐱 (see Fig. <ref>).
Every fully connected layer performs the following operation:
𝐲 = 𝐟(𝐖𝐱 + 𝐛),
where 𝐲∈ℝ^l is the output of the layer (input of the following layer in a deep network), 𝐖∈ℝ^l × N is a weight matrix and 𝐛∈ℝ^l is the bias vector.
The nonlinear activation function 𝐟:ℝ^l →ℝ^l is applied element-wise to the output of the affine transformation.
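For concreteness, a minimal fully-connected AE of this kind can be written in a few lines of Keras; the sketch below is our own illustration, and the layer widths and latent dimension p are illustrative choices rather than values used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers

N, p = 1024, 16                                   # input and latent dimensions (illustrative)
inp = layers.Input(shape=(N,))
z = layers.Dense(128, activation="tanh")(inp)     # encoder g_theta
z = layers.Dense(p, activation="tanh")(z)         # latent vector z
out = layers.Dense(128, activation="tanh")(z)     # decoder h_phi
out = layers.Dense(N, activation="linear")(out)   # reconstruction x_hat
autoencoder = tf.keras.Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse") # minimize L(x, x_hat)
```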
This variant of AE is not well suited for high-resolution 2D input images because it requires flattening the input into a 1D vector.
In addition, flattening the image disregards the spatial structure, i.e., the spatial connections between neighboring pixels are not explicitly maintained.
An alternative would be to use convolutional layers to preserve the spatial information.
§.§ Convolutional autoencoder
In this variation of AE, the fully connected layers are replaced by convolutional layers.
They are frequently used in computer vision tasks because they capture spatial dependencies effectively (see, e.g., <cit.>).
As the name suggests, a convolutional layer applies a convolution operation to its input.
Convolution is the process of applying a kernel or filter to the input data and computing the element-wise multiplications between the filter entries and the corresponding input values.
The results are then summed.
Mathematically, the convolution operation can be expressed in the same form as (<ref>), with 𝐖 being a sparse, Toeplitz-structured (circulant, in the periodic case) weight matrix.
An illustration of a 2D convolutional operation is shown in Fig. <ref>.
It should be noted that the underlying convolutional principle is the same for 1D and 3D cases.
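To make the equivalence with (<ref>) concrete, the following NumPy sketch (an illustration of ours, not part of the model) verifies that a 1D valid convolution with stride 1 coincides with multiplication by a sparse, Toeplitz-structured weight matrix.

```python
import numpy as np

x = np.arange(6.0)                     # input signal
k = np.array([1.0, -2.0, 0.5])         # convolution kernel (filter)

# direct sliding-window cross-correlation, as used in convolutional layers
direct = np.array([np.dot(k, x[i:i + k.size]) for i in range(x.size - k.size + 1)])

# equivalent weight matrix: each row contains the kernel, shifted by one position
W = np.zeros((x.size - k.size + 1, x.size))
for i in range(W.shape[0]):
    W[i, i:i + k.size] = k

assert np.allclose(direct, W @ x)      # same operation, written as y = W x
```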
§.§ Temporal autoencoder
Temporal AEs are used in many applications, e.g., video prediction and motion forecasting, where temporal coherence needs to be learned from sequential data (see, e.g., <cit.>).
The working principle of a temporal AE is slightly different from that of the conventional one described above. In a temporal AE, the focus is on the prediction of the temporal evolution of the input data, i.e.,
𝐱(t+1) = 𝒯(𝐱(t)),
where 𝐱(t) and 𝐱(t+1) denote the input variable at the current and next timesteps, respectively, and 𝒯 is the neural network that encloses the encoder and decoder functions.
Temporal AEs require supervised learning because the temporal dependency is learned by providing the ground truth sequence during training.
Both fully-connected and convolutional layers can be used to build the architecture of the temporal AE.
The terminology autoencoder is used here to describe the underlying structure of an encoder-decoder model.
Its use, however, departs from the conventional approach defined earlier, where the encoder compresses the input data, and the decoder reconstructs it at the output layer.
Instead, in a temporal AE, the encoder learns temporal relationships from a sequence, and the decoder reconstructs the data which serves as input at the following timestep (see Fig. <ref>).
§ DATA-DRIVEN REDUCED MODEL
We extend the framework introduced by <cit.> for constructing a neural network-based reduced model for fluid simulations by making it parametric, i.e., the dynamics are predicted for new values of the parameters that were not used during training.
The reduced model is represented as a 3D temporal convolutional AE.
§.§ Model architecture
We define the snapshot 𝐬(t_m) ∈ℝ^N_x × N_y × N_v as a multichannel image of the flow field solution at time step t_m.
Each of the N_v channels represents the solution of a flow field variable (velocity magnitude or temperature in our case) defined on a uniform 2D Cartesian grid of dimension N_x × N_y.
Dynamic flow field simulations can be analyzed as a temporal sequence of solution snapshots. Given the previous h snapshots and the corresponding simulation parameter value μ, the reduced model predicts the flow field solution at the next time step, i.e.,
𝐬(t_h+1) ≈𝒩(μ,𝐬(t_0),⋯,𝐬(t_h)),
where 𝒩 is a nonlinear function that results from training a 3D temporal convolutional AE.
As an extension of the 2D convolutional layers illustrated in Sec. <ref>, the 3D convolutional layers used here operate not only on the spatial but also on the temporal dimension.
Therefore, they are well suited for applications where both the spatial and temporal dimensions of the data are significant, such as video classification, action recognition, and spatio-temporal segmentation.
The parametric data-driven reduced model framework and the model architecture are presented in Fig. <ref>.
The 3D temporal convolutional AE used in this work employs an encoding-decoding path to learn the underlying dynamics of the spatio-temporal data.
The encoding path consists of four 3D convolutional layers with a tanh activation function.
A set of non-trainable layers precedes the encoding path of our model to combine its different inputs, i.e., the input sequence and the corresponding simulation parameter μ, which is the temperature of the solid particles in our simulation.
It should be noted that μ is given to the network as an image of the initial condition of the flow.
The image format of the parameter value facilitates the combination of μ with the other inputs, as they are also presented in image form.
Once the temporal sequence has been analyzed in the encoding path, the neural network predicts the solution at the next time step by reconstructing the flow field image with the original size in the decoding path.
For this purpose, four 2D convolutional transpose layers with tanh activation are used.
The decoding path is followed by a non-trainable post-processing layer that effectively utilizes the physical information embedded within the parameter image.
Specifically, we use the location of particles to enforce a no-slip condition to guide the prediction.
Incorporating these constraints will improve the prediction accuracy of our framework.
In order to improve the efficiency of the training, we use a batch normalization layer after every intermediate convolutional operation. This technique helps to speed up convergence and reduce overfitting <cit.>.
Tab. <ref> shows the detailed architecture of the model. In this architecture, all the convolutional layers operate with a stride of 2 in all image dimensions to systematically decrease/increase the output size. The filter size used for the 3D convolutional layers is 2× 4 × 4, while the 2D convolutional transpose layers use a filter of size 4 × 4.
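A minimal Keras sketch of this layer arrangement is given below. It is our own illustration: the filter counts, the exact placement of batch normalization, and the way the parameter image is stacked with the h = 10 input snapshots are assumptions rather than the exact configuration of Tab. <ref>.

```python
import tensorflow as tf
from tensorflow.keras import layers

H, W, C = 128, 256, 2          # grid size and number of field variables (e.g. T and |u|)
SEQ = 10 + 1                   # h = 10 past snapshots plus one parameter image (assumption)

inp = layers.Input(shape=(SEQ, H, W, C))
x = inp
for filters in (8, 16, 32, 64):                             # encoding path: 3D convolutions
    x = layers.Conv3D(filters, kernel_size=(2, 4, 4), strides=2,
                      padding="same", activation="tanh")(x)
    x = layers.BatchNormalization()(x)
x = layers.Reshape((H // 16, W // 16, 64))(x)               # temporal dimension reduced to 1
for filters in (32, 16, 8, C):                              # decoding path: 2D transposed convs
    x = layers.Conv2DTranspose(filters, kernel_size=(4, 4), strides=2,
                               padding="same", activation="tanh")(x)
model = tf.keras.Model(inp, x)                              # output: next snapshot of shape (H, W, C)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
```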
§.§ Input and output datasets
The training data used in this study comprises high-dimensional simulation snapshots 𝐬(t_m) and their corresponding simulation parameter μ. The high-dimensional snapshots are obtained from solving the numerical model described in Sec. <ref>, and to make them compatible with the input layer of the neural network, they are first interpolated on a uniform Cartesian grid of dimension 256 × 128.
The interpolated values are considered ground truth and will be used to train and evaluate our model.
Subsequently, the interpolated snapshots are normalized between -1 and 1 for numerical stability during the training process according to
χ_norm = 2·(χ - χ_min)/(χ_max - χ_min) - 1,
where χ is the corresponding pixel value of the image and χ_min and χ_max are the minimum and maximum pixel values, respectively, identified for all snapshots for all training simulations parameterized with μ.
It is important to note that each image channel, i.e., variable field, is scaled separately. The parameter images are also normalized as they are considered another input to the neural network.
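In code, the per-channel scaling and its inverse can be written as follows; this is a small utility sketch of ours, assuming snapshots stored as an array of shape (n_snapshots, H, W, n_channels).

```python
import numpy as np

def scale_to_unit_range(snapshots):
    """Scale each channel of an array shaped (n_snapshots, H, W, n_channels) to [-1, 1]."""
    lo = snapshots.min(axis=(0, 1, 2), keepdims=True)   # per-channel minimum over all snapshots
    hi = snapshots.max(axis=(0, 1, 2), keepdims=True)   # per-channel maximum over all snapshots
    return 2.0 * (snapshots - lo) / (hi - lo) - 1.0, lo, hi

def unscale(scaled, lo, hi):
    """Invert the scaling, reverting predictions to physical units."""
    return (scaled + 1.0) * (hi - lo) / 2.0 + lo
```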
For the purpose of training, the snapshots are structured sequentially:
X(μ) = [ 𝐬(t_0) 𝐬(t_1) ⋯ 𝐬(t_h); 𝐬(t_1) 𝐬(t_2) ⋯ 𝐬(t_h+1); ⋮ ⋮ ⋯ ⋮; 𝐬(t_N_t - h -1) 𝐬(t_N_t - h) ⋯ 𝐬(t_N_t - 1) ],
Y(μ) = [ 𝐬(t_h+1); 𝐬(t_h+2); ⋮; 𝐬(t_N_t) ],
where N_t is the total number of simulation snapshots, X is the sequence of images given as input and Y is the corresponding output for a single simulation.
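The sliding-window construction of X(μ) and Y(μ) amounts to the following few lines (our sketch):

```python
import numpy as np

def build_sequences(snapshots, h):
    """snapshots: array of shape (N_t, H, W, C); returns input windows and next-step targets."""
    X = np.stack([snapshots[i:i + h] for i in range(len(snapshots) - h)])  # (N_t - h, h, H, W, C)
    Y = snapshots[h:]                                                      # (N_t - h, H, W, C)
    return X, Y
```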
§.§ Offline training and online prediction
The supervised learning approach is used to approximate 𝒩, which consists of finding the optimal training parameters θ that minimize a loss function ℒ(𝐬(t_m),𝐬̂(t_m)) between the ground truth 𝐬(t_m) and the predicted solution 𝐬̂(t_m):
θ_opt = argmin_θℒ(𝐬(t_m),𝐬̂(t_m)).
The optimization problem shown in (<ref>) can be solved using the gradient descent algorithm or one of its variants, such as Adam optimizer <cit.>. Backpropagation <cit.> is used to compute the partial derivatives required to update the trainable parameters θ. The optimization process continues until convergence or until a certain stopping criterion is met. The training procedure is summarized in Algorithm <ref>.
Once the neural network has been trained, it can be used to predict the flow field solution for a new value of μ that was not used during training.
The prediction process starts by providing the first h snapshots as an input to the model.
In this work, we use the ground truth snapshots to make our initial predictions. We emphasize that the prediction is recursive (see Fig. <ref>). The online step is concluded with an unscaling process to revert the snapshots to their original scale. Algorithm <ref> summarizes the prediction of the flow field solution given a new value of the simulation parameter.
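The recursive online prediction can be sketched as follows; the way the parameter image is prepended to the window is an assumption on our part, and `model` stands for the trained network of the previous section.

```python
import numpy as np

def rollout(model, param_image, first_h_snapshots, n_steps):
    """Predict n_steps snapshots recursively, starting from h scaled ground-truth frames."""
    window = list(first_h_snapshots)
    predictions = []
    for _ in range(n_steps):
        sequence = np.stack([param_image, *window])   # assumption: parameter image prepended
        nxt = model.predict(sequence[None, ...], verbose=0)[0]
        predictions.append(nxt)
        window = window[1:] + [nxt]                   # slide the input window forward
    return np.stack(predictions)
```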
§ RESULTS
We present a comprehensive evaluation of the performance of the parametric 3D temporal convolutional AE in this section.
Our primary objective is to develop a data-driven framework for the high-dimensional dynamic system presented in Sec. <ref>.
The neural network consists of the 3D temporal convolutional AE explained in Sec. <ref> that predicts the flow field solution at the next timestep given the previous h=10 snapshots and the corresponding value of μ.
The length h of the input sequence is a hyperparameter of the model and can be adjusted if needed.
The training and validation datasets were obtained by running the high-fidelity model presented in Sec. <ref>. Each simulation is parameterized by the temperature of the particles T^par.
The neural network is trained using Algorithm <ref> for the particle temperature parameters T^par, tr = 300,400,500,600,700,800,900,1000,1100,1200K.
A total number of 4000 snapshots (400 for every training simulation) were collected with a uniform timestep of 6.2× 10^-4 s.
The Adam optimizer <cit.> with learning rate η = 0.001 and batch size N_b = 10 is used to find the optimal training parameters θ_opt.
The loss function used is the mean squared error, and the neural network is trained for 250 epochs using the open-source deep learning library TensorFlow <cit.>.
Once the model is trained, we use Algorithm <ref> to predict the flow field solution for a new value of μ.
Figs. <ref> and <ref> evaluate the parametric 3D convolutional autoencoder model for the validation parameter T^par, val = 750K.
The snapshots shown are chosen randomly and correspond to the solutions at t=0.0124, 0.0868, 0.1426 s for the temperature field (Fig. <ref>) and velocity field (Fig. <ref>).
It is evident that the first half of the domain, which contains the zone of interest, i.e., the flow in the void space between particles, was accurately predicted by our model with a very small visual difference.
For the sake of a more quantitative comparison, the variations of the temperature and velocity along Line 1 and Line 2 from Fig. <ref> are plotted in Figs. <ref> and <ref>.
These plots, generated at randomly chosen time steps, correspond to two validation simulations parameterized with T^par, val = 550K and T^par, val = 750K.
Results demonstrate the capability of the parametric 3D convolutional AE to predict the flow field solution in packed bed configuration.
However, the predicted solution deviates slightly from the original solution in the downstream region, where the flow is turbulent.
One possible reason for this deviation is the limited amount of training data used.
Moreover, due to the recursive prediction of the model and the resulting accumulation of errors, accurately predicting the flow field in highly turbulent and chaotic regions becomes challenging.
However, we emphasize that the model still captures the main flow pattern of vortex shedding, which is the dominant flow behavior in this region.
In addition, the downstream domain is, in many cases, less relevant for practical applications.
Consequently, while accurate prediction in the downstream is desirable, it may not always be necessary. Instead, we can highlight the model's ability to accurately predict the flow field solution in the void space between particles in a bed configuration.
Figs. <ref> to <ref> show that our reduced model can predict the flow field solution for new simulation parameters well and that the dominant prediction errors are mainly located in the highly turbulent downstream region. In contrast, our model accurately predicts the flow field in the void space between particles.
While a precise comparison of the computational time required to evaluate the high dimensional model and the reduced model is not possible due to the fine grid (8000 × 500) used for the numerical solution, we directly compare the average computational time required to solve the high fidelity model on six processors (≈ 27365 s) to that of the single thread evaluation of the reduced model (≈ 63 s).
It is evident that a significant reduction in the computational time is obtained.
§ CONCLUSION
In this work, a reduced model for predicting flow fields in a bed configuration of hot spheres has been developed.
The model is data-driven and does not require access to the governing equations of the high-fidelity model.
Our work builds upon the previous work by <cit.>, who use a 3D temporal convolutional autoencoder to learn from a temporal sequence of 2D flow field snapshots and predict the temporal evolution.
In our study, we have extended this framework by making it parametric, allowing the model to predict the solution for a new simulation parameter.
Therefore, our reduced model learns reduced order embeddings of high-dimensional dynamic systems and substantially reduces the computational time required to approximate the solution for a new simulation parameter.
Moreover, the framework presented here suggests using a post-processing non-trainable output layer to incorporate the available physical knowledge of the system into the prediction.
This is helpful to improve the overall quality of the results because the output of the model is stacked with the inputs at the next time step.
In order to evaluate our model, we compared the original results of a simulation parameterized with a novel parameter, which was not used during the training of the model, with the predicted solution using the parametric 3D temporal convolution autoencoder reduced model.
We emphasize that the prediction is made recursively and does not require supervision from the ground truth.
Significant CPU time and memory reduction have been obtained since the reduced model does not demand high dimensional discretization of the numerical domain to produce an accurate solution.
Results also showed that the model was able to accurately predict the flow field in the void region between particles, which is useful for many practical applications, including heat transfer analysis.
However, the performance of the reduced model in the highly turbulent and chaotic downstream region is limited.
Despite this limitation, the overall vortex-shedding pattern in this region is captured.
Even though the application of the presented framework to a densely packed bed configuration remains to be explored, this study shows the potential of using deep learning to build reduced models for the prediction of flow fields in the void space in a packed bed configuration.
In addition, the framework used in this study has potential applications in various engineering domains, such as heat exchangers.
Future work will focus on improving the prediction accuracy in the downstream region, considering highly packed beds and extending the framework to include more simulation parameters.
§ ACKNOWLEDGMENTS
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 422037413 – TRR 287.
Directed Poincaré Inequalities and L^1 Monotonicity Testing of Lipschitz Functions
Renato Ferreira Pinto Jr. (Partly funded by an NSERC Canada Graduate Scholarship Doctoral Award.)
University of Waterloo
=====================================================================================================================================
We study the connection between directed isoperimetric inequalities and monotonicity testing. In
recent years, this connection has unlocked breakthroughs for testing monotonicity of functions
defined on discrete domains. Inspired by the rich history of isoperimetric inequalities in
continuous settings, we propose that studying the relationship between directed isoperimetry and
monotonicity in such settings is essential for understanding the full scope of this connection.
Hence, we ask whether directed isoperimetric inequalities hold for functions f : [0,1]^n → ℝ, and whether this question has implications for monotonicity testing. We answer both
questions affirmatively. For Lipschitz functions f : [0,1]^n → ℝ, we show the inequality
d^mono_1(f) ≲ ‖∇^- f‖_1, which upper bounds the L^1 distance to
monotonicity of f by a measure of its “directed gradient”. A key ingredient in our proof is
the monotone rearrangement of f, which generalizes the classical “sorting operator”
to continuous settings. We use this inequality to give an L^1 monotonicity tester for
Lipschitz functions f : [0,1]^n → ℝ, and this framework also implies similar results for
testing real-valued functions on the hypergrid.
§ INTRODUCTION
In property testing, algorithms must make a decision about whether a function f : Ω→ R
has some property 𝒫, or is far (under some distance metric) from having that property,
using a small number of queries to f. One of the most well-studied problems in property testing is
monotonicity testing, the hallmark case being that of testing monotonicity of Boolean
functions on the Boolean cube, f : {0,1}^n → {0,1}. We call f monotone if f(x) ≤ f(y) whenever
x ≼ y, i.e., x_i ≤ y_i for every i ∈ [n].
A striking trend emerging from this topic of research has been the connection between monotonicity
testing and isoperimetric inequalities, in particular directed analogues of classical results
such as Poincaré and Talagrand inequalities. We preview that the focus of this work is to further
explore this connection by establishing directed isoperimetric inequalities for functions f :
[0,1]^n → ℝ with continuous domain and range, and as an application obtain monotonicity testers
in such settings. Before explaining our results, let us briefly summarize the connection between
monotonicity testing and directed isoperimetry.
For a function f : {0,1}^n → ℝ, let d^const_1(f) denote its L^1 distance to the closest constant
function g : {0,1}^n → ℝ, and for any point x, define its discrete gradient ∇f(x) ∈ ℝ^n by (∇f(x))_i := f(x^i → 1) - f(x^i → 0) for each i ∈ [n], where x^i
→ b denotes the point x with its i-th coordinate set to b. Then the following
inequality[The left-hand side is usually written in terms of the variance of f instead; for Boolean functions,
the two quantities are equivalent up to a constant factor, and writing d^const_1(f) is more
consistent with the rest of our presentation.] is usually called the Poincaré inequality on the
Boolean cube (see <cit.>): for every f : {0,1}^n → {0,1},
d^const_1(f) ≲ ‖∇f‖_1 .
(Here and going forward, we write f ≲ g to denote that f ≤ c g for some universal
constant c, and similarly for f ≳ g. We write f ≈ g to denote that f ≲ g
and g ≲ f.)
Now, let d^mono_1(f) denote the L^1 distance from f to the closest monotone function g : {0,1}^n → ℝ, and for each point x let ∇^- f(x), which we call the directed gradient of f,
be given by ∇^- f(x) := min{ ∇f(x), 0 } (coordinate-wise). Then <cit.> were the first to
notice that the main ingredient of the work of <cit.>, who gave a monotonicity tester for
Boolean functions on the Boolean cube with query complexity O(n/ϵ), was the following
“directed analogue” of (<ref>)[Typically the left-hand side
would be the distance to a Boolean monotone function, rather than any real-valued monotone
function, but the two quantities are equal; this may be seen via a maximum matching of violating
pairs of f, see <cit.>.]: for every
f : {0,1}^n → {0,1},
d^mono_1(f) ≲ ‖∇^- f‖_1 .
The tester of <cit.> is the “edge tester”, which samples edges of the Boolean cube
uniformly at random and rejects if any sampled edge violates monotonicity. Inequality
(<ref>) shows that, if f is far from monotone, then many edges are
violating, so the tester stands a good chance of finding one.
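For intuition, the edge tester admits a very short description in code. The sketch below is our illustration of the sampling strategy (with an arbitrary constant c), not the exact procedure of <cit.>.

```python
import random

def edge_tester(f, n, eps, c=10):
    """Sample O(n/eps) uniformly random edges of {0,1}^n and reject on any violation.

    f maps tuples in {0,1}^n to real values; c is an illustrative constant."""
    for _ in range(int(c * n / eps) + 1):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return "reject"
    return "accept"
```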
In their breakthrough work, <cit.> gave the first monotonicity tester with o(n) query
complexity by showing a directed analogue of Margulis's inequality. This was improved by
<cit.>, and eventually the seminal paper of <cit.> resolved the problem of (nonadaptive)
monotonicity testing of Boolean functions on the Boolean cube, up to polylogarithmic factors, by
giving a tester with query complexity O(√(n) / ϵ^2). The key ingredient was
to show a directed analogue of Talagrand's inequality. Talagrand's inequality
gives that, for every f : {0,1}^n → {0,1},
d^const_1(f) ≲ ‖∇f‖_2 .
Compared to (<ref>), this replaces the ℓ^1-norm of the gradient with
its ℓ^2-norm. <cit.> showed the natural directed analogue[In fact, they require
a robust version of this inequality, but we omit that discussion for simplicity.] up to
polylogarithmic factors, which were later removed by <cit.>: for every f : {0,1}^n → {0,1},
d^mono_1(f) ≲ ‖∇^- f‖_2 .
Since then, directed isoperimetric inequalities have also unlocked results in monotonicity testing
of Boolean functions on the hypergrid <cit.> (see also <cit.>)
and real-valued functions on the Boolean cube <cit.>.
Our discussion so far has focused on isoperimetric (Poincaré-type) inequalities on
discrete domains. On the other hand, a rich history in geometry and functional analysis,
originating in continuous settings, has established an array of isoperimetric inequalities for
functions defined on continuous domains, as well as an impressive range of connections to topics
such as partial differential equations <cit.>, Markov diffusion processes <cit.>,
probability theory and concentration of measure <cit.>, optimal transport <cit.>,
polynomial approximation <cit.>, among others. (See <ref> for a
brief background on Poincaré-type inequalities.)
As a motivating starting point, we note that for suitably smooth (Lipschitz) functions f : [0,1]^n
→ ℝ, an L^1 Poincaré-type inequality holds <cit.>:
d^const_1(f) ≲ ‖∇f‖_2 .
Thus, understanding the full scope of the connection between classical isoperimetric inequalities,
their directed counterparts, and monotonicity seems to suggest the study of the continuous setting.
In this work, we ask: do directed Poincaré-type inequalities hold for functions f with
continuous domain and range? And if so, do such inequalities have any implications for monotonicity
testing? We answer both questions affirmatively: Lipschitz functions f : [0,1]^n → ℝ
admit a directed L^1 Poincaré-type inequality (<ref>),
and this inequality implies an upper bound on the query complexity of testing monotonicity of such
functions with respect to the L^1 distance (<ref>). (We view L^1 as the natural
distance metric for the continuous setting; see <ref> for a discussion.) This
framework also yields results for L^1 testing monotonicity of real-valued functions on the
hypergrid f : [m]^n → ℝ. Our testers are partial derivative testers, which naturally
generalize the classical edge testers <cit.> to continuous domains.
We now introduce our model, and then summarize our results.
§.§ L^p-testing
Let (Ω, Σ, μ) be a probability space (typically for us, the unit cube or hypergrid
with associated uniform probability distribution). Let R ⊆ be a range, and a
property of functions g : Ω→ R. Given a function f : Ω→, we denote the L^p
distance of f to property by d_p(f, ) inf_g ∈ d_p(f,g), where
d_p(f,g) x ∼μ*f(x)-g(x)^p^1/p. For fixed domain Ω, we write
d^_p(f) for the L^p distance of f to the property of constant functions, and
d^_p(f) for the L^p distance of f to the property of monotone functions. (See
<ref> for a formal definition contemplating the required measurability and
integrability assumptions.)
Let p ≥ 1. For probability space (Ω,Σ,μ), range R ⊆, property
⊆ L^p(Ω,μ) of functions g : Ω→ R, and proximity parameter
ϵ > 0, we say that randomized algorithm A is an L^p-tester for with
query complexity q if, given oracle access to an unknown input function f : Ω→
R ∈ L^p(Ω,μ), A makes at most q oracle queries and 1) accepts with probability at
least 2/3 if f ∈; 2) rejects with probability at least 2/3 if d_p(f, ) >
ϵ.
We say that A has one-sided error if it accepts functions f ∈ with probability 1,
otherwise we say it has two-sided error. It is nonadaptive if it decides all of its
queries in advance (before seeing output from the oracle), and otherwise it is adaptive.
We consider two types of oracle:
Value oracle: Given point x ∈Ω, this oracle outputs the value f(x).
Directional derivative oracle: Given point x ∈Ω and vector v ∈^n, this
oracle outputs the derivative of f along v at point x, given by
∂ f/∂ v(x) = v · f(x), as long as f is differentiable at
x. Otherwise, it outputs a special symbol .
A directional derivative oracle is weaker than a full first-order oracle, which would return the
entire gradient <cit.>, and it seems to us like a reasonable model for the
high-dimensional setting; for example, obtaining the full gradient costs n queries, rather than a
single query. This type of oracle has also been studied in optimization research, see
<cit.>. For our applications, only the sign of the result will matter, in which case we
remark that, for sufficiently smooth functions (say, functions with bounded second derivatives) each
directional derivative query may be simulated using two value queries on sufficiently close together
points.
Our definition (with value oracle) coincides with that of <cit.> when the range is R =
[0,1]. On the other hand, for general R, we keep the distance metric unmodified, whereas
<cit.> normalize it by the magnitude of R. Intuitively, we seek testers that are
efficient even when f may take large values as the dimension n grows; see
<ref> for more details.
§.§ Results and main ideas
§.§.§ Directed Poincaré-type inequalities
Our first result is a directed Poincaré inequality for Lipschitz functions f : [0,1]^n → ℝ,
which may be seen as the continuous analogue of inequality (<ref>) of
<cit.>.
Let f : [0,1]^n → ℝ be a Lipschitz function with monotone rearrangement f^*. Then
d^mono_1(f) ≈ ‖f - f^*‖_1 ≲ ‖∇^- f‖_1 .
As hinted in the statement, a crucial tool for this result is the monotone rearrangement
f^* of f. We construct f^* by a sequence of axis-aligned rearrangements R_1, …, R_n;
each R_i is the non-symmetric monotone rearrangement operator along dimension i, which
naturally generalizes the sorting operator of <cit.> to the continuous case. For each
coordinate i ∈ [n], the operator R_i takes f into an equimeasurable function R_i f that is
monotone in the i-th coordinate, at a “cost” ‖f - R_i f‖_1 that is upper bounded by
‖∂^-_i f‖_1, where ∂^-_i f := (∇^- f)_i is the directed partial
derivative along the i-th coordinate. We show that each application R_i can only decrease the
“cost” associated with further applications R_j, so that the total cost of obtaining f^* (the LHS of (<ref>)) may be upper bounded, via the triangle inequality,
by the sum of all directed partial derivatives, the RHS of (<ref>).
A technically simpler version of this argument also yields a directed Poincaré inequality for
real-valued functions on the hypergrid. We also note that
<ref> are both tight
up to constant factors.
Let f : [m]^n → ℝ and let f^* be its monotone rearrangement. Then
d^mono_1(f) ≈ ‖f - f^*‖_1 ≲ m ‖∇^- f‖_1 .
<ref> places our results in the context of existing classical and directed
inequalities. In that table and going forward, for any p,q ≥ 1 we call the inequalities
d^const_p(f)^p ≲ ‖∇f‖_q^p and
d^mono_p(f)^p ≲ ‖∇^- f‖_q^p
a classical and directed (L^p, ℓ^q)-Poincaré inequality, respectively. Note that
the L^p notation refers to the space in which we take norms, while ℓ^q refers to the geometry
in which we measure gradients. In this paper, we focus on the L^1 inequalities. See also
<ref> for an extended version of <ref> including
other related hypergrid inequalities shown in recent work.
We also note that we have ignored in our discussion the issues of robust inequalities, which
seem essential for some of the testing applications (see <cit.>), and the distinction between
inner and outer boundary, whereby some inequalities on Boolean f may be made
stronger by setting ∇f(x)=0 when f(x)=0 (see <cit.>). We refer the reader to the
original works for the strongest version of each inequality and a detailed treatment of these
issues.
§.§.§ Testing monotonicity on the unit cube and hypergrid
Equipped with the results above, we give a monotonicity tester for Lipschitz functions f : [0,1]^n
→ ℝ, and the same technique yields a tester for functions on the hypergrid as well. The testers
are parameterized by an upper bound L on the best Lipschitz constant of f in ℓ^1 geometry,
which we denote Lip_1(f) (see <ref> for a formal definition).
Both of our testers are partial derivative testers. These are algorithms which only have
access to a directional derivative oracle and, moreover, their queries are promised to be
axis-aligned vectors. In the discrete case, these are usually called edge testers
<cit.>.
There is a nonadaptive partial derivative L^1 monotonicity tester for Lipschitz functions f :
[0,1]^n → ℝ satisfying Lip_1(f) ≤ L with query complexity O(n
L/ϵ) and one-sided error.
Similarly, there is a nonadaptive partial derivative L^1 monotonicity tester for functions f
: [m]^n → ℝ satisfying Lip_1(f) ≤ L with query complexity O(nm
L/ϵ) and one-sided error.
The testers work by sampling points x and coordinates i ∈ [n] uniformly at random, and using
directional derivative queries to reject if ∂^-_i f(x) < 0. Their correctness is shown
using <ref>, which
imply that, when f is ϵ-far from monotone in L^1-distance, the total magnitude of its
negative partial derivatives must be large—and since each partial derivative is at most L by
assumption, the values ∂^-_i f(x) must be strictly negative in a set of large measure,
which the tester stands a good chance of hitting with the given query complexity.
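A sketch of such a partial derivative tester for the unit cube is given below. It is our illustration of the sampling strategy just described (the constant c and the oracle interface are ours), not a verbatim transcription of the algorithm analyzed in this paper.

```python
import random

def partial_derivative_tester(derivative_oracle, n, L, eps, c=10):
    """Sample O(nL/eps) pairs (x, i) uniformly and reject on a negative partial derivative.

    derivative_oracle(x, i) is assumed to return the i-th partial derivative of f at x,
    or None where f is not differentiable (in which case the query is simply skipped)."""
    for _ in range(int(c * n * L / eps) + 1):
        x = [random.random() for _ in range(n)]   # uniform point in [0,1]^n
        i = random.randrange(n)
        d = derivative_oracle(x, i)
        if d is not None and d < 0:
            return "reject"
    return "accept"
```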
§.§.§ Testing monotonicity on the line
The results above, linking a Poincaré-type inequality with a monotonicity tester that uses partial
derivative queries and has linear dependence on n, seem to suggest a close parallel with the case
of the edge tester on the Boolean cube <cit.>. On the other hand, we also show a strong
separation between Hamming and L^1 testing. Focusing on the simpler problem of monotonicity
testing on the line, we show that the tight query complexity of L^1 monotonicity testing
Lipschitz functions grows with the square root of the size of the (continuous or discrete) domain:
There exist nonadaptive L^1 monotonicity testers for Lipschitz functions f : [0,m] → ℝ
and f : [m] → ℝ satisfying Lip_1(f) ≤ L with query complexity O(√(mL/ϵ)). The testers use value queries and have one-sided error.
This result (along with the near-tight lower bounds in <ref>) is in contrast
with the case of Hamming testing functions f : [m] → ℝ, which has sample complexity
Θ(log m) <cit.>. Intuitively, this difference arises because a
Lipschitz function may violate monotonicity with rate of change L, so the area under the curve may
grow quadratically on violating regions. The proof is in fact a reduction to the Hamming case, using
the Lipschitz assumption to establish a connection between the L^1 and Hamming distances to
monotonicity.
§.§.§ Lower bounds
We give two types of lower bounds: under no assumptions about the tester and for constant n, we
show that the dependence of <ref> on L/ϵ is close to optimal[Note
that one may always multiply the input values by 1/L to reduce the problem to the case with
Lipschitz constant 1 and proximity parameter ϵ/L, so this is the right ratio to look
at.]. We give stronger bounds for the special case of partial derivative testers (such as the ones
from <ref>), essentially showing that our analysis of the partial derivative tester
is tight.
Let n be a constant. Any L^1 monotonicity tester (with two-sided error, and adaptive value
and directional derivative queries) for Lipschitz functions f : [0,1]^n → ℝ satisfying
Lip_1(f) ≤ L requires at least Ω((L/ϵ)^{n/(n+1)}) queries.
Similarly, any L^1 monotonicity tester (with two-sided error and adaptive queries) for
functions f : [m]^n → ℝ satisfying Lip_1(f) ≤ L requires at least Ω(
min{ (mL/ϵ)^{n/(n+1)}, m^n }) queries.
Notice that the bounds above cannot be improved beyond logarithmic factors, due to the upper bounds
for the line in <ref>. It also follows that adaptivity (essentially) does not help
with L^1 monotonicity testing on the line, matching the situation for Hamming testing
<cit.>.
<ref> is obtained via a “hole” construction, which hides a
non-monotone region of f inside an ℓ^1-ball B of radius r. We choose r such that the
violations of monotonicity inside B are large enough to make f ϵ-far from monotone, but
at the same time, the ball B is hard to find using few queries. However, this construction has
poor dependence on n.
To lower bound the query complexity of partial derivative testers with better dependence on n, we
employ a simpler “step” construction, which essentially chooses a coordinate i and hides a small
negative-slope region on every line along coordinate i. These functions are far from monotone, but
a partial derivative tester must correctly guess both i and the negative-slope region to detect
them. We conclude that <ref> is optimal for partial derivative testers on the unit
cube, and optimal for edge testers on the hypergrid for constant ϵ and L:
Any partial derivative L^1 monotonicity tester for Lipschitz functions f : [0,1]^n → ℝ
satisfying Lip_1(f) ≤ L (with two-sided error and adaptive queries) requires at least
Ω(nL/ϵ) queries.
For sufficiently small constant ϵ and constant L, any partial derivative L^1
monotonicity tester for functions f : [m]^n → ℝ satisfying Lip_1(f) ≤ L (with
two-sided error and adaptive queries) requires at least Ω(nm) queries.
<ref> summarizes our upper and lower bounds for testing monotonicity on the unit cube
and hypergrid, along with the analogous Hamming testing results for intuition and bounds for L^1
testing from prior works. See
<ref> for a discussion and
details of how prior works imply the results in that table, since to our knowledge the problem of
L^1 monotonicity testing parameterized by the Lipschitz constant has not been explicitly studied
before. See also <ref> for a broader overview of prior works on a spectrum of
monotonicity testing models.
§.§ Discussion and open questions
§.§.§ Stronger directed Poincaré inequalities?
Classical Poincaré inequalities are usually of the ℓ^2 form, which seems natural due to
basis independence. On the other hand, in the directed setting, the weaker ℓ^1 inequalities (as
in <cit.> and
<ref>) have more
straightforward proofs than ℓ^2 counterparts such as <cit.>. A perhaps related
observation is that monotonicity is not a basis-independent concept, since it is defined in
terms of the standard basis. It is not obvious whether directed ℓ^2 inequalities ought to hold
in every (real-valued, continuous) setting. Nevertheless, in light of the parallels and context
established thus far, we are hopeful that such an inequality does hold. Otherwise, we believe that the
reason should be illuminating. For now, we conjecture:
For every Lipschitz function f : [0,1]^n → ℝ, it holds that
d^mono_1(f) ≲ ‖∇^- f‖_2 .
Accordingly, we also ask whether an L^1 tester with O(√(n)) complexity exists, presumably
with a dependence on the Lip_2(f) constant rather than Lip_1(f), since ℓ^2 is the relevant
geometry above.
§.§.§ Query complexity bounds
Our lower bounds either have weak dependence on n, or only apply to a specific family of
algorithms (partial derivative testers). Previous works have established tester-independent lower
bounds with strong dependence on n by using reductions from communication complexity
<cit.>, whose translation to the continuous setting is not obvious[Note that
there is no obvious reduction from testing on the hypergrid to testing on the unit cube—one idea
is to simulate the unit cube tester on a multilinear interpolation of the function defined on the
hypergrid, but the challenge is that simulating each query to the unit cube naively requires an
exponential number of queries to the hypergrid.], by reduction to comparison-based testers
<cit.>, whose connection to the L^1 testing setting seems less immediate, or directly via a
careful construction <cit.>. We believe that finding strong tester-independent lower bounds
for L^1 testing Lipschitz functions on the unit cube is an interesting direction for further
study.
We also remark that even a tight lower bound matching <ref> may not rule out
testers with better dependence on n if, for example, such a tester were parameterized by
Lip_2(f), which can be a factor of √(n) larger than Lip_1(f). We view the possibility of
better testers on the unit cube, or otherwise a conceptual separation with <cit.>, as an
exciting direction for future work.
§.§.§ Relation to prior work on L^p-testing
<cit.> initiated the systematic study of L^p-testing and, most relevant to the present
work, established the first (and, to our knowledge, only) results on L^p testing of the
monotonicity property, on the hypergrid and on the discrete line. While our models are broadly
compatible, a subtle but crucial distinction must be explained.
<cit.> focused their exposition on the case of functions f : Ω→ [0,1], and in this
regime, L^1 testing can only be easier than Hamming testing, which they show via a reduction based
on Boolean threshold functions. On the other hand, for functions with other ranges, say f : Ω → [a, b], their definition normalizes the notion of distance by a factor of 1/(b-a). In
our terminology, letting r := b-a and g := f/r, it follows that d_1(g) = d_1(f)/r,
so testing f with proximity parameter ϵ reduces to testing g with proximity parameter
ϵ/r. For Hamming testers with query complexity that depends linearly on 1/ϵ, this
amounts to paying a factor of r in the reduction to the Boolean case[This factor can also
be tracked explicitly in the characterization of the L^1 distance to monotonicity of
<cit.>: it arises in Lemmas 2.1 and 2.2, where an integral from 0 to 1 must be changed to
an integral from a to b, so the best threshold function is only guaranteed to be
ϵ/r-far from monotone.]. This loss is indeed necessary, because by the same reasoning,
testing g with proximity parameter ϵ reduces to testing f with proximity parameter r
ϵ. Therefore the problems of testing f with proximity parameter ϵ and testing
f/r with proximity parameter ϵ/r have the same query complexity.
In this work, we do not normalize the distance metric by r; we would like to handle functions f
that may take large values as the dimension n grows, as long as f satisfies a Lipschitz
assumption, and our goal is to beat the query complexity afforded by the reduction to the Boolean
case. We derive these benchmarks by assuming that the input f is Lipschitz, and inferring an upper
bound on r based on the Lipschitz constant and the size of the domain. Combined with the hypergrid
tester of <cit.> and a discretization argument for the unit cube inspired by
<cit.>, we establish benchmarks for our testing problem. See <ref> for
details.
With the discussion above in mind, it is instructive to return to <ref>. We note that
our upper bounds have polynomially smaller dependence on n than the benchmarks, suggesting that
our use of the Lipschitz assumption—via the directed Poincaré inequalities in
<ref>—exploits
useful structure underlying the monotonicity testing problem (whereas the benchmark testers must
work for every function with bounded range, not only the Lipschitz ones). Our lower bounds introduce
an almost-linear dependence on the hypergrid length m; intuitively, this dependence is not implied
by the previous bounds in <cit.> because those construct the violations of
monotonicity via Boolean functions, whereas our constructions exploit the fact that a Lipschitz
function can “keep growing” along a given direction, which exacerbates the L^1 distance to
monotonicity in the region where that happens. Our lower bounds for partial derivative testers show
that the analysis of our algorithms is essentially tight, so new (upper or lower bound) ideas are
required to establish the optimal query complexity for arbitrary testers.
*On the choice of L^1 distance and Lipschitz assumption. We briefly motivate our
choice of distance metric and Lipschitz assumption. For continuous range and domain, well-known
counterexamples rule out testing with respect to Hamming distance: given any tester with finite
query complexity, a monotone function may be made far from monotone by arbitrarily small, hard to
detect perturbations. Testing against L^1 distance is then a natural choice, since this metric
takes into account the magnitude of the change required to make a function monotone (<cit.>
also discuss connections with learning and approximation theory). However, an arbitrarily small
region of the input may still have disproportionate effect on the L^1 distance if the function is
arbitrary, so again testing is infeasible. Lipschitz continuity seems like a natural enough
assumption which, combined with the choice of L^1 distance, makes the problem tractable. Another
benefit is that Lipschitz functions are differentiable almost everywhere by Rademacher's theorem, so
the gradient is well-defined almost everywhere, which enables the connection with Poincaré-type
inequalities.
*Organization. <ref> introduces definitions and conventions that will be
used throughout the paper. In <ref> we prove our directed Poincaré inequalities on
the unit cube and hypergrid, and in <ref> we give our L^1 monotonicity testers for
these domains. <ref> gives the upper bound for testing functions on the line, and in
<ref> we prove our lower bounds. Finally, in <ref> we give a
broader overview of prior works on monotonicity testing for the reader's convenience.
§ PRELIMINARIES
In this paper, ℕ denotes the set of strictly positive integers {1, 2, …}. For m ∈ ℕ, we write [m] to denote the set {i ∈ ℕ : i ≤ m}. For any c ∈ ℝ, we write
c^+ for max{0, c} and c^- for -min{0, c}. We denote the closure of an open set B
⊂ ℝ^n by B̄.
For a (continuous or discrete) measure space (Ω, Σ, ν) and measurable function f :
Ω → ℝ, we write ∫_Ω f dν for the Lebesgue integral of f over this
space. Then for p ≥ 1, the space L^p(Ω, ν) is the set of measurable functions f such
that |f|^p is Lebesgue integrable, i.e. ∫_Ω |f|^p dν < ∞, and we
write the L^p norm of such functions as ‖f‖_p = ‖f‖_{L^p(ν)} = (∫_Ω |f|^p dν)^{1/p}. We will write ν to denote the Lebesgue measure when Ω ⊂ ℝ^n is a continuous domain (in which case we will simply write L^p(Ω) for
L^p(Ω, ν)) and the counting measure when Ω ⊂ ℤ^n is a
discrete domain, and reserve μ for the special case of probability measures.
§.§ Lipschitz functions and L^p distance
We first define Lipschitz functions with respect to a choice of ℓ^p geometry.
Let p ≥ 1. We say that f : Ω → ℝ is (ℓ^p, L)-Lipschitz if,
for every x,y ∈ Ω, |f(x)-f(y)| ≤ L ‖x-y‖_p. We say that f is Lipschitz if
it is (ℓ^p, L)-Lipschitz for some L (in which case this also holds for any other choice of
ℓ^q), and in this case we denote by Lip_p(f) the best possible Lipschitz constant:
Lip_p(f) := inf_L { L : f is (ℓ^p, L)-Lipschitz } .
It follows that Lip_p(f) ≤ Lip_q(f) for p ≤ q.
We now formally define L^p distances, completing the definition of L^p-testers from
<ref>.
Let p ≥ 1, let R ⊆ ℝ, and let (Ω, Σ, μ) be a probability space.
For a property 𝒫 ⊆ L^p(Ω,μ) of functions g : Ω → R and function f :
Ω → R ∈ L^p(Ω,μ), we define the distance from f to 𝒫 as d_p(f, 𝒫) :=
inf_{g ∈ 𝒫} d_p(f,g), where
d_p(f,g) := ‖f-g‖_{L^p(μ)}
= 𝔼_{x ∼ μ}[ |f(x)-g(x)|^p ]^{1/p} .
For p=0, we slightly abuse notation and, taking 0^0 = 0, write d_0(f,g) for the Hamming
distance between f and g weighted by μ (and 𝒫 may be any set of measurable functions
on (Ω,Σ,μ)).
In our applications, we will always take μ to be the uniform distribution over
Ω[More precisely: when Ω = [0,1]^n, μ will be the Lebesgue measure on
Ω (with associated σ-algebra Σ), and when Ω = [m]^n, μ will be the
uniform distribution over Ω (with the power set of Ω as the σ-algebra
Σ).]. As a shorthand, when (Ω,Σ,μ) is understood from the context and R = ℝ, we will write
* d^const_p(f) := d_p(f, 𝒫^const) where
𝒫^const := { f : Ω → ℝ ∈ L^p(Ω,μ) : f = c, c ∈ ℝ }; and
* d^mono_p(f) := d_p(f, 𝒫^mono) where
𝒫^mono := { f : Ω → ℝ ∈ L^p(Ω,μ) : f is monotone}.
Going forward, we will also use the shorthand d_p(f) := d^mono_p(f).
§.§ Directed partial derivatives and gradients
We first consider functions on continuous domains. Let B be an open subset of ℝ^n, and let f
: B → ℝ be Lipschitz. Then by Rademacher's theorem f is differentiable almost everywhere in
B. For each x ∈ B where f is differentiable, let ∇f(x) = (∂_1 f(x), …,
∂_n f(x)) denote its gradient, where ∂_i f(x) is the partial derivative of f
along the i-th coordinate at x. Then, letting ∂^-_i := min{0, ∂_i}, for
every x where f is differentiable we have ∂^-_i f(x) = -( ∂_i f(x)
)^-. We call ∂^-_i the directed partial derivative operator in direction i.
Then we define the directed gradient operator by ∇^- := (∂^-_1, …,
∂^-_n), again defined on every point x where f is differentiable.
Now considering the hypergrid domains, let f : [m]^n → ℝ. Fix x ∈ [m]^n and i ∈ [n],
and write e_i for the i-th basis vector, i.e., e_i takes value 1 in its i-th component and
0 elsewhere. We then define the (discrete) partial derivative of f along the i-th coordinate
at x by ∂_i f(x) := f(x + e_i) - f(x) if x_i < m, and ∂_i f(x) := 0
if x_i = m. We then define its discrete gradient by ∇ := (∂_1, …,
∂_n). Their directed counterparts are defined as above: ∂^-_i := min{0,
∂_i} and ∇^- := (∂^-_1, …, ∂^-_n).
Note that this definition of the discrete gradient on the hypergrid is slightly different
from how we introduced the discrete gradient on the Boolean cube in the opening (inequality
(<ref>)) and its use in <ref>, where we allowed each
edge (x, y) to “contribute” to both ∂_i f(x) and ∂_i f(y). In contrast, the
definition above (which we will use going forward) only allows the “contribution” to ∂_i
f(x), since on domain [m]^n with m=2, the point y falls under the case y_i = m, so
∂_i f(y) = 0. The definition we choose seems more natural for the hypergrid settings,
but we also remark that for ℓ^1 inequalities, the choice does not matter up to constant factors
(each edge is counted once or twice). For ℓ^2 inequalities, this choice is related to the
issues of inner/outer boundaries and robust inequalities <cit.>.
§ DIRECTED POINCARÉ INEQUALITIES FOR LIPSCHITZ FUNCTIONS
In this section, we establish
<ref>. We start with
the one-dimensional case, functions on the line, and then generalize to higher dimensions. In
each subsection, we will focus our presentation on the setting where the domain is continuous
(corresponding to our results for the unit cube), and then show how the same proof strategy (more
easily) yields analogous results for discrete domains (corresponding to our results for the
hypergrid).
§.§ One-dimensional case
Let m > 0, let I := (0,m), and let f : I̅ → ℝ be a measurable function. We
wish to show that ‖f-f^*‖_1 ≲ m ‖∂^- f‖_1, where f^* is the monotone
rearrangement of f. We first introduce the monotone rearrangement, and then show this inequality
using an elementary calculus argument.
§.§.§ Monotone rearrangement
Here, we introduce the (non-symmetric, non-decreasing) monotone rearrangement of a one-dimensional
function. We follow the definition of <cit.>, with the slight modification that we
are interested in the non-decreasing rearrangement, whereas most of the literature usually
favours the non-increasing rearrangement. The difference is purely syntactic, and our choice more
conveniently matches the convention in the monotonicity testing literature. Up to this choice, our
definition also agrees with that of <cit.>, and we refer the reader to these two
texts for a comprehensive treatment.
We define the (lower) level sets of f : I̅ → ℝ as the sets
I_c := { x ∈ I̅ : f(x) ≤ c }
for all c ∈ ℝ. For nonempty measurable S ⊂ ℝ of finite measure,
the rearrangement of S is the set
S^* := [ 0, ν(S) ]
(recall that ν stands for the Lebesgue measure here),
and we define ∅^* := ∅. For a level set I_c, we write
I_c^* to mean (I_c)^*.
The monotone rearrangement of f is the function f^* : I̅ → ℝ given by
f^*(x) := inf{ c ∈ ℝ : x ∈ I_c^* } .
Note that f^* is always a non-decreasing function.
We note two well-known properties of the monotone rearrangement:
equimeasurability and order preservation. Two functions f, g are called equimeasurable if
ν{f ≥ c} = ν{g ≥ c} for every c ∈. A mapping u ↦ u^* is called
order preserving if f(x) ≤ g(x) for all x ∈I implies f^*(x) ≤ g^*(x)
for all x ∈I. See <cit.> for a proof of the
following:
Let f : I→ be a measurable function. Then f and f^* are equimeasurable.
The mapping f ↦ f^* is order preserving.
§.§.§ Absolutely continuous functions and the one-dimensional Poincaré inequality
Let f : I → ℝ be absolutely continuous. It follows that f has a derivative
∂f almost everywhere (outside a set of measure zero),
that ∂f ∈ L^1(I) (its derivative is Lebesgue integrable), and that
f(x) = f(0) + ∫_0^x ∂f(t) dt
for all x ∈ I.
It also follows that ∂^- f ∈ L^1(I).
We may now show our one-dimensional inequality:
Let f : I → ℝ be absolutely continuous. Then
‖f - f^*‖_1 ≤ 2m ‖∂^- f‖_1.
Let S { x ∈I : f^*(x) > f(x) },
and note that S is a measurable set because f, f^* are measurable functions
(the latter by <ref>).
Moreover, since f and f^* are equimeasurable (by the same result),
we have ∫ f ν = ∫ f^* ν and therefore
f-f^*1 = ∫_I *f-f^*ν
= ∫_S (f^*-f) ν + ∫_I ∖ S (f-f^*) ν
= ∫_S (f^*-f) ν +
( ∫_I (f-f^*) ν - ∫_S (f-f^*) ν)
= 2 ∫_S (f^*-f) ν .
Hence our goal is to show that
∫_S (f^*-f) ν≤ m ∂^- f1 .
Let x ∈I. We claim that there exists x' ∈ [0,x] such that f(x') ≥ f^*(x).
Suppose this is not the case. Then since f is continuous on [0,x], by the extreme value
theorem it attains its maximum and therefore there exists c < f^*(x) such that
f(y) ≤ c for all y ∈ [0,x]. Thus [0,x] ⊆I_c, so
ν(I_c) ≥ x and hence x ∈I_c^*. Then, by
<ref>, f^*(x) ≤ c < f^*(x), a contradiction. Thus the claim is
proved.
Now, let x ∈ S and fix some x' ∈ [0,x] such that f(x') ≥ f^*(x). Since f is
absolutely continuous, we have
f^*(x) - f(x)
≤ f(x') - f(x)
= -∫_x'^x ∂ f(t) t
≤ -∫_0^m ∂^- f(t) t
= ∂^- f1 .
The result follows by applying this estimate to all x:
∫_S (f^*-f) ν≤∫_S ∂^- f1ν
= ν(S) ∂^- f1≤ m ∂^- f1 .
§.§.§ Discrete case
Let m ∈ ℕ and let I := [m]. We may define the monotone rearrangement f^* : I → ℝ
of f : I → ℝ as in <ref>, by identifying the discrete domain I = [m] with its
continuous counterpart and writing S^* := [|S|] for each finite S ⊂ ℕ. More directly, f^* is the
function such that f^*(1) ≤ f^*(2) ≤ … ≤ f^*(m) is the sorted sequence of the
values f(1), f(2), …, f(m). It is easy to show that the directed version of
<ref> holds, and in fact one may simply repeat the proof of that lemma.
Let f : [m] → ℝ. Then ‖f-f^*‖_1 ≤ 2m ‖∂^- f‖_1.
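As a numerical illustration (ours, not needed for the proof), the discrete one-dimensional inequality can be checked directly: the rearrangement is obtained by sorting, and the directed derivative only charges the decreasing steps.

import numpy as np

rng = np.random.default_rng(0)
m = 50
f = rng.standard_normal(m).cumsum()          # an arbitrary function on [m]
f_star = np.sort(f)                          # monotone rearrangement: sorting
lhs = np.abs(f - f_star).sum()               # ||f - f^*||_1
neg_steps = np.minimum(0.0, np.diff(f))      # directed derivative (0 at the last point)
rhs = 2 * m * np.abs(neg_steps).sum()        # 2m ||d^- f||_1
assert lhs <= rhs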
§.§ Multidimensional case
In the continuous case, we ultimately only require an inequality on the unit cube [0,1]^n.
However, we will first work in slightly more generality and consider functions defined on a
box in ^n, defined below. This approach makes some of the steps more transparent, and
also gives intuition for the discrete case of the hypergrid.
Let a ∈^n_> 0. The box of size a is the closure B⊂^n
of B = (0, a_1) ×…× (0, a_n).
Going forward, B⊂^n will always denote such a box.
*Notation. For x ∈^n, y ∈ and i ∈ [n], we will use the notation
x^-i to denote the vector in ^[n] ∖{i} obtained by removing the i-th
coordinate from x (note that the indexing is not changed),
and we will write (x^-i, y) as a shorthand for the vector
(x_1, …, x_i-1, y, x_i+1, …, x_n) ∈^n. We will also write x^-i directly
to denote any vector in ^[n] ∖{i}.
For function f : B→ and x^-i∈^[n] ∖{i},
we will write f_x^-i for
the function given by f_x^-i(y) = f(x^-i, y) for all (x^-i, y) ∈B.
For any set D ⊆ ℝ^n, we will denote by D^-i the projection
{x^-i : x ∈ D}, and extend this notation in the natural way to more indices, D^-i-j.
Let f : B → ℝ be a measurable function and let i ∈ [n].
The rearrangement of f in direction i is the function R_i f : B → ℝ
given by
(R_i f)_x^-i := ( f_x^-i)^*
for all x^-i ∈ (B)^-i.
We call each R_i the rearrangement operator in direction i.
We may put (<ref>) in words as follows: on each line in
direction i determined by point x^-i, the restriction of R_i f to that line is the
monotone rearrangement of the restriction of f to that line.
Let B be the box of size a ∈ ℝ^n, and let f : B → ℝ be Lipschitz
continuous. Then for each i ∈ [n],
‖f - R_i f‖_1 ≤ 2 a_i ‖∂_i^- f‖_1 .
Since f is Lipschitz continuous, each f_x^-i : [0,a_i] → is Lipschitz continuous
and a fortiori absolutely continuous. The result follows from
<ref>, using Tonelli's theorem to choose the order of integration.
A key ingredient in our multi-dimensional argument is that the rearrangement operator preserves
Lipschitz continuity:
If f : B→ is Lipschitz continuous (with Lipschitz constant L), then
R_i f is Lipschitz continuous (with Lipschitz constant 2L).
We are now ready to define the (multidimensional) monotone rearrangement f^*:
Let f : B→ be a measurable function. The monotone rearrangement
of f is the function
f^* R_n R_n-1… R_1 f .
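On a discrete hypergrid the operators R_i and the rearrangement f^* have a one-line implementation, which we sketch below (our own illustration): R_i sorts the values along every line in direction i, and f^* applies R_1, …, R_n in order.

import numpy as np

def R(f, i):
    """Rearrangement operator in direction i: sort every line along axis i."""
    return np.sort(f, axis=i)

def monotone_rearrangement(f):
    """f^* = R_n ... R_1 f, applying the axes in increasing order."""
    out = f
    for i in range(f.ndim):
        out = R(out, i)
    return out

def is_monotone(f):
    """Coordinate-wise monotonicity check on the hypergrid."""
    return all(np.all(np.diff(f, axis=i) >= 0) for i in range(f.ndim))

rng = np.random.default_rng(1)
f = rng.random((6, 6, 6))
assert is_monotone(monotone_rearrangement(f))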
We first show that f^* is indeed a monotone function:
Let f : B→ be Lipschitz continuous. Then f^* is monotone.
Say that g : B→ is monotone in direction i if g_x^-i is
non-decreasing for all x^-i∈(B)^-i. Then g is monotone if and
only if it is monotone in direction i for every i ∈ [n]. Note that R_i f is monotone in
direction i by definition of monotone rearrangement. Therefore, it suffices to prove that if
f is monotone in direction j, then R_i f is also monotone in direction j.
Suppose f is monotone in direction j, and
suppose i < j without loss of generality.
Let a ∈^n be the size of B.
Let x^-j∈(B)^-j and
0 ≤ y_1 < y_2 ≤ a_j, so that (x^-j, y_1), (x^-j, y_2) ∈B.
We need to show that (R_i f)(x^-j, y_1) ≤ (R_i f)(x^-j, y_2).
Let I_i [0, a_i].
For each k ∈{1,2}, let g_k : I_i → be given by
g_k(z)
f(x_1, …, x_i-1, z, x_i+1, …, x_j-1, y_k, x_j+1, …, x_n) .
Note that
g_k^*(z) =
(R_i f)(x_1, …, x_i-1, z, x_i+1, …, x_j-1, y_k, x_j+1, …, x_n)
for every z ∈I_i, and therefore our goal is to show that
g_1^*(x_i) ≤ g_2^*(x_i). But f being monotone in direction j means that g_1(z) ≤
g_2(z) for all z ∈I_i, so by the order preserving property
(<ref>) of the monotone rearrangement we get that
g_1^*(x_i) ≤ g_2^*(x_i), concluding the proof.
It is well-known that the monotone rearrangement is a non-expansive operator. Actually a stronger
fact holds, as we note below.
Let m > 0 and let f, g ∈ L^1[0,m]. Then f^*, g^* satisfy
∫_[0,m] ( f^* - g^* )^- dν ≤ ∫_[0,m] ( f - g )^- dν
and
∫_[0,m] |f^* - g^*| dν ≤ ∫_[0,m] |f - g| dν .
The result above is stated for functions on the interval. Taking the integral over the box B and
repeating for each operator R_i yields the non-expansiveness of our monotone rearrangement
operator, as also noted by <cit.>:
Let f, g ∈ L^1(B). Then ‖f^* - g^*‖_1 ≤ ‖f - g‖_1.
We now show that the rearrangement operator can only make the norm of the directed partial derivatives
smaller, that is, it can only decrease the violations of monotonicity; this is the key step of the argument.
Let f : B → ℝ be Lipschitz continuous and let i, j ∈ [n].
Then ‖∂^-_j (R_i f)‖_1 ≤ ‖∂^-_j f‖_1.
We may assume that i ≠ j, since otherwise the LHS is zero.
We will use the following convention for variables names:
w ∈^n will denote points in B;
z ∈^[n] ∖{i,j} will denote points in B^-i-j;
x ∈ will denote points in (0, a_i) (indexing the i-th dimension); and
y ∈ will denote points in (0, a_j) (indexing the j-th dimension).
For each i ∈ [n], let e_i denote the i-th basis vector.
Since f is Lipschitz, so is R_i f by <ref>. By Rademacher's
theorem, these functions are differentiable almost everywhere. Therefore, let
D ⊆ B be a measurable set such that f and R_i f are differentiable in D and
ν(D) = ν(B). We have
∂^-_j (R_i f)1 = ∫_D *∂^-_j (R_i f)ν
= ∫_D [
lim_h → 0( (R_i f)(w + h e_j) - (R_i f)(w)/h)^-
] ν(w)
(BC1)=lim_h → 0∫_D ( (R_i f)(w + h e_j) - (R_i f)(w)/h)^- ν(w)
(D1)=lim_h → 0∫_B ( (R_i f)(w + h e_j) - (R_i f)(w)/h)^- ν(w)
(T1)=lim_h → 0∫_B^-i-j∫_(0,a_j)∫_(0,a_i)( (R_i f)(z, y+h, x) - (R_i f)(z, y, x)/h)^-
ν(x) ν(y) ν(z)
≤lim_h → 0∫_B^-i-j∫_(0,a_j)∫_(0,a_i)( f(z, y+h, x) - f(z, y, x)/h)^-
ν(x) ν(y) ν(z)
(T2)=lim_h → 0∫_B ( f(w + h e_j) - f(w)/h)^- ν(w)
(D2)=lim_h → 0∫_D ( f(w + h e_j) - f(w)/h)^- ν(w)
(BC2)=∫_D [
lim_h → 0( f(w + h e_j) - f(w)/h)^-
] ν(w)
= ∫_D *∂^-_j fν
= ∂^-_j f1 .
Equalities (BC1) and (BC2) hold by the bounded convergence theorem, which applies because the
difference quotients are uniformly bounded by the Lipschitz constants of R_i f and f
(respectively), and because R_i f and f are differentiable in D (which gives pointwise
convergence of the limits).
Equalities (D1) and (D2) hold again by the uniform
boundedness of the difference quotients, along with the fact that ν(B ∖ D) = 0.
Equalities (T1) and (T2) hold by Tonelli's theorem.
Finally, the inequality holds by <ref>,
since (R_i f)(z, y+h, ·) is the monotone rearrangement of f(z, y+h, ·) and
(R_i f)(z, y, ·) is the monotone rearrangement of f(z, y, ·).
We are now ready to prove our directed (L^1, ℓ^1)-Poincaré inequality.
Let B be the box of size a ∈ ℝ^n and let f : B → ℝ be Lipschitz
continuous. Then
‖f - f^*‖_1 ≤ 2 ∑_i=1^n a_i ‖∂^-_i f‖_1 .
We have
f - f^*1 ≤∑_i=1^n R_i-1… R_1 f - R_i … R_1 f1 (Triangle inequality)
≤ 2 ∑_i=1^n a_i ∂^-_i (R_i-1… R_1 f)1 (<ref>)
≤ 2 ∑_i=1^n a_i ∂^-_i f1 (<ref>) .
Setting B = (0,1)^n yields the inequality portion of
<ref>:
Let B = (0,1)^n and let f : B → ℝ be Lipschitz continuous. Then
‖f-f^*‖
= ‖f - f^*‖_1 ≤ 2 ∫_B ‖∇^- f‖_1 dν
= 2 ‖∇^- f‖_1 .
To complete the proof of <ref>, we need to show that
d_1(f) ≈ ‖f-f^*‖, i.e., that the monotone rearrangement is "essentially optimal"
as a target monotone function for f. The inequality d_1(f) ≤ ‖f-f^*‖ is clear from
the fact that f^* is monotone. The inequality in the other direction follows from the
non-expansiveness of the rearrangement operator, with essentially the same proof as that of
<cit.> for the Boolean cube:
Let f : [0,1]^n → ℝ be Lipschitz continuous. Then ‖f-f^*‖ ≤ 2d_1(f).
Let g ∈ L^1([0,1]^n) be any monotone function. It follows that g^* = g. By
<ref>, we have that f^*-g^*1≤f-g1. Using
the triangle inequality, we obtain
f-f^*1≤f-g1 + g-f^*1 = f-g1 + f^*-g^*1≤ 2f-g1 .
The claim follows by taking the infimum over the choice of g.
*Tightness of the inequality.
To check that <ref> is tight up to constant factors, it
suffices to take the linear function f : [0,1]^n → ℝ given by f(x) = 1-x_1 for all x ∈
[0,1]^n. Then f^* is given by f^*(x) = x_1, so ‖f-f^*‖ = 1/2 while ‖∇^- f‖_1 =
1, as needed.
§.§.§ Discrete case
The proof above carries over to the case of the hypergrid almost unmodified, as we now outline. We
now consider functions f : [m]^n →, so the box B is replaced with [m]^n and its
dimensions a_i are all replaced with the length m of the hypergrid. We define the rearrangement
in direction i, R_i f, as in <ref> by sorting the restrictions
of f to each line along direction i. We also define f^* as in
<ref> by subsequent applications of each operator R_i. Then
<ref> carries over by applying the one-dimensional
<ref>, and the proof of <ref>
carries over unmodified.
The non-expansiveness properties
<ref> also carry
over unmodified, and the key <ref> carries over with a more
immediate proof: the use of <ref> remains the
same, but rather than expanding the definition of derivative and reasoning about the limit, the
discrete argument boils down to showing the inequality
∫_[m]^n( (R_i f)(w + e_j) - (R_i f)(w) )^- ν(w)
≤∫_[m]^n( f(w + e_j) - f(w) )^- ν(w) ,
which follows immediately from the discrete version of
<ref> by summing over all lines in direction
i. Then, the hypergrid version of <ref> follows by the
same application of the triangle inequality, and we conclude the inequality portion of
<ref>:
Let f : [m]^n → ℝ. Then ‖f-f^*‖ ≤ 2m ‖∇^- f‖_1.
The discrete version of <ref> follows identically, and we state it
here for convenience:
Let f : [m]^n → ℝ. Then ‖f-f^*‖ ≤ 2d_1(f).
Finally, the tightness of <ref> is most easily verified for the
following step function: letting m be even for simplicity, define f : [m]^n → ℝ by
f(x) =
1 if x_1 ≤ m/2 ,
0 if x_1 > m/2 .
Then f^* is obtained by flipping this function along the first coordinate, or equivalently
swapping the values 1 and 0 in the definition above. Thus ‖f-f^*‖ = 1. On the other
hand, ‖∇^- f(x)‖_1 takes value 1 on exactly one point in each line along the first
coordinate, and 0 elsewhere. Hence ‖∇^- f‖_1 = 1/m, as needed.
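This calculation is easy to reproduce numerically; the sketch below (ours, not part of the argument) evaluates the two quantities compared above for the step function, using the normalized counting measure.

import numpy as np

m, n = 8, 3
x1 = np.arange(1, m + 1).reshape((m,) + (1,) * (n - 1))
f = np.broadcast_to((x1 <= m // 2).astype(float), (m,) * n)   # f(x) = 1 iff x_1 <= m/2
f_star = np.sort(f, axis=0)        # f depends only on x_1, so sorting axis 0 gives f^*
lhs = np.abs(f - f_star).mean()    # ||f - f^*|| = 1
neg = np.abs(np.minimum(0.0, np.diff(f, axis=0)))
rhs = neg.sum() / m**n             # ||grad^- f||_1 = 1/m
print(lhs, rhs)                    # prints 1.0 and 0.125 = 1/m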
§ APPLICATIONS TO MONOTONICITY TESTING
In this section, we use the directed Poincaré inequalities on the unit cube and hypergrid to show
that the natural partial derivative tester (or edge tester) attains the upper bounds from
<ref>.
Let Ω denote either [0,1]^n or [m]^n, and let q(Ω, L, ϵ) denote the query
complexity of our testers for (ℓ^1, L)-Lipschitz functions on these domains, as follows:
q([0,1]^n, L, ϵ) := Θ(nL/ϵ)
and
q([m]^n, L, ϵ) := Θ(nmL/ϵ) .
The tester is given in <ref>. It is clear that this algorithm is
a nonadaptive partial derivative tester, and that it always accepts monotone functions. It suffices
to show that it rejects with good probability when d_1(f) > ϵ.
Let Ω be one of [0,1]^n or [m]^n, and let f : Ω→ be a Lipschitz
function satisfying _1(f) ≤ L. Suppose d_1(f) > ϵ. Then
<ref> rejects with probability at least 2/3.
Continuous case. Suppose Ω = [0,1]^n.
Let D ⊆ [0,1]^n be a measurable set such that f is differentiable on D and
μ(D) = 1; such a set exists by Rademacher's theorem. For each i ∈ [n], let
S_i := { x ∈ D : ∂_i f(x) < 0 }. A standard argument gives that
each S_i ⊂ ℝ^n is a measurable set. We claim that
∑_i=1^n μ(S_i) > ϵ/(2L) .
Suppose this is not the case. By the Lipschitz continuity of f, we have that
*∂_i f(x)≤ L for every x ∈ D and i ∈ [n], and therefore
2 ∑_i=1^n *∂^-_i f≤ 2L ∑_i=1^n μ(S_i)
≤ϵ .
On the other hand, the assumption that d_1(f) > ϵ and
<ref> yield
ϵ < *f-f^*≤ 2^- f_1
= 2∑_i=1^n *∂^-_i f ,
a contradiction. Therefore the claim holds.
Now, the probability that one iteration of the tester rejects is the probability that x ∈
S_i when x and i are sampled uniformly at random. This probability is
Iteration rejects
= ∑_j=1^n ii = jxx ∈ S_j
= ∑_j=1^n 1/n·μ(S_j)
> ϵ/2nL .
Thus Θ(nL/ϵ) iterations suffice to reject with high
constant probability.
Discrete case. Suppose Ω = [m]^n. The proof proceeds the same way, but we give
it explicitly for convenience. For each i ∈ [n], let S_i { x ∈
[m]^n : ∂_i f(x) < 0}. We then claim that
∑_i=1^n μ(S_i) > ϵ/2mL .
Indeed, if this is not the case, then since *∂_i f(x)≤ L for every i and
x, we get that
2∑_i=1^n *∂^-_i f≤ 2L ∑_i=1^n μ(S_i)
≤ϵ/m .
On the other hand, the assumption that d_1(f) > ϵ and <ref>
yield
ϵ/m < 1/m·*f-f^*≤1/m· 2m ^- f_1
= 2 ∑_i=1^n *∂^-_i f ,
a contradiction. Thus the claim holds, and the probability that one iteration of the tester
rejects is
Iteration rejects
= ∑_j=1^n ii=jxx ∈ S_j
= ∑_j=1^n 1/n·μ(S_j)
> ϵ/2nmL .
Thus Θ(nmL/ϵ) iterations suffice to reject with high constant
probability.
§ L^1-TESTING MONOTONICITY ON THE LINE
In this section, we show the upper bounds for L^1 monotonicity testing on the line from
<ref>. The main idea is to reduce from L^1 testing to Hamming testing by using
the Lipschitz constant to show that, if the L^1 distance to monotonicity is large, then the
Hamming distance to monotonicity must be somewhat large as well; combined with the Hamming testers
of <cit.>, this yields an L^1 tester for the discrete line [m].
To obtain a tester for the continuous line [0,m], we furthermore apply a discretization strategy
inspired by the domain reduction and downsampling ideas from <cit.>. The idea is that,
given ϵ and L, we may impose a fine enough grid on [0,m] such that the function defined
on that grid preserves the L^1 distance to monotonicity compared to the continuous function;
again, the Lipschitz assumption is essential for this step.
In this section, we will follow the convention of denoting functions on continuous domains by f,
g, and writing hatted symbols f̂, ĝ for related functions on discrete domains; typically, f̂ will
be the discretization of a particular continuous function f, and the context will make the
correspondence clear. We will also write "f is L-Lipschitz" without specifying the ℓ^p geometry,
since all choices are equivalent in one dimension.
Let m, L, ϵ > 0 and let f : [0,m] → ℝ be an L-Lipschitz function.
Let the discretized function f̂ : [m'] → ℝ, for a suitable choice of
m' = Θ( mL / ϵ), be given by f̂(i) = f(δ i)
for each i ∈ [m'], where δ := m / m'.
Then if d_1(f) > ϵ, we have d_1(f̂) > ϵ/4.
Let m' ∈ [ cmL/ϵ, 2cmL/ϵ ] be an integer, where c is a
sufficiently large universal constant.[We may assume that mL/ϵ > 1,
otherwise the problem is trivial: the maximum L^1 distance from monotonicity attainable
by an L-Lipschitz function is (1/m) · m · (mL/2) = mL/2.
Therefore the given interval does contain an integer.] Let f̂ : [m'] → ℝ be the function given in the statement, and suppose d_1(f) > ϵ.
Let ĝ : [m'] → ℝ be the monotone rearrangement of f̂.
It is easy to check that ĝ is Lipschitz with at most the Lipschitz constant of
f̂. Let g : [0,m] → ℝ be the following
piecewise linear function whose discretization is ĝ:
for each i ∈ [m'] we set g(δ i) := ĝ(i), and g is the linear spline
induced by these points elsewhere (and constant in the segment [0,δ]).
Then clearly g is monotone, and thus d_1(f, g) > ϵ.
Moreover, g is L-Lipschitz,
since its steepest slope is the same as that of ĝ
up to the coordinate changes.[Formally, if f is L-Lipschitz,
then f̂
is L'-Lipschitz for L' = Lm/m', hence so is its monotone rearrangement ĝ.
Then since the steepest slope of g must come from two vertices of the spline, g is
Lipschitz with Lipschitz constant L'm'/m = L.] Hence, we have
ϵ < d_1(f, g)
= (1/m) ∫_0^m |f(x) - g(x)| dx
= (1/m) ∑_i=1^m' ∫_(i-1)δ^iδ |f(x) - g(x)| dx
= (1/m) ∑_i=1^m' ∫_(i-1)δ^iδ |(f(iδ) ± Lδ) - (g(iδ) ± Lδ)| dx
(Lipschitz property)
≤ (1/m) ∑_i=1^m' ∫_(i-1)δ^iδ [ |f̂(i) - ĝ(i)| + 2Lδ ] dx
= (1/m) [ 2m'Lδ^2 + δ ∑_i=1^m' |f̂(i) - ĝ(i)| ]
= 2mL/m' + (1/m') ∑_i=1^m' |f̂(i) - ĝ(i)| ≤ 2ϵ/c + d_1(f̂, ĝ) ,
where we used the notation a ± b to denote any number in the interval [a-b, a+b].
We may set c ≥ 4 so that 2ϵ/c ≤ ϵ/2. Therefore, we obtain
d_1(f̂, ĝ) > ϵ/2.
Since ĝ is the monotone rearrangement of f̂,
<ref> implies that
d_1(f̂, ĝ) ≤ 2 d_1(f̂). We conclude that
d_1(f̂) > ϵ/4, as desired.
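In code, the discretization used in the lemma is a single sampling step; the sketch below (ours, with the constant c left as a parameter) builds f̂ lazily from query access to f.

import math

def discretize(f, m, L, eps, c=4):
    """Return (m', fhat) with fhat(i) = f(delta * i) and delta = m / m'.

    f is a callable on [0, m]; m' is an integer of order c*m*L/eps. fhat is
    returned as a callable on {1, ..., m'} so it can be queried lazily."""
    m_prime = math.ceil(c * m * L / eps)
    delta = m / m_prime
    def fhat(i):                      # i in {1, ..., m'}
        return f(delta * i)
    return m_prime, fhat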
The function f̂ defined in <ref> is
ϵ-Lipschitz: since m' ≥ mL/ϵ, we have
|f̂(i) - f̂(i+1)|
= |f(δ i) - f(δ (i+1))| ≤ Lδ = Lm/m' ≤ ϵ .
Let f : [m'] → ℝ be an L'-Lipschitz function.
Then d_0(f) ≥ √(d_1(f)/(m'L')).
Let S ⊆ [m'] be a set such that 1) |S| = d_0(f) m' and 2) it suffices to
change f on inputs in S to obtain a monotone function; note that S exists by
definition of Hamming distance. Write S as the union of maximal, pairwise disjoint contiguous
intervals, S = I_1 ∪…∪ I_k.
We define a monotone function g : [m'] → as follows. For each i ∈ S, set
i^* ∈ [m'] ∖ S as follows: if there exists j ∈ [m'] ∖ S such that j >
i, pick the smallest such j; otherwise, pick the largest j ∈ [m'] ∖ S. In other
words, i^* is obtained by picking a direction (right if possible, otherwise left) and choosing
the first point outside the interval I_k that contains i. Now, define g by
g(i) = f(i) if i ∉S
f(i^*) if i ∈ S .
We first claim that g is monotone. Indeed, the sequence of values
( f(i) )_i ∈ [m'] ∖ S (taken in order of increasing i) is monotone,
by our assumption that changing f only on inputs in S yields a monotone function, and since g is
obtained by extending some of these values into flat regions, the resulting function is also monotone.
Therefore we can upper bound the L^1 distance of f to monotonicity by
d_1(f) ≤ d_1(f, g)
= 1/m'∑_i=1^m'*f(i) - g(i)
= 1/m'∑_j=1^k ∑_i ∈ I_j*f(i) - f(i^*)
= 1/m'∑_j=1^k ∑_i ∈ I_j*( f(i^*) ± L' *i-i^*) - f(i^*) (Lipschitz property)
≤L'/m'∑_j=1^k ∑_i ∈ I_j*i-i^*≤L'/m'∑_j=1^k ∑_i ∈ I_j*I_j
= L'/m'∑_j=1^k *I_j^2
≤L'/m'· |S|^2
(Since *I_1 + … + *I_k = |S|)
= L' d_0(f)^2 m' .
The claim follows.
Combining the two lemmas with the classical Hamming monotonicity tester of <cit.>, the
following theorem establishes <ref> for the continuous domain [0,m]:
There exists a nonadaptive one-sided L^1 monotonicity tester for L-Lipschitz functions f :
[0,m] → with query complexity O( √(mL/ϵ)log(
mL/ϵ) ).
The tester works as follows. It first fixes m' = Θ(mL/ϵ) as given by
<ref>. Let f̂ : [m'] → ℝ be the discretization
defined therein (f̂ is not explicitly computed upfront, but will rather
be queried as needed). The algorithm then simulates the (nonadaptive, one-sided)
monotonicity tester of <cit.> on
the function f̂ with proximity parameter
ϵ' = Θ( √(ϵ/(mL)) ) (the constant may easily be
made explicit), producing f̂(i) = f(δ i) = f(im/m') whenever the simulation queries
f̂(i). The algorithm returns the result produced by the simulated tester.
The query complexity claim follows from the fact that the tester of <cit.> has query
complexity O( 1/ϵ'log m' ).
We now show correctness. When f is monotone, so is f̂, so the algorithm will
accept since the tester of <cit.> has one-sided error. Now, suppose
d_1(f) > ϵ. Then d_1(f̂) > ϵ/4 by
<ref>. Moreover, since f̂ is ϵ-Lipschitz
by <ref>, <ref> implies that
d_0(f̂) ≥ √(d_1(f̂)/(m' ϵ))
> √(1/(4 m')) = Ω( √(ϵ/(mL)) ) .
Since this is the proximity parameter ϵ' used to instantiate the <cit.> tester,
the algorithm will reject with high constant probability, as needed.
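Putting the pieces together, the reduction in this proof can be sketched as follows (our own illustration, reusing the discretize sketch above; hamming_line_tester stands in for the tester of the cited work and is an assumed black box, as is the explicit constant in eps').

import math

def l1_tester_continuous_line(f, m, L, eps, hamming_line_tester):
    """L^1 monotonicity tester for an L-Lipschitz f : [0, m] -> R.

    hamming_line_tester(fhat, m_prime, eps_prime) is assumed to be a
    nonadaptive, one-sided Hamming monotonicity tester for functions on
    [m'], given query access to fhat."""
    m_prime, fhat = discretize(f, m, L, eps)
    eps_prime = 0.5 * math.sqrt(eps / (m * L))   # Theta(sqrt(eps/(mL))); constant arbitrary
    return hamming_line_tester(fhat, m_prime, eps_prime)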
<ref> itself also implies <ref> for the discrete domain [m].
This time, we use the Hamming tester of <cit.> to obtain a slightly more precise query
complexity bound[One may check that this refinement would have no effect in
<ref>].
There exists a nonadaptive one-sided L^1 monotonicity tester for L-Lipschitz functions
f : [m] → with query complexity
O( √(mL/ϵ)log( m ϵ/L) ) when
ϵ/L ≥ 4/m, and O(m) otherwise.
The tester sets ϵ' √(ϵ/mL), and then runs the (nonadaptive,
one-sided) Hamming monotonicity tester of <cit.> on the line [m] with proximity
parameter ϵ'. That tester has query complexity O(1/ϵ'log(ϵ' m)) when ϵ' ≥ 2/m and (trivially) O(m) otherwise, which gives
the claimed upper bounds. It remains to show correctness.
When f is monotone, the algorithm will accept since the tester of <cit.> has
one-sided error. Now, suppose d_1(f) > ϵ. Then <ref>
yields
d_0(f) > √(ϵ/mL) = ϵ' ,
so the tester of <cit.> will reject with high constant probability.
§ LOWER BOUNDS
In this section, we prove our lower bounds for testing monotonicity on the unit cube and on the
hypergrid. We first show our general lower bounds based on a “hole” construction, which hides a
monotonicity violating region inside a randomly placed ℓ^1-ball; these bounds imply near
tightness of our upper bounds for testing on the line from <ref>. Then we give our lower
bounds for partial derivative testers, which show that the analysis of our tester in
<ref> is tight.
Let Ω be one of ℝ^n or ℤ^n, let x ∈ Ω and let r > 0 be a real number.
The ℓ^1-ball of radius r centered at x is the set
B_1^n(r, x) := { y ∈ Ω : ‖x-y‖_1 ≤ r }. We will also write B_1^n(r) :=
B_1^n(r, 0).
It will be clear from the context whether the domain should be taken to be continuous or discrete,
i.e., whether B_1^n(r,c) should be understood under Ω = ℝ^n or Ω = ℤ^n.
We give the following simple bounds on the volume of continuous and discrete ℓ^1-balls. Since
we do not require particularly tight bounds, we opt for a simple formulation and elementary proof.
There exist functions c_1, c_2 : ℕ → ℝ_>0 satisfying the following. Let n ∈ ℕ.
Let Ω be one of ℝ^n or ℤ^n. Let r ∈ ℝ
satisfy r > 0 if Ω = ℝ^n, and r ≥ 1 if Ω = ℤ^n. Then
c_1(n) r^n ≤ ν( B_1^n(r) ) ≤ c_2(n) r^n .
First suppose Ω = ^n. Then we have the following formula for the area of the
ℓ^1-ball of radius r (see <cit.>):
ν( B_1^n(r) ) = (2r)^n/n! .
The result follows by letting c_1(n) ≤ 2^n / n! and c_2(n) ≥ 2^n / n!.
Now suppose Ω = ℤ^n, and assume without loss of generality that r is an integer (since
r ≥ 1, there exist integers within a factor of 2 of r both above and below). We proceed by
an inductive argument. For n=1, the volume is
ν( B_1^1 ) = 1 + 2∑_d=1^r 1 = 1 + 2r ,
so the claim holds by letting c_1(1) ≤ 2 and c_2(n) ≥ 3. Assuming the claim for some n
∈, we have
ν( B_1^n+1(r) )
= *{ x = (x_1, …, x_n, x_n+1): x_1 ≤ r }
= ∑_y=-r^r *{ x' = (x'_1, …, x'_n) : x'_1 ≤ r-y}
= ν( B_1^n(r) ) + 2∑_d=1^r ν( B_1^n(r-d) ) .
Since the last expression is at most 3r ·ν( B_1^n(r) ),
using the inductive hypothesis we conclude
ν( B_1^n+1(r) )
≤ 3r · c_2(n) r^n
= 3c_2(n) r^n+1≤ c_2(n+1) r^n+1 ,
the last inequality as long as c_2(n+1) ≥ 3c_2(n).
For the lower bound, we consider two cases. Note that r-d ≥ r/2 for at least r/2
values of d. When r ≥ 4, we have r/2≥ r/3, and then
ν( B_1^n+1(r) )
≥ c_1(n) r^n + 2∑_d=1^r c_1(n) (r-d)^n
> 2r/3· c_1(n) (r/2)^n
≥ c_1(n+1) r^n+1 ,
the last inequality as long as c_1(n+1) ≤2/3· 2^-n· c_1(n). On the
other hand, if r < 4, the bound follows easily for small enough c_1(n+1), since
ν( B_1^n+1(r) )
≥ c_1(n) r^n + 2∑_d=1^r c_1(n) (r-d)^n
> c_1(n) r^n+1/r
> c_1(n)/4 r^n+1 .
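As a quick sanity check of these bounds (ours, not needed for the argument), one can compare the exact continuous volume (2r)^n/n! with a brute-force count of the lattice points in the discrete ball.

import math
from itertools import product

def continuous_l1_ball_volume(n, r):
    """Lebesgue volume of the l^1-ball of radius r in R^n: (2r)^n / n!."""
    return (2 * r) ** n / math.factorial(n)

def discrete_l1_ball_size(n, r):
    """Number of lattice points x in Z^n with ||x||_1 <= r, for integer r >= 0."""
    return sum(1 for x in product(range(-r, r + 1), repeat=n)
               if sum(abs(c) for c in x) <= r)

for n, r in [(1, 3), (2, 3), (3, 2)]:
    print(n, r, continuous_l1_ball_volume(n, r), discrete_l1_ball_size(n, r))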
Note that the constants c_1(n) and c_2(n) in <ref> have poor dependence
on n, and in particular this is tight in the continuous case. This fact is essentially the
reason why this construction is only efficient for constant dimension n.
We now prove our tester-independent lower bounds. Note that there exists a tester for (ℓ^1,
L)-Lipschitz functions with proximity parameter ϵ if and only if there exists a tester for
(ℓ^1, 1)-Lipschitz functions with proximity parameter ϵ/L (the reduction consists of
simply rescaling the input values). Therefore it suffices to prove the theorems for the case L=1.
The following two theorems establish the continuous and discrete cases of
<ref>.
Let n ∈ be a constant. Any L^1 monotonicity tester (with two-sided error, and adaptive
value and directional derivative queries) for Lipschitz functions f : [0,1]^n →
satisfying _1(f) ≤ 1 requires at least Ω((1/ϵ)^n/n+1)
queries.
We construct a family of functions that are ϵ-far from monotone in L^1 distance such
that any deterministic algorithm cannot reliably distinguish between a function chosen uniformly
at random from this family and the constant-0 function with fewer than the announced number of
queries; then, the claim will follow from Yao's principle.
Each such function f is constructed as follows. Let c ∈ [0,1]^n be a point such that the
ball B_1^n(r, c) is completely inside [0,1]^n, for a radius r to be chosen below. Then f
takes value 0 everywhere outside B_1^n(r, c), and inside this ball, it takes value
f(x) = -r + ‖x-c‖_1
for each x ∈ B_1^n(r,c). Then Lip_1(f) = 1. We now lower bound d_1(f), its distance to
monotonicity. Fix any x' ∈ [0,1]^n-1 and consider the line of points (y, x') for y ∈
[0,1], the line along the first coordinate with remaining coordinates set to x'. Suppose
this line intersects B_1^n(r,c). Then this intersection occurs on some interval [a,b] of
y-values, and on this interval, f first decreases from f(a, x') = 0 to
f(a+b/2, x') = -b-a/2 at rate 1, and then increases at rate 1
back to f(b, x') = 0. Any monotone function g is in particular monotone over this line, and
it is easy to see that this requires total change to f proportional to the area under this
curve:
∫_0^1 *f(y,x') - g(y,x') y ≳∫_0^1 *f(y,x') y .
Now, since this holds for any line intersecting B_1^n(r,c), and the collection of such lines
gives a partition of B_1^n(r,c), the total distance between f and any monotone function g
is lower bounded (up to a constant) by the L^1-norm of f:
∫_[0,1]^n*f-gν≳∫_[0,1]^nfν ,
and since this holds for any choice of g, we conclude that
d_1(f) ≳∫_[0,1]^nfν .
We now note that this last expression is half the volume of an ℓ^1-ball in dimension n+1:
for each point x ∈ B_1^n(r,c), the contribution to the integrand is *f(x) = r -
x-c_1, corresponding to the measure of points (x,z') for all 0 ≤ z' ≤ z where z =
r - x-c_1, so that the point (x,z) ∈^n+1 satisfies (x,z) - (c,0)_1 = r. In
other words, the points (x,z') are the points of B_1^n+1(r,(c,0)) with nonnegative last
coordinate. Conversely, all such points contribute to the integral above. Therefore, since n
is a constant, using <ref> and writing ν^n+1 for the Lebesgue measure
on ^n+1, we have
d_1(f) ≳∫_[0,1]^nfν≳ν^n+1(B_1^n+1(r))
≳ r^n+1 .
We wish this last quantity to be at least Ω(ϵ), so (recalling n is a constant)
it suffices to set
r ≈ϵ^1/n+1 .
We have established that each function f, for this choice of r and any choice of c, is
ϵ-far from monotone as desired. Our family of functions from which f will be drawn
will be given by choices of c such that the balls B_1^n(r,c) are disjoint, so that each
query may only rule out one such choice (because queries outside B_1^n(r,c) take value 0).
How many disjoint balls B_1^n(r,c) can we fit inside [0,1]^n? It suffices to divide
[0,1]^n into a grid of n-dimensional cells of side 2r, each of which can contain one ball.
The number of such cells is at least (up to a constant factor)
(1/r)^n ≳ (1/ϵ)^n/n+1 .
Therefore to distinguish some f uniformly drawn from this family from the constant-0
function with constant probability, any deterministic algorithm must have query complexity at
least Ω((1/ϵ)^n/n+1).
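For concreteness, here is a small sketch (ours) of one member of this hard family, as a function of a hypothetical centre c and radius r; the distribution over instances simply places c uniformly over a grid of disjoint cells of side 2r.

import numpy as np

def hole_instance(c, r):
    """Hard instance for the continuous lower bound.

    f(x) = -r + ||x - c||_1 inside the l^1-ball B_1^n(r, c) and 0 outside;
    f is 1-Lipschitz in the l^1 metric and Omega(r^{n+1})-far from monotone."""
    c = np.asarray(c, dtype=float)
    def f(x):
        dist = np.abs(np.asarray(x, dtype=float) - c).sum()
        return min(0.0, dist - r)
    return f

f = hole_instance(c=[0.5, 0.5], r=0.2)
print(f([0.5, 0.5]), f([0.9, 0.9]))   # -0.2 at the centre, 0.0 outside the ball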
Let n ∈ be a constant.
Any L^1 monotonicity tester (with two-sided error and adaptive queries) for functions f :
[m]^n → satisfying _1(f) ≤ 1 requires at least Ω( min{
(m/ϵ)^n/n+1, m^n }) queries.
We proceed similarly to <ref>, with small changes for the
discrete setting (essentially corresponding to the requirement that r ≥ 1 in the discrete
case of <ref>).
We will again construct functions f based on balls B_1^n(r,c) for suitable choices of r
and c. For fixed r and c, f takes value 0 outside the ball and, for each x ∈
B_1^n(r,c),
f(x) = -r + x-c_1 ,
so that _1(f) = 1. Again by a line restriction argument, for any monotone function g we
have
∫_[m]^n*f-gν≳∫_[m]^nfν ,
and thus
d_1(f) ≳1/m^n∫_[m]^nfν ,
the normalizing factor due to <ref>.
When ϵ≤ 1/m^n, this construction boils down to setting f(x)=-1 at a single point
x, which requires Ω(m^n) queries to identify. Now, assume ϵ > 1/m^n.
Again we may identify the integrand of (<ref>) with points on half of
B_1^n+1(r,(c,0)). As long as r ≥ 1 and since n is a constant,
<ref> implies that
d_1(f) ≳r^n+1/m^n .
Thus to have d_1(f) ≥ϵ, it suffices (since n is a constant) to set
r ≈ m^n/n+1ϵ^1/n+1 ,
and indeed this gives r ≥ 1 since ϵ > 1/m^n.
Then, our functions f are given by choices of c placed on the hypergrid [m]^n inside
disjoint cells of side 2r, of which there are at least (up to a constant factor)
( m/r)^n ≳( m/ϵ)^n/n+1 ,
and thus any deterministic algorithm requires Ω((m/ϵ)^n/n+1)
queries to distinguish a uniformly chosen f from this family from the constant-0 function.
The construction for the partial derivative tester lower bounds is simpler: we start with a “step”
one-dimensional construction which is flat everywhere except for a small region of negative slope,
and then copy this function onto every line along a randomly chosen coordinate i. Then a
partial derivative tester must correctly guess both i and the negative-slope region to detect such
functions. The following two theorems establish the continuous and discrete cases of
<ref>.
Any partial derivative L^1 monotonicity tester for Lipschitz functions f : [0,1]^n →
satisfying _1(f) ≤ 1 (with two-sided error and adaptive queries) requires at least
Ω(n/ϵ) queries.
Let ϵ ≤ 1/6. For any z ∈ [ 1/3, 2/3-ϵ ], let
g_z : [0,1] → ℝ be the function given by
g_z(x) = ϵ if x < z ,
ϵ - (x-z) if z ≤ x ≤ z + ϵ ,
0 if x > z + ϵ .
Note that g_z is Lipschitz with Lip_1(g_z) = 1. Moreover, we claim that d_1(g_z) ≳ ϵ. Indeed, for any x ∈ [0, 1/3], we have that g_z(x) = ϵ and g_z(2/3 + x)
= 0. On the other hand, for any monotone function h : [0,1] → ℝ we must have h(x) ≤
h(2/3 + x). Thus, for any such h we have |g_z(x) - h(x)| + |g_z(2/3 + x) - h(2/3 +
x)| ≥ ϵ. Since this holds for all x ∈ [0, 1/3], we conclude that for any such
h we must have ‖g_z - h‖ ≥ ϵ/3, proving the claim.
Now, for any i ∈ [n] and z ∈ [ 1/3, 2/3-ϵ ], let
f_i,z : [0,1]^n → ℝ be given by copying g_z onto every line in direction
i, setting f_i,z(x) = g_z(x_i) for every x ∈ [0,1]^n. Note that Lip_1(f_i,z) = 1
(since its partial derivatives are 0 along the non-i coordinates), and d_1(f_i,z) ≳ ϵ
(since the lines in direction i partition the domain).
We construct a set of Ω(n/ϵ) functions f_i,z as follows. First, i can be any
of the coordinates in [n]. Then let z_1, …, z_k be given by z_j = 1/3 +
jϵ for j ∈ [k], where k = Ω(1/ϵ), such that for each j we have z_j ∈ [
1/3, 2/3-ϵ ] and, moreover, for distinct j, ℓ ∈ [k],
follows that each partial derivative query may only rule out one such f_i,z, so any partial
derivative tester that distinguishes an f_i,z chosen uniformly at random from the
constant-0 function must make at least Ω(n/ϵ) queries.
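For completeness, a small sketch (ours) of the family used in this proof; each member hides its negative slope along a single coordinate and a single short interval, so a partial derivative query reveals at most one member.

def hard_instance(i, z, eps):
    """f_{i,z}(x) = g_z(x_i) on [0,1]^n, following the construction above."""
    def g_z(t):
        if t < z:
            return eps
        if t <= z + eps:
            return eps - (t - z)
        return 0.0
    def f(x):
        return g_z(x[i])
    return f

f = hard_instance(i=0, z=0.4, eps=0.1)
print(f([0.2, 0.9]), f([0.45, 0.9]), f([0.8, 0.9]))   # 0.1, 0.05, 0.0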
The argument for the hypergrid is similar, except that the construction cannot be made to occupy an
arbitrarily small region of the domain when the domain is discrete. We opt to keep the argument
simple and give a proof for constant parameter ϵ.
For sufficiently small constant ϵ, any partial derivative L^1 monotonicity tester for
functions f : [m]^n → satisfying _1(f) ≤ 1 (with two-sided error and adaptive
queries) requires at least Ω(nm) queries.
Let m be a multiple of 3 for simplicity. For each
z ∈ {m/3+1, …, 2m/3}, define g_z : [m] → ℝ by
g_z(x) =
1 if x < z ,
0 if x ≥ z .
Then Lip_1(g_z) = 1 and, as before, we have d_1(g_z) = Ω(1). Then for each i ∈ [n]
and z ∈ {m/3+1, …, 2m/3}, we let f_i,z : [m]^n → ℝ be
given by f_i,z(x) = g_z(x_i) for each x ∈ [m]^n; it follows that Lip_1(f_i,z) = 1 and
d_1(f_i,z) = Ω(1). Note that there are Ω(nm) such functions. Moreover, each partial
derivative query may only rule out one such f_i,z, and therefore any edge tester that
distinguishes an f_i,z chosen uniformly at random from the constant-0 function must make
at least Ω(nm) queries.
§ OVERVIEW OF PRIOR WORKS ON MONOTONICITY TESTING
We first summarize results on testing monotonicity with respect to the Hamming distance.
*Boolean-valued functions. Among the early works on this problem, <cit.> gave
testers for functions on the hypergrid [m]^n with query complexities O(nlog(m)/ϵ) and
O((n/ϵ)^2); note that the latter bound is independent of m, and the query complexity of
testers with this property was subsequently improved to O((n/ϵ) log^2(n/ϵ)) by
<cit.> and to O((n/ϵ) log(n/ϵ)) by <cit.>. For functions on the
Boolean cube ^n, <cit.> gave the first o(n) tester, subsequently improved by
<cit.>, culminating in the O(√(n)/ϵ^2) tester of <cit.>, which
essentially resolved the question for nonadaptive testers. Whether adaptivity helps in monotonicity
testing is still an open question; see the lower bounds below, and also <cit.>.
Returning to hypergrid domains [m]^n, <cit.> established first testers with o(n)
query complexity and, via a domain reduction technique, also obtained o(n) testers for product
distributions on ^n (and the alternative proof of <cit.> improves the number of
samples drawn by the tester when the distribution is unknown). Subsequent works
<cit.> attained the optimal dependence on n at the cost of a dependence on m, with
upper bounds of the form O(√(n)(m)). Most recently, <cit.> gave a
tester with query complexity O(n^1/2+o(1)/ϵ^2), which is almost optimal for nonadaptive
algorithms, and again extends to product measures on ^n.
*Real-valued functions. <cit.> gave a tester with query complexity
O(log(m)/ϵ) for real-valued functions on the line [m]; the tight query complexity of
this problem was more recently shown to be Θ(log(ϵ m)/ϵ) <cit.>. As for
functions on the hypergrid [m]^n, <cit.> also gave testers for larger ranges, but
the query complexity depends on the size of the range. Then, <cit.> gave a nonadaptive tester
with one-sided error and (optimal) query complexity O(nlog(m)/ϵ).
On the Boolean cube, <cit.> gave a tester with query complexity O(
min{ r√(n)/ϵ^2, n/ϵ}) for real-valued functions f with
image size r, and showed that this is optimal (for constant ϵ) for nonadaptive testers
with one-sided error.
*Lower bounds. We briefly summarize the known lower bounds for these problems; all lower
bounds listed are for testers with two-sided error unless noted otherwise. For Boolean functions on
the Boolean cube ^n, there is a near-optimal lower bound of Ω(√(n)) for
nonadaptive testers <cit.>, which improves on prior results of <cit.>.
For adaptive testers, <cit.> gave the first polynomial lower bound of Ω(n^1/4), since improved to Ω(n^1/3) by <cit.>.
Turning to real-valued functions, <cit.> combined Ramsey theory arguments with a result of
<cit.> to show a Ω(log m) lower bound for adaptive testers on the line [m]. On the
Boolean cube, <cit.> gave a Ω(n/ϵ) nonadaptive one-sided lower bound, and
<cit.> gave an adaptive lower bound of Ω(n). On the hypergrid, <cit.> gave a
nonadaptive lower bound of Ω(nlog m) by communication complexity arguments, <cit.>
showed the optimal lower bound of Ω(nlog(m)/ϵ - log(1/ϵ)/ϵ) for
adaptive testers using Ramsey theory (which involves functions with large range), and <cit.>
gave an alternative proof of this bound that does not use Ramsey theory.
*L^p-testing. Finally, moving from Hamming testers to L^p testers, and assuming
functions with range [0,1], <cit.> (who formally introduced this model) gave nonadaptive
L^p monotonicity testers with one-sided error on the hypergrid [m]^n with query complexity
O((n/ϵ^p)log(n/ϵ^p))—note this is independent of m, bypassing the Hamming
testing lower bound—and a lower bound of Ω((1/ϵ^p)log(1/ϵ^p)) for
nonadaptive testers with one-sided error; on the line, they showed there is an O(1/ϵ^p)
nonadaptive tester with one-sided error and a matching lower bound for adaptive testers with
two-sided error. They also gave a reduction from L^p monotonicity testing to Hamming testing of
Boolean functions for nonadaptive one-sided testers, so in particular L^1 testing functions with
range [0,1] is no harder than Hamming testing functions with Boolean range.
We also remark that our problem, which is parameterized by the upper bound L on the
Lipschitz constant of input functions, lies under the umbrella of parameterized property testing,
and we refer to <cit.> for an introduction to, and results on, this type of tester.
*Acknowledgments. We thank Eric Blais for helpful discussions throughout the course of
this project, and for comments and suggestions on preliminary versions of this paper.
§ BACKGROUND ON ISOPERIMETRIC INEQUALITIES
Let us trace an extremely brief history of developments that are most relevant to our study of
isoperimetric inequalities. We start with the original work of Poincaré <cit.>, which yields
inequalities of the type ‖f - f̄‖_p ≤ C(Ω) ‖∇f‖_p, where f̄ denotes the average of f over Ω, for sufficiently smooth
domains Ω and sufficiently integrable functions f[More precisely, for f in the
appropriate Sobolev space.]. The optimal constant C(Ω), also called the Poincaré
constant of Ω, depends on properties of this domain[For example, it is
characterized by the first nontrivial eigenvalue of the Laplacian operator on smooth bounded
Ω. There are additional considerations relating the assumptions made of f on the boundary
∂Ω and the (Dirichlet or Neumann) boundary conditions associated with the Laplacian;
see <cit.> for a survey.], and often the goal is to establish the sharp constant for families
of domains Ω. <cit.> and <cit.> (see also <cit.>) showed that, for convex
domains, C(Ω) is essentially upper bounded by the diameter of Ω, and this bound is
tight in general. However, for specific structured domains such as the product domain [0,1]^n, the
diameter characterization falls short of yielding a dimension-free inequality (see also the
literature on logarithmic Sobolev inequalities <cit.>).
Making progress on this front in the discrete setting, the landmark work of Talagrand <cit.>
established inequalities like the above for domain Ω = ^n, with C = C(Ω)
independent of n, and established connections with earlier works of Margulis on graph connectivity
and Pisier on probability in Banach spaces. (More recently, Fourier-analytic proofs of Talagrand's
inequality have also been given <cit.>.) In continuous settings, similar results were first
established for the Gaussian measure in connection with the Gaussian isoperimetric inequality
<cit.>. Tying back to our present settings of interest, Bobkov and
Houdré <cit.> showed that a dimension-independent Poincaré-type inequality also holds for
product measures in ^n, including the uniform measure on [0,1]^n, as shown in
(<ref>).
As introduced in the opening, it is these dimension-free Poincaré inequalities for discrete and
continuous product measures whose directed analogues have implications for the structure of monotone
functions and therefore for property testing <cit.>. To enrich the summary laid
out in <ref>, we present additional related inequalities recently shown by
<cit.> in <ref>, and briefly explain them here. These
inequalities have unlocked algorithmic results for testing monotonicity of real-valued functions on
the Boolean cube, and Boolean-valued functions on the hypergrid, as summarized in
<ref>.
Define the vector-valued operators ϕ and ϕ^- on functions f : [m]^n → ℝ as follows:
for each x ∈ [m]^n and i ∈ [n],
(ϕ f(x))_i := 1[ ∃ y : (x ≼_i y or y ≼_i x) and f(x) ≠ f(y) ]
,
(ϕ^- f(x))_i := 1[
(∃ y : x ≼_i y, f(x) > f(y))
or
(∃ y : y ≼_i x, f(y) > f(x))
]
,
where 1[·] denotes the indicator of the event in brackets, and we write x ≼_i y if x_j = y_j for every j ≠ i, and x_i ≤ y_i. Compared to
the gradient, these operators 1) are only sensitive to the order relation between function values
(which suits the setting of Hamming testing); and 2) capture “long range” violations of
monotonicity (accordingly, the corresponding hypergrid testers are not edge testers).
See <ref> for a remark on the nuances of inner/outer boundaries and robust
inequalities.
§ UPPER BOUNDS FROM <CIT.> APPLIED TO LIPSCHITZ FUNCTIONS
In this section, we show how the L^1 monotonicity testing upper bounds from <cit.> imply
testers with query complexity O( n^2 L/ϵ) on the unit cube
and O( n^2 mL/ϵ) on the hypergrid for functions f
satisfying _1(f) ≤ L. We start with the case of the hypergrid, and first state the upper
bound of <cit.> for functions with arbitrary range of size r, which without loss of
generality (by translation invariance) we denote [0,r]:
There exists an L^1 monotonicity tester for functions f : [m]^n → [0,r] that uses O(
(rn/ϵ) log(rn/ϵ) ) value queries. The tester is nonadaptive
and has one-sided error.
As explained in <ref>, the extra factor of r compared to the bounds
stated in <cit.> accounts for the conversion between range [0,r] and range [0,1], which
affects the proximity parameter ϵ by a factor of r: testing functions with range [0,r]
for proximity parameter ϵ is equivalent to testing functions with range [0,1] for
proximity parameter ϵ/r.
We thus obtain the following L^1 monotonicity tester for Lipschitz functions on the hypergrid:
There is an L^1 monotonicity tester for functions f : [m]^n → ℝ satisfying Lip_1(f)
≤ L that uses O( (n^2 mL/ϵ) log( nmL/ϵ)
) value queries. The tester is nonadaptive and has one-sided error.
The algorithm simulates the tester from <ref> with parameters r = Lmn and
ϵ, which gives the announced query complexity. As for correctness, note that since
_1(f) ≤ L, for any x, y ∈ [m]^n we have *f(x)-f(y)≤ Lx-y_1 ≤ Lmn.
Thus f has range of size at most Lmn, so the reduction to the tester from
<ref> is correct.
We outline how this result implies a tester for Lipschitz functions on the unit cube as well, via
the domain reduction or downsampling principle of <cit.>. Even though the tester above
has query complexity that depends on m, the main observation is that, given an (ℓ^1,
L)-Lipschitz function on [0,1]^n, we may discretize it into an (ℓ^1, L/m)-Lipschitz function
on an arbitrarily fine hypergrid [m]^n. Then the term m(L/m) = L remains fixed for any choice of
m, so the complexity of the tester above does not depend on m in this reduction. Finally, by
setting m large enough, we may upper bound the error introduced by the discretization.
There is an L^1 monotonicity tester for functions f : [0,1]^n → satisfying _1(f)
≤ L that uses O( n^2 L/ϵlog( nL/ϵ) )
value queries. The tester is nonadaptive and has one-sided error.
For sufficiently large m to be chosen below, the algorithm imposes a hypergrid [m]^n
uniformly on [0,1]^n. More precisely, we define a function f' : [m]^n → via f'(x) =
f(x/m) for all x ∈ [m]^n. Note that _1(f') ≤ L/m. Then, the algorithm simulates the
tester from <ref> on f' with proximity parameter
Ω(ϵ), and returns its result. Note that the query complexity is as desired, so it
remains to show correctness. If f is monotone so is f', in which case the algorithm accepts.
Now assume that d_1(f) > ϵ, and we need to show that d_1(f') ≳ϵ.
Let g' : [m]^n → ℝ be any monotone function. We obtain a monotone function g : [0,1]^n
→ ℝ as follows: for each x ∈ [0,1]^n, let ⌈x⌉ be obtained by rounding up each
coordinate x_i to a positive integral multiple of 1/m. Then x' := m⌈x⌉ ∈
[m]^n, and we set g(x) := g'(x'). It follows that g is monotone, and by the triangle
inequality,
‖f-g‖ ≤ ‖f'-g'‖ + max_x ∈ [0,1]^n |f(x) - f(⌈x⌉)| ≤ ‖f'-g'‖ + Ln/m .
In the first inequality, we separately account for the cost of turning each value of f into
the value at its corresponding "rounded up" point (accounted for by the second summand), and
then the cost of turning each equal-sized, constant-valued cell into the value of g on that
cell; these values agree with f' and g' (accounted for by the first summand). In the
second inequality, we use the fact that any point x satisfies ‖x - ⌈x⌉‖_1 ≤ n/m, along with the Lipschitz assumption on f.
Therefore, by setting m > 10Ln/ϵ, we obtain Ln/m <
ϵ/10, and since the inequality above holds for every monotone function g', we
conclude that d_1(f') ≥ d_1(f) - ϵ/10 > 9ϵ/10, as desired.
One may wonder whether the reductions above could yield more efficient testers if combined with
Hamming testers for Boolean functions on the hypergrid with better dependence on n (via the
reduction from L^1 testing to Hamming testing of <cit.>, which is behind
<ref>), since the tester of <cit.> has query complexity O(n^1/2+o(1)/ϵ^2). However, it seems like this is not the case, <ref> has the best query complexity of any reduction that upper
bounds the size of the range of f by Lmn. The reason is as follows: <cit.> showed
that any nonadaptive, one-sided pair tester for the Boolean cube with query complexity
O(n^α/ϵ^β) must satisfy α + β/2≥3/2, so
hypergrid testers must also satisfy this as well as β≥ 1, assuming query complexity
independent of m. Then, given a hypergrid tester for Boolean functions with query complexity
O(n^α/ϵ^β), our reduction via the inferred range size r = Lnm
gives an L^1 tester with asymptotic query complexity at least
n^α/(ϵ/r)^β = n^α+β/2 n^β/2
(mL)^β/ϵ^β≥n^2 mL/ϵ.
§ LOWER BOUND FROM <CIT.> APPLIED TO L^1 TESTING
We briefly explain how the Ω(nlog m) nonadaptive lower bound of <cit.> for Hamming
testing monotonicity of functions f : [m]^n → (with sufficiently small constant ϵ)
also applies to L^1-testing functions satisfying _1(f) ≤ O(1). The construction of
<cit.> relies on two main ingredients: step functions and Walsh functions.
Let m = 2^ℓ for simplicity. For each i ∈ {0, …, ℓ}, the i-th step function s_i :
[2^ℓ] → [2^ℓ-i] is given by
s_i(x) = ⌊(x-1)/2^i⌋ + 1 .
In words, s_i(x) increases by 1 after every 2^i consecutive elements (called a block of
size 2^i).
The Walsh functions are defined as follows. For each i ∈ [ℓ], the function w_i : [2^ℓ]
→ is given by
w_i(x) = (-1)^𝖻𝗂𝗍_i(x-1) ,
where the operator 𝖻𝗂𝗍_i extracts the i-th bit of its input (indexed from least to most
significant). Then for each S ⊆ [ℓ], the function w_S : [2^ℓ] → is given
by
w_S(x) = ∏_i ∈ S w_i(x) .
These two types of functions are defined on the line, and they are extended into a multidimensional
construction on the hypergrid as follows. Given a vector i∈ [ℓ]^n, the step function
s_i : [2^ℓ]^n → [n 2^ℓ] is given by
s_i(x_1, …, x_n) = ∑_j=1^n s_i_j(x_j) ,
and given a vector S = (S_1, …, S_n) of subsets of [ℓ], the Walsh
function w_S : [2^ℓ]^n → is given by
w_S(x_1, …, x_n) = ∏_j=1^n w_S_j(x_j) .
Then, <cit.> use a communication complexity argument (namely a reduction from the
AugmentIndex problem) to show that (nonadaptive) Hamming testing monotonicity of functions
h_i,S : [2^ℓ]^n → of the form
h_i,S(x) = 2s_i(x) + w_S(x) ,
for appropriate choices of i and S, requires at least Ω(nlog m) queries.
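A short sketch (ours) of the one-dimensional building blocks and of h_i,S; inputs range over {1, …, 2^ℓ}, and bits are indexed from the least significant position, as in the definitions above.

import math

def step(i, x):
    """Step function s_i(x) = floor((x-1) / 2**i) + 1 on [2**l]."""
    return (x - 1) // (2 ** i) + 1

def walsh(S, x):
    """Walsh function w_S(x) = prod_{i in S} (-1)**bit_i(x-1)."""
    return math.prod(-1 if ((x - 1) >> (i - 1)) & 1 else 1 for i in S)

def h(ii, SS, x):
    """h_{i,S}(x) = 2 * s_i(x) + w_S(x), with s and w extended coordinate-wise."""
    s = sum(step(i_j, x_j) for i_j, x_j in zip(ii, x))
    w = math.prod(walsh(S_j, x_j) for S_j, x_j in zip(SS, x))
    return 2 * s + w

print(h(ii=[1, 2], SS=[{1}, {2}], x=(3, 5)))   # a sample evaluation on [2^l]^2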
Therefore, to show that (nonadaptive) L^1 testing monotonicity of (ℓ^1, O(1))-Lipschitz
functions also requires at least this number of queries, it suffices to show that every such
function h = h_i,S satisfies
* _1(h) ≤ O(1); and
* If d_0(h) ≥ϵ, then d_1(h) ≳ϵ.
The first property follows from the definitions of the step and Walsh functions: let x, y ∈
[2^ℓ]^n be such that ‖x-y‖_1 = 1. Then let j ∈ [n] be the coordinate such that
|x_j - y_j| = 1 and x_k = y_k for k ≠ j. Then
|h_i,S(x) - h_i,S(y)| ≤ 2|s_i_j(x_j) - s_i_j(y_j)| + |w_S(x) - w_S(y)| ≤ 2 + 2
= 4 ,
the first inequality because x and y agree on every coordinate except for j, and the second
inequality because the step function s_i_j changes by at most 1 on adjacent inputs, and
the Walsh functions only take values ± 1. Thus Lip_1(h_i,S) ≤ 4, as desired.
As for the second property, note that if d_0(h) ≥ϵ, then there exists a matching of the
form (x^i, y^i)_i where for each i we have x^i ≼ y^i and h(x^i) > h(y^i) (x^i,
y^i form a violating pair), such that at least an ϵ-fraction of the points [m]^n belong
to this matching (see <cit.>). Now, for each such violating pair x^i, y^i, it follows
that h(x^i) - h(y^i) ≥ 1, since h is integer-valued. Therefore for any monotone function h' :
[m]^n →, it must be the case that *h(x^i) - h'(x^i) + *h(y^i) - h'(y^i)≥ 1.
Since this is true for disjoint pairs x^i, y^i covering an ϵ-fraction of the domain, it
follows that d_1(h, h') ≥ϵ / 2 for any monotone function h'. Hence d_1(h) ≥ϵ/2, and we are done.
|
http://arxiv.org/abs/2307.01107v1
|
20230703153436
|
Sampling the lattice Nambu-Goto string using Continuous Normalizing Flows
|
[
"Michele Caselle",
"Elia Cellini",
"Alessandro Nada"
] |
hep-lat
|
[
"hep-lat",
"cs.LG",
"hep-th"
] |
|
http://arxiv.org/abs/2307.01610v1
|
20230704095033
|
Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction
|
[
"Zitao Chen",
"Karthik Pattabiraman"
] |
cs.CR
|
[
"cs.CR",
"cs.AI",
"cs.LG"
] |
Overconfidence is a Dangerous Thing:
Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction
Zitao Chen
University of British Columbia
[email protected]
Karthik Pattabiraman
University of British Columbia
[email protected]
Machine learning (ML) models are vulnerable to membership inference attacks (MIAs), which determine whether a given input is used for training the target model.
While there have been many efforts to mitigate MIAs, they often suffer from limited privacy protection, large accuracy drops, and/or requiring additional data that may be difficult to acquire.
This work proposes a defense technique, HAMP, that can achieve both strong membership privacy and high accuracy, without requiring extra data.
To mitigate MIAs in different forms, we observe that they can be unified, as they all exploit the ML model's overconfidence in predicting training samples through different proxies.
This motivates our design to enforce less confident prediction by the model, hence forcing the model to behave similarly on the training and testing samples.
HAMP consists of a novel training framework with high-entropy soft labels and an entropy-based regularizer to constrain the model's prediction while still achieving high accuracy.
To further reduce privacy risk, HAMP uniformly modifies all the prediction outputs to become low-confidence outputs while preserving the accuracy, which effectively obscures the differences between the predictions on members and non-members.
We conduct an extensive evaluation on five benchmark datasets, and show that HAMP provides consistently high accuracy and strong membership privacy.
Our comparison with seven state-of-the-art defenses shows that HAMP achieves a superior privacy-utility trade-off.
§ INTRODUCTION
Machine learning (ML) models are often trained with the sensitive or private user data like clinical records <cit.>, financial information <cit.> and personal photos <cit.>.
Unfortunately, ML models can also unwittingly leak private information <cit.>.
One prominent example is membership inference attacks (MIAs) <cit.>, which determine whether an input is used for training the target model.
Hence, MIAs constitute a fundamental threat to data privacy.
For instance, by knowing that an individual's clinical record was used to train a hospital's diagnostic model, the adversary can directly infer his/her health status.
MIAs exploit the ML model's differential behaviors on members and non-members
<cit.>.
Members are the samples used to train the model (i.e., training samples) and non-members are the samples not used for training (e.g., testing samples).
Existing MIAs can be divided into score-based <cit.> and label-only attacks <cit.>, where the former requires access to the model's output score indicating the class probability, while the latter needs only the prediction label.
These attacks all seek to learn distinctive statistical features from the model's predictions in different ways, such as training an attack inference model <cit.>, computing metrics like prediction loss <cit.> and entropy <cit.>, or using Gaussian likelihood estimate <cit.>.
Defenses against MIAs can be categorized into provable and practical defenses.
Provable defenses provide provable
guarantees through differential privacy (DP) <cit.>, but they often incur severe accuracy degradation.
Practical defenses, instead, offer empirical membership privacy with the goal of maintaining high model accuracy <cit.>.
However, existing defenses still suffer from the following limitations:
(1) limited privacy protection <cit.>;
(2) large accuracy drop <cit.>;
(3) requiring additional public datasets that may not always be available in practice <cit.>.
To the best of our knowledge, no technique satisfies all these constraints, though they may address individual issues, e.g., high model accuracy but with limited privacy protection <cit.>; or strong privacy but with significant accuracy loss <cit.>.
Our Approach.
This paper proposes a practical defense called HAMP that can achieve both High Accuracy and Membership Privacy without requiring additional data.
Existing MIAs employ diverse approaches to inferring membership: e.g., score-based MIAs may exploit prediction loss or entropy <cit.>, while label-only MIAs <cit.> can leverage adversarial robustness.
Despite the different manifestations of these attacks, we identify a common exploitation thread among them: they all learn to distinguish whether the model is overly confident in predicting the training samples, via different proxies.
Our defense is therefore to reduce the model's overconfident predictions on training samples while preserving the model's prediction performance, which simultaneously reduces membership leakage (from different MIAs) and maintains model accuracy.
HAMP consists of a training- and a testing-time defense.
Training-time defense.
Our key idea is to explicitly enforce the model to be less confident in predicting training samples during training.
We first identify that the prevailing use of hard labels in common training algorithms is one of the main factors that lead to the model's excessive confidence in predicting training samples.
Hard labels assign 1 to the ground-truth label class and 0 elsewhere. The model is trained to produce outputs that match the labels, i.e., near 100% probability for the ground-truth class and 0% otherwise.
On the other hand, a non-member sample that is not seen during training, is usually predicted with lower confidence, and can hence be distinguished by the adversary from member samples.
We therefore propose a new training framework that gets rid of hard labels and instead uses
(1) High-entropy soft labels, which are soft labels with high entropy that assign a much lower probability to the ground-truth class and non-zero probability for other classes. This explicitly enforces the model to make less confident prediction on training samples.
(2) also consists of an entropy-based regularizer, which is to penalize the model for predicting any high-confidence outputs via regularizing the prediction entropy during training.
The proposed training framework is able to significantly reduce the model's overconfident prediction and improve membership privacy, without (severely) degrading the model accuracy. Section <ref> explains how it prevents privacy leakage from different sources (output scores and prediction labels). On the other hand, stronger membership privacy can also be achieved (e.g., by increasing the strength of regularization), but it would be at the cost of accuracy, which is undesirable as both privacy and accuracy are important considerations.
This motivates our testing-time defense, whose goal is to gain higher membership privacy without degrading accuracy.
Testing-time defense.
We propose to uniformly modify all the outputs (from members and non-members) into low-confidence outputs, without changing the prediction labels.
Our idea is to leverage the output scores from the randomly-generated samples, which are often predicted with low confidence due to the high dimensionality of the input space.
In our defense, all the values in each output score are replaced by those from random samples, and we keep the relative ordering of different classes unchanged to maintain the same prediction labels (e.g., a dog image is still predicted as a dog but with different output scores).
Both the high-confidence outputs (on training samples) and low-confidence outputs (on testing samples) are uniformly replaced by such low-confidence outputs from random samples.
This further reduces the membership leakage from the output scores.
Evaluation.
We evaluate HAMP on five benchmark datasets (Purchase100, Texas100, Location30, CIFAR100 and CIFAR10), and perform comprehensive evaluation on
a total of nine diverse MIAs (including the state-of-art LiRA attack <cit.>).
We compare HAMP with seven leading defenses: AdvReg <cit.>, MemGuard <cit.>, SELENA <cit.>, DMP <cit.>, Label Smoothing (LS) <cit.>, Early-stopping <cit.>, and DP-SGD <cit.>.
An ideal privacy defense should offer strong protection for both members and non-members. Therefore, we follow Carlini et al. <cit.> to use attack true positive rate (TPR) controlled at low false positive rate (FPR), and attack true negative rate (TNR) at low false negative rate (FNR) to evaluate membership privacy.
The former metric evaluates the privacy protection for members, and the latter for non-members.
Contributions. We summarize our contributions below.
* Develop a novel training framework with high-entropy soft labels and an entropy-based regularizer to enforce less confident prediction by the model, which can significantly mitigate diverse MIAs and incur minimal accuracy drop.
* Propose a novel testing time defense technique to modify all the output scores into low-confidence outputs, which further improves membership privacy without degrading accuracy.
* Integrate the training- and testing-time defenses as HAMP, and conduct rigorous evaluation under a wide range of attacks on five different datasets. We compare HAMP against seven leading defenses and show that HAMP outperforms existing defenses by achieving a superior privacy-utility trade off.
Fig. <ref> summarizes the results of HAMP versus other defenses.
We find that existing defenses often bias towards either privacy (e.g., DP-SGD) or utility (e.g., MemGuard).
In contrast, HAMP is able to provide strong membership privacy for both members and non-members, and preserve model accuracy.
HAMP reduces the attack TPR @0.1% FPR by 94% and the attack TNR @0.1% FNR by 97%, respectively, with only 0.46% accuracy loss on average. This represents a much better privacy-utility trade off than other defenses.
§ BACKGROUND
§.§ Machine Learning Primer
This work focuses on supervised training for classification problem.
A ML model can be expressed as a function F_θ : X → Y, where X ∈ℝ^d denotes the input space and Y ∈ℝ^k the output space, and F is parameterized by weights θ.
During training, the network is given a training set (x, y)∈ D_tr where y is the ground truth label.
y is commonly expressed in the one-hot encoding format, where the ground-truth class is indicated with 1 and 0 elsewhere.
The training objective is to minimize the prediction loss on the training set:
min_θ 1/|D_tr| ∑_x ∈ D_tr ℒ(F_θ(x), y),
where |D_tr| denotes the size of the training set, and ℒ the prediction loss such as cross-entropy loss.
The model's output F_θ(x) indicates the probability of x belonging to each class, with ∑_j=0^k-1 F_θ(x)_j = 1.
To prevent the model from overfitting on the training set, a separate validation set different from D_tr is commonly used to serve as an unbiased proxy of the testing set.
One can use the accuracy on the validation set to assess how good the model will be when evaluated on test data and prevent overfitting.
Hereafter, we refer to F as the trained model F_θ, F(x) as the output score of F on x,
and D_te as the test set.
§.§ Threat Model
Attacker.
Following prior work <cit.>, we assume a black-box adversary who can query the target ML model with any input and observe the prediction output.
The adversary's goal is to infer the membership of the training samples (x, y) ∈ D_tr for a given model F.
Like previous defenses <cit.>, we assume a strong adversary with the knowledge of half of the training members and an equal number of non-members.
Further, we assume the adversary has full knowledge of the defense technique and can therefore train shadow models in the same way as the target model is trained, which facilitates a strong adversary in evaluating the defenses.
Defender.
We assume the defender has a private set D_tr and his/her goal is to train a model that can both achieve high classification accuracy and protect against MIAs.
We do not assume the defender has access to any additional data.
§.§ Membership Inference Attacks
The attack model h(x, y, F(x)) → [0, 1] outputs the membership probability.
We refer to D_tr^A, D_te^A as the set of members and non-members that are known to the adversary.
The adversary's goal is to find a h that can best distinguish between D_tr^A and D_te^A. The empirical gain of the attack can be measured as:
∑_(x,y) ∈ D_tr^A h(x, y, F(x)) / |D_tr^A| + ∑_(x,y) ∈ D_te^A (1 - h(x, y, F(x))) / |D_te^A|
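For concreteness, a minimal sketch of how this empirical gain could be computed for a given attack model h; the function name and the triple-based interface are illustrative assumptions rather than any attack's actual implementation.

```python
import numpy as np

def attack_gain(h, members, nonmembers):
    """Empirical gain of an attack model h(x, y, F(x)) -> [0, 1] (equation above).

    `members` / `nonmembers` are lists of (x, y, F(x)) triples drawn from the
    adversary's known member set D_tr^A and non-member set D_te^A.
    """
    m = np.array([h(x, y, fx) for (x, y, fx) in members])
    n = np.array([h(x, y, fx) for (x, y, fx) in nonmembers])
    # Average membership score on members plus average non-membership score on non-members.
    return m.mean() + (1.0 - n).mean()
```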
We categorize existing MIAs into score-based and label-only attacks as follows.
§.§.§ Score-based MIAs
This class of attacks either trains an inference model to infer membership <cit.> or computes custom metrics such as prediction loss <cit.> to derive a threshold for distinction.
NN-based attack <cit.> trains a neural network (NN) A to distinguish the target model's prediction on members and non-members: A: F(x) → [0, 1], x∈ [D_tr^A, D_te^A].
By querying the target model with D_tr^A, D_te^A, the resulting output (F(D_tr^A),1), (F(D_te^A),0) forms the training set for A.
In addition to output scores, other features like the ground-truth labels and prediction loss can also be used to train the inference model.
Loss-based attack <cit.> is based on the observation that the prediction loss on training samples is often lower than that on testing samples, as the loss on training samples are explicitly minimized during training.
Specifically, the adversary can query the target model with D_tr^A, and use the average loss on D_tr^A as the threshold τ = 1/|D_tr^A| ∑_(x,y) ∈ D_tr^A ℒ(F_θ(x), y).
Any sample with loss lower than τ is considered as a member.
Entropy-based attack <cit.> leverages the fact that the output score of a training sample should be close to the one-hot encoded label, and hence its prediction entropy should be close to 0, which is lower than that of testing samples.
Prediction entropy of a sample can be computed as -∑_j F(x)_jlog(F(x)_j), where j is the class index.
Modified-entropy-based attack <cit.> is an enhanced version of the entropy-based attack by computing the following metric: -(1-F(x)_y)log(F(x)_y) - ∑_j≠ y F(x)_jlog(1-F(x)_j).
This attack improves on the entropy-based attack by taking into account class-dependent thresholds, as well as the ground-truth label y, which is shown to achieve higher attack effectiveness.
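As an illustration, the metrics behind the loss-, entropy- and modified-entropy-based attacks can be computed directly from a model's output score vector; the sketch below assumes `scores` is a normalized probability vector and the function names are ours.

```python
import numpy as np

EPS = 1e-30  # guards against log(0)

def prediction_loss(scores, y):
    """Cross-entropy loss of one output score vector w.r.t. the true class y."""
    return -np.log(scores[y] + EPS)

def prediction_entropy(scores):
    """Shannon entropy of the output scores: -sum_j F(x)_j log F(x)_j."""
    return -np.sum(scores * np.log(scores + EPS))

def modified_entropy(scores, y):
    """Modified entropy, which also uses the ground-truth label y."""
    mentr = -(1.0 - scores[y]) * np.log(scores[y] + EPS)
    for j, p in enumerate(scores):
        if j != y:
            mentr -= p * np.log(1.0 - p + EPS)
    return mentr

# A sample is flagged as a member when its loss/entropy falls below a threshold
# (or its confidence rises above one) calibrated on the adversary's known data.
```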
Confidence-based attack <cit.> exploits the observation that the prediction confidence on training samples F(x)_y is often higher than that on testing samples.
The attack threshold can be derived similar to the entropy-based attacks, and samples predicted with high confidence are deemed as members.
Likelihood Ratio Attack (LiRA) <cit.> is a state-of-art attack that can successfully infer membership when calibrated at low false positive.
In LiRA, the adversary trains N shadow models, half of which are trained with target sample (called IN models) and the remaining half are trained without the target sample (called OUT models).
It then fits two Gaussian distributions to approximate the output distributions by the IN and OUT models (a logit scaling step on the logit values is taken to ensure the outputs follow a Gaussian).
Finally, LiRA performs a parametric likelihood-ratio test to infer membership (e.g., a sample is deemed a member if its output is estimated to come from the IN models with high probability).
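A minimal sketch of this likelihood-ratio test, assuming the adversary has already collected the logit-scaled confidences of the target sample under the IN and OUT shadow models (the function name and the use of scipy are our choices):

```python
import numpy as np
from scipy.stats import norm

def lira_score(target_confidence, in_confidences, out_confidences):
    """Log-likelihood ratio that the target's (logit-scaled) confidence came from IN models.

    in_confidences / out_confidences hold the target sample's confidences under
    shadow models trained with / without it; larger scores indicate membership.
    """
    mu_in, sigma_in = np.mean(in_confidences), np.std(in_confidences) + 1e-8
    mu_out, sigma_out = np.mean(out_confidences), np.std(out_confidences) + 1e-8
    # Ratio of the two fitted Gaussian likelihoods, in log form for stability.
    return (norm.logpdf(target_confidence, mu_in, sigma_in)
            - norm.logpdf(target_confidence, mu_out, sigma_out))
```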
§.§.§ Label-only MIAs
These attacks exploit training members' higher degree of robustness to different perturbations (like adversarial perturbations, random noise), and develop different proxies to distinguish the degree of robustness by members and non-members.
Prediction-correctness attack <cit.> is the baseline label-only attack that simply labels any correctly classified sample as a member.
This attack is effective when the training accuracy is higher than the testing accuracy.
Boundary attack <cit.> is based on the observation that it is easier to perturb a testing sample to change the prediction label than a training sample.
This is because testing samples are often closer to the decision boundary and therefore more susceptible to perturbations.
Using common attacks such as CW2 attack <cit.>, the adversary measures the magnitude of perturbation needed to perturb x ∈ [D_tr^A, D_te^A], based on which τ can be derived.
A sample is deemed as a member if the amount of perturbation needed to change the prediction label is higher than τ (i.e., more difficult to be perturbed).
The adversary can also inject random noise to the samples (instead of adversarial perturbations), which is more efficient and useful in the cases where constructing the adversarial sample is difficult (e.g., for inputs with binary features) <cit.>.
Augmentation attack <cit.> makes use of the samples' robustness to data augmentation and the idea is that training samples are often more resilient to data augmentation than testing samples.
For instance, if an image was used to train a model, it should still be classified correctly when it is slightly translated.
For each input x, the adversary first generates multiple augmented versions of x, and computes how many of them are correctly classified.
Based on the classification outcome, the adversary trains an attack inference model to predict whether or not x is a member.
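A hedged sketch of the robustness signal this attack computes per sample; the augmentation routine and the number of variants are illustrative assumptions:

```python
def augmentation_robustness(model_predict, x, y, augment, n_augments=10):
    """Count how many augmented variants of x are still classified as y.

    model_predict(x) returns the predicted label; augment(x) returns a randomly
    augmented copy (e.g., a small translation). Training members tend to yield
    higher counts than non-members, which the attack inference model exploits.
    """
    variants = [augment(x) for _ in range(n_augments)]
    return sum(int(model_predict(v) == y) for v in variants)
```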
§.§ Defenses against MIAs
This section presents an overview of representative defenses against MIAs (a comprehensive survey of existing defenses is in Section <ref>).
Adversarial regularization (AdvReg) <cit.> trains the model to both achieve good model performance and protection against a shadow MIA adversary.
During training, the defender first trains an attack inference model that tries to maximize the MIA gain, after which the protected model is trained to minimize the MIA gain and maximize the classification accuracy. This is instantiated as a min-max game in <cit.>.
Distillation for membership privacy (DMP) <cit.>.
Shejwalkar et al. propose DMP to defend against MIAs based on knowledge distillation.
The idea is to distill the knowledge from an undefended model (trained on a private dataset) into a new public model using a new reference set.
Privacy protection is enabled by thwarting the access of the public model to the private dataset as the public model is trained on a separate reference set.
Such a reference set can be curated by assuming the availability of a public dataset or by using synthetic data. We consider the latter since we do not assume access to external data: in many domains such as healthcare, the training data is private or proprietary, and thus a public dataset may not be available. We hence consider the more realistic scenario in which the defender has no access to external data (similar to <cit.>).
SELf ENsemble Architecture (SELENA) <cit.>.
SELENA also uses knowledge distillation.
Its key idea is to partition the private dataset into different subsets and train a sub model on each of the subset (another technique with similar idea is proposed in <cit.>).
For each sub model, there exists a subset of private dataset that was not used in its training, i.e., “reference set” for that sub model.
Each sub model assigns the output scores on its “reference set”, which constitute the knowledge to be distilled.
The knowledge from the ensemble of sub models is finally distilled into a new public model.
Early stopping <cit.>.
As the training proceeds, the model tends to overfit the training data and become susceptible to MIAs.
Early stopping is a general solution in reducing overfitting <cit.> by training models with fewer epochs.
Song et al. <cit.> find that this is useful in mitigating MIAs, and we follow them to include it as a benchmark defense mechanism.
Differential privacy (DP) based defenses <cit.>.
DP-based defenses leverage the formal framework of differential privacy to achieve rigorous privacy guarantee.
This is done via injecting noise to the learning objective during training such as DP-SGD that adds noise to the gradients <cit.>.
However, DP-based defenses often produce models with considerable accuracy drop, resulting in a poor privacy-utility tradeoff.
MemGuard <cit.>.
Jia et al. propose to defend against MIAs via obfuscating the prediction scores.
The idea is to fool the MIA adversary by constructing a noise vector to be added to the input (analogous to constructing adversarial samples), and make the outputs on members and non-members indistinguishable by the adversary.
Label Smoothing <cit.>. LS is a common regularization technique to improve model accuracy by using soft labels. LS replaces the one-hot label with a mixture of the one-hot label and uniform distribution using a smoothing intensity parameter. E.g., for a smoothing intensity of 0.3, the soft label becomes 80% cat, 10% dog, 10% frog; and a smoothing intensity of 0.6 yields 60% cat, 20% dog, 20% frog.
LS trains with different smoothing intensities to produce model with high accuracy.
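For reference, a label-smoothing soft label over k classes with smoothing intensity s can be constructed as below; this small sketch reproduces the 80%/10%/10% example for s=0.3 and k=3.

```python
import numpy as np

def label_smoothing(y_true, k, s):
    """Soft label under label smoothing: (1 - s) on the true class plus s/k everywhere."""
    soft = np.full(k, s / k)
    soft[y_true] += 1.0 - s
    return soft

label_smoothing(0, 3, 0.3)  # -> [0.8, 0.1, 0.1]
label_smoothing(0, 3, 0.6)  # -> [0.6, 0.2, 0.2]
```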
Both LS and HAMP use soft labels in their training, but they are two techniques built on different principles that require different soft labels.
LS is used to improve model performance, which necessitates training with low-entropy soft labels.
Unlike LS, HAMP consists of high-entropy soft labels, an entropy-based regularizer and a novel testing-time defense (details in the next section), which together improve membership privacy while preserving model accuracy.
This consequently results in the different privacy implications of the two techniques: LS improves model performance but the resulting model still suffers from high MIA risk <cit.>, while HAMP consistently contributes to very low MIA risk.
We refer to detailed comparison in Section <ref>.
§ METHODOLOGY
The main insight behind HAMP in mitigating diverse MIAs is to identify a common exploitation thread among different MIAs. HAMP is designed to overcome this exploitation so that it can defend against different MIAs regardless of their specific approaches.
We first explain how existing MIAs can be unified via a common thread in Section <ref>, and then discuss how we build HAMP to overcome this exploitation.
§.§ Overconfident Prediction Leads to Membership Leakage
While existing MIAs employ diverse approaches to infer membership,
we unify them by viewing them all as exploiting the model's overconfidence in predicting training samples.
We explain below how different attacks can be viewed as different forms to quantify whether a model is overly confident in predicting a specific sample, in order to infer its membership.
Score-based MIAs leverage the prediction scores to infer membership through different proxies.
The model's overconfident prediction on training samples can be exposed through high confidence scores <cit.>, low prediction entropy <cit.>, low prediction loss <cit.>, or using a neural network <cit.>.
For boundary and augmentation attacks, samples predicted with high confidence can be viewed as exhibiting high robustness against adversarial perturbations and data augmentation.
Training samples can therefore be identified by the adversary based on whether they are more resilient to adversarial perturbation <cit.> or data augmentation <cit.>.
What leads to the model's overconfidence in predicting training samples?
As mentioned before, common training algorithms make use of the one-hot hard labels to minimize the prediction loss.
Minimizing the training objective function (<ref>) is equivalent to encouraging the model to produce outputs that are consistent with the labels, i.e., 100% for the ground-truth class and 0% for any other classes.
While training with hard labels has achieved success in a broad class of classification problems, we find that it undesirably contributes to the model's overconfidence in predicting training samples, which eventually leads to membership leakage.
For example, on Purchase100, the difference between the average prediction confidence on training and testing samples is >25%, which means the model is much more confident in predicting training samples.
Such differential behavior can be identified by the adversary to obtain >14% attack TPR @0.1% FPR.
This indicates training with one-hot hard labels undesirably enables the adversary to identify a large fraction of training samples with very low false positives (and similarly identifying testing samples with low false negatives).
This inspires our design principle of enforcing less confident prediction to mitigate MIAs, based on which we introduce a novel training and testing defense that can achieve both strong membership privacy and high model accuracy.
§.§ Overview
Fig. <ref> shows an overview of HAMP. It has two parts.
Training-time defense.
Inspired by the observation in Section <ref>, our main idea is to train the model to produce less confident prediction even on training samples, thereby enforcing the model to behave similarly on training and testing samples.
We achieve this by two innovations: (1) replacing the hard labels with high-entropy soft labels; and (2) introducing an entropy-based regularizer.
The first step is to generate soft labels with high entropy from the hard labels.
These high-entropy soft labels explicitly induce the model to produce less confident output during training by assigning a much lower probability for the ground-truth class.
For instance, a hard label of [0, 1] can be changed into a soft label of [0.4, 0.6], which guides the model to predict the ground-truth class with 60% probability (instead of 100%).
The probability of each class is determined by an entropy threshold parameter, and a higher threshold generates a soft label with higher entropy (e.g., [0.5, 0.5] has the highest entropy) - details in the next section.
The ground-truth class remains the same so that the model can learn correctly, e.g., a dog image is still trained to be predicted as a dog.
Second, we introduce an entropy-based regularizer to penalize the model for predicting any output with low entropy.
Prediction entropy measures the prediction uncertainty, and can be used to regularize the confidence level of the prediction, e.g., low entropy indicates high-confidence output, and can be mitigated by the proposed regularizer to become low-confidence output.
The high-entropy soft labels encourage the model to produce outputs consistent with the labels, while the regularization term allows the model to produce any low-confidence outputs, even if the outputs do not closely match the labels.
Both components are important for HAMP to mitigate overconfident prediction and achieve strong membership privacy.
How does HAMP's training-time defense mitigate membership leakage from different sources?
There are two sources of membership leakage, and we discuss below how HAMP reduces leakage from both.
Output scores.
With the high-entropy soft labels and the entropy-based regularizer, HAMP forces the model to produce output scores on training samples with higher entropy (i.e., lower confidence), which resemble the output scores on testing samples.
E.g., on Purchase100, the average prediction entropy on members and non-members is 0.389 and 0.576 on the undefended model, versus 4.485 and 4.490 on the HAMP model.
HAMP therefore reduces the entropy difference by 31x (from 0.187 to 0.006) and effectively enforces the output scores on members and non-members to be indistinguishable (more details in Appendix <ref>).
Some score-based MIAs leverage both output scores and label information (e.g., <cit.>), and we explain next how HAMP prevents membership leakage from the labels.
Prediction labels.
HAMP's training-time defense mitigates privacy leakage from the prediction labels by pushing the training samples closer to the decision boundary, so that training samples lie similarly close to the decision boundary as the testing samples.
We next use the boundary and augmentation attacks to explain (both attacks exploit label information in different manners to infer membership).
Boundary attack exploits the training samples' higher adversarial robustness than testing samples.
Without HAMP, the adversary can discern that the training samples require more perturbation than the testing samples.
With HAMP, however, training samples are predicted with lower confidence, and therefore it takes a similar amount of perturbation to perturb training and testing samples.
For instance, on CIFAR100, the average amount of perturbation needed to perturb the training samples on the undefended model is 0.342, versus 0.226 on the testing samples.
With HAMP, the perturbation on the training samples becomes 0.289 versus 0.234 on the testing samples, which effectively reduces the perturbation difference between training and testing samples by >53%.
This means the members and non-members become indistinguishable from the perspective of their adversarial robustness.
Augmentation attack exploits the training samples' higher resistance to data augmentation, i.e., the augmented variants of training samples are more likely to be classified correctly.
Performing data augmentation on the original samples can be viewed as drawing neighboring variants around the original samples in the sample space.
Since the training samples are closer to the decision boundary under HAMP, their augmented variants are more likely to cross the decision boundary and hence be classified incorrectly, which is similar to how testing samples behave.
We also evaluate the model's performance on inputs with random augmentations. We find that HAMP mainly reduces the performance on the augmented training samples (from 64.38% to 55.12%), while the performance on the augmented testing samples remains similar before and after (46.12% and 46.36%). This reduces the accuracy difference between members and non-members from 18.26% to 8.76% (a 52% reduction), and enables them to exhibit similar resistance to data augmentation.
HAMP's training-time framework is able to reduce the model's overconfident prediction on training samples without compromising the model's performance, i.e., it achieves strong membership privacy and high prediction accuracy.
Nevertheless, membership privacy can be further improved such as by pushing the training samples closer to the decision boundary, but at the cost of accuracy, which is undesirable.
In light of this, we introduce a testing-time output modification defense that can attain higher membership privacy without degrading accuracy.
Testing-time defense.
Our idea is to modify all the output scores to become low-confidence scores, hence making the output scores from members and non-members less distinguishable.
The key observation that underpins the testing-time defense is that randomly-generated samples are often predicted with low confidence, and the low-confidence output scores can be used for output modification.
Specifically, we first uniformly generate random samples, which are highly unlikely to be part of the training set due to the high dimensionality of the input space (e.g., the entire Texas100 dataset contains only 67,330 samples while the input space has 2^6170 samples).
As these random samples are unlikely to be members of the training set, they are often predicted by the model with low confidence.
We then replace all the entries in each output score with those from random samples, where the replacement is to keep the predicted labels unchanged (all top-k labels) and modify the output scores only.
In essence, HAMP returns only the ordering of the confidence scores, and the ordering is represented by the random output scores arranged in a specific order.
The random samples do not have any prerequisites (e.g., they do not need to come from a specific distribution, nor do they need to produce a specific prediction label), as long as they are valid inputs (e.g., pixel values are in [0, 255]).
In HAMP, the high-confidence outputs on members and the low-confidence outputs on non-members all become low-confidence outputs after being modified.
This significantly increases the difficulty for the adversary to identify differential behaviors on members and non-members.
In Section <ref>, we perform a detailed ablation study to show that all three defense components in HAMP are crucial in achieving strong membership privacy and preserving high model accuracy.
We next explain HAMP in detail.
§.§ Training-time Defense
Generating high-entropy soft labels.
The first step is to generate high-entropy soft labels for training, where the class probabilities in the soft labels are controlled by an entropy threshold parameter, denoted as γ.
The entropy of a soft label y' can be calculated as:
ℍ(y') = - ∑_j=0^k-1 y'_j*log(y'_j)
A soft label with uniform probability on each dimension has the highest entropy, based on which we choose a smaller entropy threshold.
For a k-class classification problem, our goal is to find a y' given γ such that,
ℍ(y') ≥γℍ(y), y={1/k, ... 1/k}^k, γ∈ [0,1],
where y' has the highest probability on its ground-truth class, and the probabilities on the remaining dimensions are the same.
For a hard label y whose ground-truth class is j_truth (k classes in total), the resulting soft label becomes:
∀ y'_j ∈ y':  y'_j = p if j = j_truth, and y'_j = (1-p)/(k-1) if j ≠ j_truth,
p is the probability on the ground-truth class, and a larger γ indicates higher prediction entropy, which leads to a smaller p (i.e., smaller probability on the ground-truth class).
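A hedged sketch of how such a soft label could be generated: we bisect for the largest ground-truth probability p whose label entropy still satisfies the threshold; the bisection routine is our illustrative choice, not necessarily the authors' implementation.

```python
import numpy as np

def soft_label_entropy(p, k):
    """Entropy of a soft label with p on the true class and (1-p)/(k-1) on each other class."""
    q = (1.0 - p) / (k - 1)
    return -(p * np.log(p) + (k - 1) * q * np.log(q))

def high_entropy_soft_label(y_true, k, gamma):
    """Soft label whose entropy is at least gamma * log(k), the entropy of the uniform label."""
    target = gamma * np.log(k)
    lo, hi = 1.0 / k, 1.0 - 1e-6       # p = 1/k attains the maximum entropy log(k)
    for _ in range(100):                # entropy decreases monotonically as p grows past 1/k
        mid = (lo + hi) / 2.0
        if soft_label_entropy(mid, k) >= target:
            lo = mid                    # still above the threshold -> a larger p is affordable
        else:
            hi = mid
    soft = np.full(k, (1.0 - lo) / (k - 1))
    soft[y_true] = lo
    return soft
```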
Entropy-based regularization.
In addition, we introduce an entropy-based regularizer that measures the prediction entropy during training, and penalizes predictions that have low entropy, as such predictions indicate high-confidence output and may be exploited by the adversary.
Finally, the overall training objective can be formulated as:
ℒ_𝒦ℒ(F_θ(x), y) = ∑_j=0^k-1 y_jlog(y_j/F_θ(x)_j),
min_θ [ℒ_𝒦ℒ(F_θ(X_tr), Y^'_tr) - αℍ(F_θ(X_tr))],
where Y^'_tr denotes the high-entropy soft labels, L_KL the Kullback-Leibler divergence loss, and α controls the strength of regularization.
Our goal is to train the model to mitigate the overconfident prediction on training samples while maintaining high prediction accuracy.
We achieve this by using a large γ to train the model with high-entropy soft labels, and an appropriate α to regularize the prediction entropy.
Section <ref> explains how to select the parameters γ and α in HAMP (p in Equation <ref> is determined by γ).
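A minimal PyTorch sketch of this training objective, assuming `soft_labels` holds the high-entropy soft labels Y'_tr generated as above (variable and function names are ours):

```python
import torch
import torch.nn.functional as F

def hamp_training_loss(logits, soft_labels, alpha):
    """KL divergence to the high-entropy soft labels minus alpha times the prediction entropy.

    logits: model outputs before softmax; soft_labels: high-entropy targets Y'_tr.
    Minimizing this objective penalizes low-entropy (overconfident) predictions.
    """
    log_probs = F.log_softmax(logits, dim=1)
    probs = log_probs.exp()
    kl = F.kl_div(log_probs, soft_labels, reduction='batchmean')   # L_KL(F_theta(x), y')
    entropy = -(probs * log_probs).sum(dim=1).mean()               # H(F_theta(x))
    return kl - alpha * entropy
```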
§.§ Testing-time Defense
The testing-time defense uniformly modifies the runtime outputs to achieve stronger privacy without jeopardizing accuracy.
We first generate uniform random samples x_rand, e.g., for Purchase100 with binary features, each feature is assigned with 0 or 1 with equal probability.
For each runtime input x ∈ [D_tr, D_te], all the entries in F(x) (which indicate the probability for each class) are replaced by those in F(x_rand); the resulting output is denoted as F^x_rand(x).
The replacement is to only modify the entries in F(x) while ensuring F(x) and F^x_rand(x) give the same prediction labels.
For example, let x ∈ [D_tr, D_te] with F(x) = [0.85, 0.05, 0.1], and x' ∈ X_rand with F(x') = [0.2, 0.3, 0.5]; then the final output produced by the model becomes F^x_rand(x) = [0.5, 0.2, 0.3].
This enforces the model to produce low-confidence outputs on both members and non-members, and reduces privacy leakage.
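A short sketch of this replacement; the argsort-based implementation below is our illustration of the rank-preserving substitution, and it reproduces the worked example above.

```python
import numpy as np

def modify_output(orig_scores, rand_scores):
    """Replace the values in orig_scores with those from rand_scores while keeping
    the relative ordering of classes, so that all top-k prediction labels are unchanged."""
    order = np.argsort(orig_scores)       # classes from least to most confident
    rand_sorted = np.sort(rand_scores)    # random scores from smallest to largest
    modified = np.empty_like(orig_scores)
    modified[order] = rand_sorted         # i-th smallest random value -> i-th ranked class
    return modified

modify_output(np.array([0.85, 0.05, 0.1]), np.array([0.2, 0.3, 0.5]))  # -> [0.5, 0.2, 0.3]
```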
Overall Algorithm.
Algorithm <ref> gives the overall algorithm of .
γ and α are the two parameters in HAMP that regulate the confidence level of the model's prediction, e.g., a high entropy threshold or strong regularization can enforce the model to become less confident in prediction.
Line 2 generates a template of high-entropy soft labels of y', which is then used to generate soft labels for each of the hard labels.
The condition in Line 3 ensures that the ground-truth labels remain unchanged so that the model can learn the correct labels.
At test time, each output is replaced by those from a random sample.
The condition of argsort(F^x_rand(x))=argsort(F(x)) in line 13 is to ensure both F^x_rand(x) and F(x) give the same labels (all top-k labels and not just the top-1 label).
Line 11 and Line 12 are independent of each other, and hence can be executed independently to facilitate faster runtime inference (overhead evaluation in Appendix <ref>).
§ EVALUATION
§.§ Experimental Setup
Datasets.
We consider five common benchmark datasets, and we describe them below.
Purchase100 <cit.> includes 197,324 shopping records of customers, each with 600 binary features indicating whether a specific item is purchased.
The goal is to predict the customer's shopping habits (100 different classes in total).
Texas100 <cit.> contains 67,330 hospital discharge records, each containing 6,170 binary features indicating whether the patient has a particular symptom or not.
The data is divided into 100 classes, and the goal is to predict the treatment given the patient's symptoms.
Location30 <cit.> contains the location “check-in” records of different individuals. It has 5,010 data records with 446 binary features, each of which corresponds to a certain location type and indicates whether the individual has visited that particular location. The goal is to predict the user's geosocial type (30 classes in total).
CIFAR100 <cit.> is an image classification dataset that has 60,000 images in 100 object classes. Each image has a size of 32×32×3.
CIFAR10 <cit.> is similar to CIFAR100 that also contains 60,000 images but with 10 different object classes.
We follow <cit.> to use the fully-connected (FC) networks on Purchase100, Texas100 and Location30, and a DenseNet-12 <cit.> on CIFAR100 and CIFAR10 (Appendix <ref> conducts evaluation on more network architectures, including ResNet-18 <cit.>, MobileNet <cit.> and ShuffleNet <cit.>).
Purchase100 is trained with 20,000 samples, Texas100 with 15,000 samples, Location30 with 1,500 samples, and CIFAR100 and CIFAR10 with 25,000 samples each. Section <ref> reports additional experiments on more training sizes (from 2,500 to 50,000).
Attacks.
We consider all nine attacks as in Section <ref>.
For NN-based attack, we use the black-box NSH attack from Nasr et al. <cit.>, which uses the model loss, logit values from the target model, and the ground-truth label to train an attack inference model.
We consider the loss-based attack from Yeom et al. <cit.> and confidence-, entropy- and modified-entropy-based attacks as in Song et al. <cit.>.
For LiRA <cit.>, we train 128 shadow models for each defense (64 IN and 64 OUT models), where each shadow model is trained following the same procedure as the targeted defense (as per our threat model).
E.g., for HAMP, this means the shadow model is trained with the same high-entropy soft labels and the entropy-based regularization as the defense model, and the shadow model also performs the same output modification as HAMP does.
We consider the boundary and augmentation attacks from Choquette et al. <cit.>.
For the boundary attack on the two image datasets, we use the CW2 attack <cit.> to generate adversarial samples and derive the perturbation magnitude threshold to distinguish members and non-members.
Likewise, for the other three non-image datasets that contain binary features, we compute the sample's robustness to random noise instead of adversarial perturbation.
For each sample x, we generate hundreds of noisy variants of x, and the number of correctly classified noisy variants of x is used to determine a threshold that best distinguishes between members and non-members.
For augmentation attack, we consider image translation as the augmentation method, and we similarly consider different degrees of translation to find the best attack.
HAMP configuration.
γ and α are the two parameters for configuring HAMP (for generating high-entropy soft labels and controlling the strength of regularization, respectively).
We perform grid search to select the parameters (γ∈ [0.5, 0.99], α∈ [0.0001, 0.5]), and select the one with small train-validation gap and high validation accuracy.
We also conduct evaluation to study how HAMP's performance varies under different parameters (please see Appendix <ref>).
For the testing-time defense, we generate random samples (e.g., random pixels in [0, 255]) and perform output modification as in Section <ref>. There are no other requirements.
Our code is available at <https://github.com/DependableSystemsLab/MIA_defense_HAMP>.
Related defenses.
We consider seven major defenses: AdvReg <cit.>, MemGuard <cit.>, DMP <cit.>, SELENA <cit.>, Early stopping <cit.>, Label Smoothing (LS) <cit.> and DP-SGD <cit.>.
We follow the original work to set up the defenses unless otherwise stated (more details in Appendix <ref>).
Evaluation metrics.
An ideal privacy defense should provide strong protection for both members and non-members, for which we follow the best practice <cit.> to consider (1) attack true positive rate (TPR) evaluated at 0.1% false positive rate (FPR), which evaluates the protection for members, and (2) attack true negative rate (TNR) at 0.1% false negative rate (FNR), which quantifies the protection for non-members.
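For clarity, a hedged sketch of how attack TPR at a fixed low FPR can be computed from an attack's membership scores, assuming higher scores indicate membership (the quantile-based thresholding is our simplification); TNR at a low FNR is obtained symmetrically by swapping the two score sets.

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.001):
    """TPR of a score-thresholding attack at (approximately) the target FPR."""
    member_scores = np.asarray(member_scores)
    nonmember_scores = np.asarray(nonmember_scores)
    # Threshold chosen so that at most target_fpr of non-members are falsely flagged.
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(member_scores > threshold))
```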
Result organization.
Table <ref> reports the model accuracy for every defense.
Fig. <ref> compares each defense in terms of their membership privacy and model utility.
Each defense is evaluated with multiple attacks, and we report the ones that achieve the highest attack TPR or TNR (detailed results for each attack are in Appendix <ref>).
Fig. <ref> presents the average attack AUC (area under curve) by each defense, and the full ROC curves are in Appendix <ref>.
We leave the comparison with early stopping in Appendix <ref> due to space constraint.
Section <ref> presents an ablation study, and Appendix <ref> reports training and inference overhead evaluation.
We next discuss the results by comparing HAMP with other defenses.
§.§ Comparison with Undefended Models
HAMP significantly reduces the MIA risk against both members and non-members.
Compared with the undefended models, HAMP achieves significantly lower attack TPR and TNR.
The average attack TPR on the undefended model is 13.48%, which is reduced to 0.8% by HAMP (a 94.1% reduction).
Similarly, HAMP reduces the attack TNR by 97%, from 19.89% to 0.59%.
This effectively thwarts the adversary in inferring members or non-members from the target model.
In addition, we find that NN-based attack yields the highest attack TPR on the undefended models in many cases (as in Fig. <ref>), and we explain the reason in Appendix <ref>.
HAMP achieves strong membership privacy while preserving high model accuracy.
Across the five diverse datasets, HAMP is able to consistently produce models with comparable accuracy to the undefended models.
HAMP has an accuracy drop of at most 1.1% (on Location30), and the average accuracy drop by HAMP is only 0.46%.
§.§ Comparison with MemGuard <cit.>
Both MemGuard and HAMP are capable of preserving model accuracy.
MemGuard does not incur any accuracy drop since it is a post-processing technique, and does not change the prediction label.
Likewise, HAMP only incurs a minor accuracy drop of 0.46%.
HAMP achieves considerably stronger membership privacy than MemGuard.
MemGuard offers very limited privacy protection because MemGuard only modifies the output scores without changing the prediction labels, which cannot prevent privacy leakage from the label information.
On the contrary, HAMP consists of a training-time defense that can mitigate membership leakage from both output scores and label information (explained in Section <ref>), and achieves much stronger membership privacy than MemGuard.
The average attack TPR on MemGuard is 6.7%, which is 8.4x that of HAMP.
Similarly, the attack TNR on MemGuard is 10.9%, which is 18.3x that of HAMP.
§.§ Comparison with AdvReg <cit.>
HAMP outperforms AdvReg with higher model accuracy and stronger membership privacy.
In terms of accuracy, HAMP consistently achieves higher accuracy than AdvReg.
AdvReg incurs an average 7.45% accuracy drop, while HAMP incurs only 0.46% (94% lower than AdvReg).
In terms of privacy, HAMP outperforms AdvReg with both much lower attack TPR and TNR.
The attack TPR is 1.70% with AdvReg and 0.8% with HAMP, which translate to an 87% and a 94% reduction from that of the undefended models, respectively.
Similarly, AdvReg reduces the attack TNR by 90% while HAMP reduces it by 97%, a much larger reduction.
§.§ Comparison with DMP <cit.>
DMP <cit.> uses generative adversarial networks (GANs) trained on the private dataset to produce synthetic data as the reference set for knowledge distillation.
We follow Shejwalker et al. <cit.> to train the two image datasets on DC-GAN <cit.>.
The defender can generate unlimited data from the GAN, and hence he/she can create a reference set that is larger than the original training set.
Therefore, we use 150K synthetic samples to train the model with higher accuracy (we do not consider more synthetic images as the improvement is negligible).
For the three datasets with binary features, we use CTGAN <cit.> for modeling tabular data.
We use 100K synthetic samples for Texas100, 10k for Location30. We do not consider Purchase100 as it incurs significant accuracy drop (over 30%).
To validate that synthetic samples are useful for the domain task, we compare the performance of the models trained with GAN-generated synthetic data and those with random data (i.e., all features are randomly selected as 0 or 1 with equal probability) using Texas100.
We find that models trained with random data only achieve accuracy from 5.8% to 14.8%; while those with GAN-generated data achieve over 40% accuracy.
HAMP outperforms DMP by being able to consistently achieve strong privacy protection with high model accuracy across different datasets.
In terms of membership privacy, we find that DMP is able to achieve strong results in many (but not all) cases: it achieves an average attack TPR of 0.44% and TNR of 0.38% on Texas100, CIFAR100 and CIFAR10, where HAMP achieves 0.9% TPR and 0.65% TNR (DMP is slightly better).
However, DMP's performance does not generalize across datasets.
For instance, on Location30, DMP suffers from a much higher attack TPR of 7.26% and TNR of 23.33%.
This is because the model is trained with limited data (1,500), and the GAN is not able to generate diverse data that are different from the original training data. As a result, the teacher model assigns high confidence to the synthetic data, from which the student model learns to predict the training members with high confidence that eventually leads to high MIA risk.
To validate this, we compare the difference between the prediction confidence on members and non-members by the DMP models. On Location30, the average difference is >30%, and only <5% on the other datasets, which is why DMP exhibits poor privacy protection on Location30.
On the same dataset, HAMP yields a low TPR of 0.89% and TNR of 0.59%, and this trend is consistent across datasets.
In terms of accuracy, DMP suffers from different degrees of accuracy loss that are much higher than HAMP's.
DMP incurs >30% accuracy loss on Purchase100 (as mentioned earlier), ∼12% accuracy drop on Texas100 and CIFAR100, 3.1% on Location30, and 1.2% on CIFAR10 (smaller accuracy loss as CIFAR10 has 10 classes only).
In contrast, HAMP incurs an average accuracy drop of <0.5% (at most 1.1%), which is significantly better than DMP.
§.§ Comparison with SELENA <cit.>
Both SELENA and HAMP achieve similarly strong privacy protection.
On average, HAMP has slightly better membership privacy than SELENA, but neither technique has consistently better membership privacy overall (Fig. <ref>).
The attack TPR of SELENA is 0.53%∼1.72%, with an average of 1.1%, and that of HAMP is 0.4%∼1.2%, with an average of 0.8%.
They are able to reduce the attack TPR by 92% (SELENA) and by 94% (HAMP).
In addition, the attack TNR of SELENA is 0.42%∼3.7%, with an average of 1.7%, and that of HAMP is 0.44%∼0.77%, with an average of 0.6%.
This translates to a TNR reduction of 91% (SELENA) and 97% (HAMP), respectively.
While providing comparable privacy benefits, HAMP outperforms SELENA by having lower accuracy loss, hence providing a better privacy-utility trade off.
The largest accuracy drop by SELENA is 4.4%, while that by HAMP is only 1.1%.
On average, SELENA incurs a 2.25% accuracy drop, while HAMP incurs a much smaller drop of 0.46%.
Moreover, our additional experiment in Section <ref> shows that HAMP continues to outperform SELENA with much lower accuracy drop when evaluated on a variety of different training sizes (2.2%∼5.2% by SELENA and 0.04%∼0.98% by HAMP).
§.§ Comparison with Label Smoothing (LS) <cit.>
Though LS is able to improve model accuracy, the model trained with LS still suffers from high MIA risk.
In contrast, the model trained with HAMP can maintain high model accuracy and exhibit very low MIA risk.
For LS, we follow prior work by Kaya et al. <cit.> to train with different smoothing intensities from 0.01 to 0.995, and select the model with the highest accuracy (we omit CIFAR10 and Location30 as LS did not lead to accuracy improvement).
We first discuss the qualitative difference between LS and , and then quantitatively compare their privacy risk.
While LS and HAMP both use soft labels in their training, they are built for different purposes that require different soft labels.
LS is used as a regularization technique to improve model accuracy, which necessitates training with low-entropy soft labels, and is able to increase the accuracy by 2.4% on average.
However, the resulting model still suffers from high MIA risk, as LS causes the model to overfit on the smooth labels and exhibit discernible behavior on the training samples <cit.>.
In contrast, HAMP is built to improve membership privacy; it consists of high-entropy soft labels, an entropy-based regularizer and a novel testing-time defense to force the model to make less confident predictions and to behave similarly on the training and testing samples.
To quantitatively compare the different soft labels used by both techniques, we measure the soft label entropy in LS and HAMP, and find that the label entropy in HAMP is considerably higher than that in LS: 4x∼50x relative to LS (9x on average).
This contributes to the low membership privacy risk of HAMP, unlike LS.
The average attack TPR on the LS models is 5.1%, 7.1x relative to that of HAMP (on the same datasets).
The attack TNR on LS is 6.3x relative to that of HAMP (we observe a similar trend even when we train LS with other smoothing intensities that have comparable accuracy improvement - see Appendix <ref>).
Moreover, our results reveal that LS may amplify the MIA risk and render the model more vulnerable than the undefended model.
On Texas100, LS increases the attack TPR from 3.87% (on the undefended model) to 5.61%, which increases the MIA risk against training members by 45%.
This suggests that LS may constitute a hidden privacy risk for the practitioners (a similar finding was identified recently by Kaya et al. <cit.>).
On the contrary, HAMP consistently leads to low MIA risk and outperforms LS with significantly better membership privacy.
§.§ Comparison with DP-SGD <cit.>
We use the canonical implementation of DP-SGD using Pytorch Opacus <cit.>.
We first consider a fixed privacy budget ϵ=4 as per Tang et al. <cit.>, and then evaluate DP-SGD with different values of ϵ.
§.§.§ DP-SGD with fixed ϵ=4.
In this setting, the average attack TPR and TNR of the DP-SGD models are 0.36% and 0.3%, respectively, both of which are the lowest among all the defenses we evaluated. In comparison, HAMP yields 0.8% attack TPR and 0.6% TNR, which are slightly higher than DP-SGD.
However, DP-SGD suffers from considerable accuracy loss, with an average loss of 23.84%, while HAMP incurs a significantly smaller loss of 0.46%.
§.§.§ DP-SGD with different ϵ.
We next evaluate DP-SGD by considering different noise_multipliers and clipping norms.
We consider Purchase100, on which we used a noise_multiplier of 1.7 and a clipping norm of 1, for ϵ=4 in the earlier evaluation.
We select different noise_multiplier values of 0.0 (no noise injected), 0.1 (ϵ=12069.1), 0.5 (ϵ=62.5) and 0.9 (ϵ=10.9); and clipping norm values of 1, 5 and 10, totalling 12 different configurations.
We report the results in Fig. <ref>.
Reducing the amount of injected noise and using a larger clipping norm allows DP-SGD to provide empirical privacy protection (but with a very large provable bound of ϵ), and reduce the amount of accuracy loss.
For instance, by using a clipping norm of 10 without injecting any noise, DP-SGD is able to reduce the accuracy loss to be <1%, which can also reduce the attack TPR by 73% (from 14.37% to 3.86%), and the attack TNR by 36% (from 14.62% to 9.36%).
Nevertheless, this performance is still considerably inferior to that of HAMP, which can reduce the attack TPR and TNR by 97.2% and 96.7%, respectively.
Using a tighter clipping norm or injecting more noise can improve the membership privacy even more, but this comes at the cost of accuracy loss (the earlier result has negligible accuracy loss).
For example, by using a small clipping norm of 1, the attack TPR can be reduced to 0.67% and attack TNR to 0.62%.
However, this results in 8.2% accuracy loss.
Increasing the noise_multiplier can further reduce privacy leakage, e.g., a noise_multiplier value of 0.5 can reduce the attack TPR to 0.5% and the attack TNR to 0.49% (with a large ϵ of 62.5), which are comparable to the 0.4% TPR and 0.44% TNR achieved by HAMP.
However, DP-SGD degrades the accuracy by 13.6%, while HAMP incurs negligible accuracy drop.
Therefore, training a model with a small amount of noise or with a tight clipping norm is also a viable defense against MIAs, though it still incurs much larger accuracy loss than HAMP and results in large provable bounds ϵ.
§ DISCUSSION
§.§ Ablation Study
HAMP consists of three components, and we perform a detailed ablation study to investigate the effectiveness of each of them - this includes a total of six configurations. We present the results in Table <ref>.
The second to fourth rows in Table <ref> show the results for models using a single component of HAMP.
For instance, training with high-entropy soft labels alone is able to produce a model with similar accuracy as the undefended model (trained with the one-hot hard labels), and reduce the attack TPR from 14.37% to 4.76%, and attack TNR from 14.62% to 4.22%.
This also validates our earlier observation in Section <ref> that training with one-hot hard labels could lead to high MIA risk, and the proposed high-entropy soft labels can be used to mitigate the high MIA risk.
However, this is not enough as the model still suffers from relatively high TPR and TNR. We observe similar trends in the other two settings where we either train with the entropy-based regularizer alone, or directly perform output modification on the undefended model.
Strengthening the model with more defense components can further reduce the MIA risk while preserving model accuracy. For example, training with high-entropy soft labels and the entropy-based regularizer (fifth row in Table <ref>) achieves a low TPR of 1.86% and a low TNR of 1.07%.
We observe a similar trend even if we change to different configurations, as in the sixth and seventh rows in Table <ref>, both of which exhibit better privacy protection than models equipped with a single component.
Furthermore, we find that the resulting model continues to maintain high model accuracy, which means the different defense components of HAMP can be used together to improve membership privacy without jeopardizing model accuracy.
Finally, the full defense consisting of all three components, as in HAMP, exhibits the best privacy protection while maintaining competitive model accuracy.
§.§ Evaluation on Different Training Sizes
This section reports additional experiments where we vary the size of the training set.
We evaluate six more different sizes on Purchase100, which is the largest dataset in our evaluation and allows us to comprehensively evaluate a wide range of sizes, namely: 2,500, 5,000, 7,500, 10,000, 30,000, 50,000 (up to 20x difference).
We trained 64 shadow models in the LiRA attack for each defense, with over 2,300 different shadow models in total.
Fig. <ref> shows the results.
We find that even when evaluated under a broad range of training sizes, HAMP consistently achieves superior performance on both privacy protection and model utility.
The average attack TPR on the undefended model is 24.7% and attack TNR 22.9%.
MemGuard achieves an average attack TPR of 13% and attack TNR of 17.4%, both of which are significantly higher than the 1.3% and 1.5% achieved by HAMP.
AdvReg incurs an average accuracy loss of 6.3% while HAMP incurs only 0.2%.
HAMP also outperforms AdvReg with better privacy protection: AdvReg reduces the attack TPR by 83% and attack TNR by 76.1%, while HAMP reduces them by 94.8% and 93.4%, respectively.
LS improves the accuracy by 3.2%, but it still suffers from high MIA risk: its attack TPR and TNR are 8x and 4.1x relative to those of HAMP.
Both SELENA and HAMP have similarly strong membership privacy: the average attack TPR is 1.2% on SELENA and 1.3% on HAMP; the attack TNR is 1.4% and 1.5%, respectively.
Under similar privacy protection, HAMP still outperforms SELENA with a much lower accuracy drop.
On average, SELENA degrades the accuracy by 3.97% (up to 5.2%), while HAMP degrades accuracy by only 0.15% (up to 0.98%).
§.§ Evaluation against Data-poisoning-based MIA <cit.>
Recent work by Tramer et al. <cit.> shows that a more capable adversary can significantly amplify the MIA risk through data poisoning.
Therefore, we conduct an additional evaluation on whether HAMP can protect against such a more capable attack.
The Tramer et al. attack increases the membership leakage against target points, by poisoning the training set to transform the target points into outliers.
Each target point is replicated n times with a wrong label, and these replicas are added as the poison samples.
If the target point is a member in the training set, the model will be fooled into believing that the correctly-labeled target point is “mislabeled” (due to the presence of other poisoned replicas), which would have a large influence on the model's output and can be identified by the adversary.
We follow <cit.> to conduct the evaluation on CIFAR10, and select 250 random target points (containing both members and non-members), each replicated 8 times.
We train 128 shadow models, which include a total of 32,000 target points.
Without data poisoning, the adversary achieves 8.23% attack TPR and 10.15% attack TNR on the undefended model.
These are increased to 52.44% and 24.52% after data poisoning, respectively.
Even under such a powerful attack, HAMP is able to reduce the attack TPR from 52.44% to 0.34%, and the attack TNR from 24.52% to 0.71%. Further, HAMP achieves such strong protection with a negligible accuracy drop of 0.6%.
§.§ Limitation
First, HAMP requires re-training and hence incurs additional training overhead.
Nevertheless, re-training is commonly required by many existing defenses <cit.>, and training is a one-time effort prior to deployment.
Further, our evaluation shows that HAMP incurs only a modest training overhead compared with other defenses (see Appendix <ref>).
The second limitation is that HAMP's testing-time defense incurs an overhead in every inference, which may be undesirable for computations that have stringent real-time constraints.
Nevertheless, HAMP incurs a low latency of only 0.04∼0.38ms per inference. In comparison, MemGuard, the other defense that also performs post-processing modification, introduces a latency of 335.42∼391.75ms.
In addition, this process also changes the output scores to be randomized scores, which may affect the usefulness of the output scores. Nevertheless, we try to reduce the impact by ensuring the prediction labels derived from the output scores remain unchanged (all top-k labels), and thus the model accuracy is unaffected. This can still provide meaningful information in the output scores without leaking membership privacy.
Finally, though HAMP empirically provides a superior privacy-utility tradeoff, it does not offer provable guarantees.
This is a limitation common to all practical defenses <cit.>.
Hence, a more capable adversary may mount stronger attacks, such as the data poisoning attack by Tramer et al. <cit.>.
Our preliminary evaluation shows that HAMP still exhibits strong privacy protection and preserves model accuracy even under the presence of such a data-poisoning adversary, but we leave further investigation to future work.
§ RELATED WORK
Membership inference attacks.
Depending on the adversary capabilities, MIAs can be divided into black-box <cit.> and white-box attacks <cit.>.
The former has access only to the output of the target model, while the latter has visibility into information such as the internal model gradients to facilitate membership inference.
Black-box MIA assumes a more realistic adversary, and is hence widely adopted in prior defense studies <cit.> (and in HAMP).
Such attacks can be mounted by either shadow-training <cit.> or computing statistical metrics based on the partial knowledge of the private dataset <cit.>.
Many of those attacks require full or partial access to the output scores by the model, and may be defeated if the model only reveals the prediction label.
This motivates a new class of attacks, called label-only attacks, which can be launched either with <cit.> or without <cit.> partial knowledge of the membership information.
Carlini et al. <cit.> introduce the LiRA attack that can succeed in inferring membership when controlled at low false positive or false negative, through a well-calibrated Gaussian likelihood estimate.
In addition to supervised classification, MIAs have also been explored in other domains, including contrastive learning <cit.>, generative models <cit.>, federated learning <cit.>, graph neural networks <cit.>, and recommender systems <cit.>.
Defenses against membership inference attacks.
These defenses can be divided into provable and practical defenses.
The former can provide rigorous privacy guarantee, such as DP-SGD <cit.>, PATE <cit.>.
Nevertheless, these defenses often incur severe accuracy drop when used with acceptable provable bounds <cit.>.
Another line of practical defenses aims to achieve empirical privacy without severely degrading accuracy.
Common regularization techniques such as dropout <cit.>, weight decay <cit.> are shown to be able to reduce privacy leakage, but with limited effectiveness <cit.>.
Other defenses enforce specific optimization constraint during training to mitigate MIAs <cit.>, or perform output obfuscation <cit.>.
Knowledge distillation is used by different techniques to mitigate MIAs, including PATE <cit.>, DMP <cit.>, SELENA <cit.> and KCD <cit.>.
However, existing defenses are often biased towards either privacy or utility. In contrast, HAMP achieves both strong membership privacy and high accuracy, which offers a much better privacy-utility trade off.
Other privacy attacks.
In addition to membership privacy, common ML models are found to leak different private properties <cit.>.
Model extraction attacks can duplicate the functionality of a proprietary model <cit.>.
Model inversion attacks are capable of inferring critical information in the input features such as genomic information <cit.>.
Property inference attacks are constructed to infer sensitive properties of the training dataset <cit.>.
§ CONCLUSION
This work introduces HAMP, a defense against Membership Inference Attacks (MIAs) that can achieve both high accuracy and membership privacy.
HAMP has two innovations: (1) a training framework that consists of high-entropy soft labels and an entropy-based regularizer; and (2) an output modification defense that uniformly modifies the runtime output.
HAMP significantly constrains the model's overconfidence in predicting training samples, and
forces the model to behave similarly on both members and non-members, thereby thwarting MIAs.
Our evaluation shows that HAMP outperforms seven leading defenses by offering a better trade off between utility and membership privacy.
§ ACKNOWLEDGMENT
This work was funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), and a Four Year Fellowship and a Public Scholar Award from the University of British Columbia (UBC).
§ APPENDIX
§.§ Details of Defense Setup
This section provides details of the defense setup in our evaluation.
For each dataset, we use 10% of the training set as a separate validation set (20% for Location30 as it has a smaller training size), and select the model with the highest validation accuracy.
. The values of entropy threshold γ and α parameter (for controlling the regularizer) are given in Table <ref>.
For model training on the two image datasets, in addition to the requirement of yielding high validation accuracy, we empirically set an additional condition that the model needs to gain at least 1% improvement on validation accuracy in order to be considered the best model.
This is to prevent the model gaining a marginal improvement on validation accuracy at the cost of significant overfitting on training samples, which could result in a large generalization gap.
Adversarial regularization <cit.>: The alpha parameter balances classification accuracy and privacy protection. We set alpha to 3 for Purchase100 <cit.>, 10 for Texas100 <cit.>, 6 for CIFAR100 and CIFAR10 <cit.>, and 10 for Location30.
SELENA <cit.>: We follow the original authors to set K=25 and L=10, where K is the total number of sub models, and L means for a given training sample, there are L sub models whose training sets do not contain that particular sample.
For these L models, the given training sample can be viewed as an instance in their “reference set” for distillation.
Label Smoothing (LS) <cit.>: We follow <cit.> to train LS with different smoothing intensities and select the model with the highest accuracy.
Purchase100 is trained with a smoothing intensity of 0.03, Texas100 with 0.09, and CIFAR100 with 0.01.
DP-SGD<cit.>: We use PyTorch Opacus <cit.> to train the DP-SGD model.
We set the microbatch size to 1.
Purchase100 is trained with a noise_multiplier of 1.7, a norm clipping bound of 1.0 and with 200 epochs.
Texas100 is trained with a noise_multiplier of 1.44, a norm clipping bound of 1.0 and with 200 epochs.
Location30 is trained with a noise_multiplier of 2.91, a norm clipping bound of 3.0 and with 50 epochs.
§.§ Measuring Prediction Entropy by
As mentioned in Section <ref>, reduces privacy leakage from output scores by forcing the model to predict training samples with higher entropy (i.e., less confident predictions on training samples).
We validate this by measuring the prediction entropy produced by the models before and after , and report the results in Table <ref>.
On the undefended models, the member samples are predicted with much lower entropy than that on non-members, and the entropy difference between members and non-members is 0.125∼0.343.
Such a large difference indicates the differential behavior on members and non-members that can be distinguished by the MIAs.
In contrast, the models trained with predict both members and non-members with much higher prediction entropy (an increase of 4.1x∼19.8x), and the average difference between members and non-members is reduced from 0.125∼0.337 (on undefended models) to 0.006∼0.058, which is 6.5x∼32.7x smaller.
This demonstrates how forces the model to behave similarly on members and non-members and therefore reduces privacy leakage.
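For reference, the entropy values in Table <ref> can be reproduced with a routine of the following shape. This is a sketch using the natural-log Shannon entropy of the softmax outputs (the exact normalization used in the table is an assumption here), not the evaluation code itself.

    import numpy as np

    def mean_prediction_entropy(softmax_outputs):
        # Average Shannon entropy of the model's softmax outputs over a sample set;
        # computed separately for members and non-members to obtain the gap discussed above.
        p = np.clip(np.asarray(softmax_outputs), 1e-12, 1.0)
        return float(np.mean(-np.sum(p * np.log(p), axis=1)))

    # entropy_gap = mean_prediction_entropy(member_preds) - mean_prediction_entropy(nonmember_preds)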
§.§ Evaluating Label Smoothing with Different Smoothing Intensities
In Section <ref>, we compare with LS using the smoothing intensity that achieves the highest accuracy, and we found that achieves significantly lower MIA risk than LS.
In this section, we evaluate LS with other intensities that achieve similar accuracy improvement.
On Purchase100, we select a smoothing intensity of 0.03, which yields the highest accuracy improvement of 4.75%, and we consider all seven other intensities that achieve comparable accuracy improvement (3.8%∼4.4%).
Fig. <ref> presents the results, which show that LS models trained with different intensities still exhibit very high MIA risk.
For example, the attack TPR @ 0.1% FPR by LS are 13.7x∼15.5x higher than that of , and the attack TNR are 8.2x∼12.4x higher than that of .
§.§ Comparison with Early Stopping
Early stopping produces models trained with fewer epochs to prevent overfitting.
In our evaluation, we benchmark the classification accuracy and attack TPR/TNR of the models trained with different epochs before convergence, and compare them with .
The results are shown in Fig. <ref>.
When the model is trained with only a few epochs in early stopping, it is able to achieve privacy protection comparable to , but with a large accuracy drop.
For example, on Purchase100, the model trained with 15 epochs yields an attack TPR of 0.67% and attack TNR of 1.01%, which are slightly higher than the 0.4% and 0.44% by .
However, its prediction accuracy is only 68.6%, which is much lower than the 81.15% achieved by .
The model's accuracy improves with more training epochs, but then so do the attack TPR and TNR.
When the models derived by early stopping converge, there is a substantial gap between the attack TPR and TNR of and early stopping (black dashed line vs. red solid line in Fig. <ref>).
To summarize, under a similar MIA risk for members (i.e., similar attack TPR), achieves an average 12.5% higher accuracy than early stopping; and 28.6% higher accuracy than early stopping when under similar attack TNR.
§.§ Varying the Parameters of
This section evaluates the performance of under different parameters: γ∈ (0.1, 0.9) and α∈ (0.0001, 0.5).
We use Purchase100 and present the results in Table <ref>.
Entropy threshold.
A higher entropy threshold assigns a lower probability to the ground-truth class in the labels and forces the model to become less confident in predicting training samples.
For instance, for the entropy threshold of 0.9, the probability of the ground-truth class is only 20%,
while with a threshold of 0.1, the probability is 94%.
Table <ref> shows that a higher entropy threshold leads to a model with lower classification accuracy and also lower MIA risk (on both attack TPR and attack TNR).
The highest entropy threshold, 0.9, produces a model with the lowest test accuracy of 66.7%, the lowest attack TPR of 0.38%, and the lowest attack TNR of 0.26%.
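One construction consistent with the numbers above (an assumption, not necessarily the exact recipe used by ) is to place probability p on the ground-truth class, spread the remaining mass uniformly over the other classes, and choose p so that the label's normalized entropy equals γ. A short sketch for 100 classes, as in Purchase100:

    import math

    def soft_label(num_classes, gamma, true_class):
        # Solve by bisection for the ground-truth probability whose normalized
        # label entropy equals gamma, then build the corresponding soft label.
        def norm_entropy(p_gt):
            rest = (1.0 - p_gt) / (num_classes - 1)
            h = -p_gt * math.log(p_gt) - (num_classes - 1) * rest * math.log(rest)
            return h / math.log(num_classes)
        lo, hi = 1.0 / num_classes, 1.0 - 1e-6
        for _ in range(100):
            mid = (lo + hi) / 2
            if norm_entropy(mid) > gamma:   # entropy too high: raise the ground-truth mass
                lo = mid
            else:
                hi = mid
        p_gt = (lo + hi) / 2
        label = [(1.0 - p_gt) / (num_classes - 1)] * num_classes
        label[true_class] = p_gt
        return label

    # soft_label(100, 0.9, 0) puts roughly 0.20 on the true class; gamma = 0.1 gives roughly 0.94.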
Strength of regularization.
Stronger entropy-based regularization forces the model to produce outputs with higher uncertainty (uncertainty is measured by the prediction entropy), and is useful in preventing the model's overconfidence in predicting training samples.
The model exhibits strong resistance against MIAs when α is large (e.g., 0.05).
On the other hand, strong regularization results in a model with low classification accuracy.
This is because, when α is large, the overall loss term in objective (<ref>) is dominated by the second regularization term, while the first loss term for improving classification accuracy is not optimized sufficiently.
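As a rough, hypothetical illustration of this interplay (the actual objective is the one in (<ref>) and is not reproduced here), the loss can be thought of as a classification term on the soft labels plus an α-weighted term that rewards high output entropy; when α is large the second term dominates:

    import torch
    import torch.nn.functional as F

    def training_loss(logits, soft_labels, alpha):
        log_probs = F.log_softmax(logits, dim=1)
        # Classification term: cross-entropy against the high-entropy soft labels.
        ce = -(soft_labels * log_probs).sum(dim=1).mean()
        # Entropy term (hypothetical form): penalizes confident, low-entropy outputs.
        probs = log_probs.exp()
        entropy = -(probs * log_probs).sum(dim=1).mean()
        return ce - alpha * entropy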
§.§ Overhead Evaluation
Training overhead.
We compare the training overhead of with AdvReg, SELENA, LS and DMP.
We do not compare training overhead with MemGuard as it is a post-processing technique that modifies the prediction vector during inference. Instead, we compare with its inference overhead.
For Purchase100, Texas100, CIFAR100, CIFAR10 and Location30,
the undefended models and the sub models in SELENA are trained with 100, 20, 100, 100 and 50 epochs;
For knowledge distillation in DMP and SELENA, we use 200, 100, 200, 200 and 100 epochs.
LS and are trained with 200, 100, 200, 200 and 100 epochs.
AdvReg is trained with 50, 20, 200, 200 and 50 epochs, respectively.
All models converged after training.
The overhead is measured on a single NVidia V100SXM2 GPU with 16 GB memory.
Each measurement is repeated 5 times and we report the average overhead.
The training overhead of each defense is shown in Table <ref>.
All defense techniques incur a higher training cost compared with the undefended models (as expected), and LS incurs the lowest training overhead among all the defenses (is slightly higher than LS).
AdvReg's overhead is 5.4x∼11.4x relative to that of , and DMP's overhead is 3.2x∼5.6x relative to that of .
SELENA's overhead is 4x∼8.8x relative to that of .
Even though the latency of training multiple sub models in SELENA can be hidden by parallel training, its overhead is still 23%∼66% higher than that of .
Inference overhead.
We compare with MemGuard on their inference overhead (other defenses do not have a post-processing procedure, and hence their inference overheads are the same as the undefended model's).
For , the generation of random samples is independent of the runtime inference, so we first generate the random samples and obtain their output scores, and measure only the overhead of performing output modification (i.e., Line 13 in Algorithm <ref>).
We measure the inference overhead by performing inference on 500 random member and non-member samples (1,000 samples in total).
Table <ref> shows the average inference overhead per sample.
The overhead incurred by MemGuard is 25x∼1048x the overhead incurred by . This is because MemGuard requires solving a complex optimization problem to obfuscate the prediction scores, while only performs output modification on the prediction scores (Line 13 in Algorithm <ref>), which does not require solving any optimization.
§.§ Understanding the High Attack Performance by the NN-based Attack <cit.>
Fig. <ref> in our earlier evaluation shows that the NN-based attack <cit.> achieves the highest TPR with low FPR on the undefended models in many cases.
We explain the reason.
The NN attack trains an attack inference model on the known member and non-member samples, which outputs large values on members and small ones on non-members.
We first plot in Fig. <ref> the output distribution by the attack inference model to help understand how different thresholds affect the attack TPR and FPR.
The default NN attack uses a threshold of 0.5 and predicts any sample with an output >0.5 as a member.
As shown in Fig. <ref>, in order to maintain a low FPR, the attack switches to a larger threshold (as high as over 0.99 in our experiment).
In this case, low FPR can be achieved because most non-members are predicted with low values (the left region in Fig. <ref>).
Likewise, the attack achieves high TPR, because many members are predicted with large values (in the right most region in Fig. <ref>), and are correctly recognized as members.
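For concreteness, the threshold calibration described above can be sketched as follows (a minimal illustration, not the attack authors' code): sweep thresholds over the attack model's outputs, keep the smallest threshold whose FPR on known non-members stays within the target, and report the TPR on known members at that threshold.

    import numpy as np

    def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.001):
        member_scores = np.asarray(member_scores)
        nonmember_scores = np.asarray(nonmember_scores)
        # Candidate thresholds, from most to least conservative.
        thresholds = np.sort(np.concatenate([member_scores, nonmember_scores]))[::-1]
        best_t, best_tpr = np.inf, 0.0
        for t in thresholds:
            fpr = np.mean(nonmember_scores >= t)
            if fpr > target_fpr:
                break
            best_t = t
            best_tpr = np.mean(member_scores >= t)
        return best_t, best_tpr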
§.§ Evaluation on Different Network Architectures
This section reports additional evaluation on models trained with different network architectures (using CIFAR10), including DenseNet-12 <cit.>, ResNet-18 <cit.>, MobileNet <cit.>, ShuffleNet <cit.>.
The results are shown in Fig. <ref>.
We find that models trained with different architectures exhibit disparate degrees of MIA risk, with the attack TPR @0.1% FPR being 6.47%∼30%, and the attack TNR @0.1% FNR 10.15%∼ 31.12%.
This gives an average attack TPR of 16.29% and attack TNR of 18.75%.
is able to consistently reduce the MIA risk, with the attack TPR on being 0.52%∼0.92% and the attack TNR 0.31%∼0.77%.
On average, reduces the attack TPR by 95.6% (from 16.29% to 0.72%) and the attack TNR by 97.5% (from 18.75% to 0.47%).
Further, achieves such strong privacy protection with only a minor accuracy drop of 0.59% (at most 1.28%).
§.§ Detailed Attack AUC comparison
In Section <ref>, we report the average attack AUC on each defense in Fig. <ref>, and we provide the detailed results on each dataset in Fig. <ref>.
§.§ Full ROC Curves
The full ROC curves from the evaluation in Section <ref> can be found in Fig. <ref>.
§.§ Detailed Results for Each Attack
In Section <ref>, we reported the highest attack results among all evaluated attacks.
We now provide the detailed results for each attack for completeness (the correctness-based attack is omitted as it does not work when calibrated at a low FPR or FNR), and they can be found in Table <ref>.
Our results also find that label-only attacks are unsuccessful in inferring members and non-members when controlled at low FPR and FNR regime - this is also a known issue found on many score-based attacks by Carlini et al. <cit.>.
We use the boundary-based attack <cit.> as an example to illustrate.
We first plot the perturbation distance on the members and non-members in Fig. <ref>.
As shown, though perturbing the training members requires larger perturbations than perturbing the testing samples, the distance is not well separated enough to be calibrated for inferring members with a low false positive rate (hence the low 0.12% TPR @0.1% FPR), or for inferring non-members with a low false negative rate (hence the low 0.1% TNR @0.1% FNR).
|
http://arxiv.org/abs/2307.01292v1
|
20230703185347
|
Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems
|
[
"Debopam Sanyal",
"Jui-Tse Hung",
"Manav Agrawal",
"Prahlad Jasti",
"Shahab Nikkhoo",
"Somesh Jha",
"Tianhao Wang",
"Sibin Mohan",
"Alexey Tumanov"
] |
cs.CR
|
[
"cs.CR",
"cs.AI",
"cs.LG"
] |
Pareto-Secure Machine Learning (): Fingerprinting and Securing Inference Serving Systems
Debopam Sanyal
Georgia Tech
Jui-Tse Hung
Georgia Tech
Manav Agrawal
Georgia Tech
Prahlad Jasti
Georgia Tech
Shahab Nikkhoo
UC Riverside
Somesh Jha
UW Madison
Tianhao Wang
University of Virginia
Sibin Mohan
The George Washington University
Alexey Tumanov
Georgia Tech
With the emergence of large foundational models, model-serving systems are becoming popular. In such a system, users send the queries to the server and specify the desired performance metrics (e.g., accuracy, latency, ). The server maintains a set of models (model zoo) in the back-end and serves the queries based on the specified metrics.
This paper examines the security, specifically robustness against model extraction attacks, of such systems. Existing black-box attacks cannot be directly applied to extract a victim model, as models hide among the model zoo behind the inference serving interface, and attackers cannot identify which model is being used. An intermediate step is required to ensure that every input query gets the output from the victim model.
To this end, we propose a query-efficient fingerprinting algorithm to enable the attacker to trigger any desired model consistently. We show that by using our fingerprinting algorithm, model extraction can have fidelity and accuracy scores within 1% of the scores obtained if attacking in a single-model setting and up to 14.6% gain in accuracy and up to 7.7% gain in fidelity compared to the naive attack.
Finally, we counter the proposed attack with a noise-based defense mechanism that thwarts fingerprinting by adding noise to the specified performance metrics.
Our defense strategy reduces the attack's accuracy and fidelity by up to 9.8% and 4.8%, respectively (on medium-sized model extraction).
We show that the proposed defense induces a fundamental trade-off between the level of protection and system goodput, achieving configurable and significant victim model extraction protection while maintaining acceptable goodput (>80%).
We provide anonymous access to our code. [Two links: https://anonymous.4open.science/r/psml-1057/README.mdPSML Repo and https://anonymous.4open.science/r/clockwork-ppml-C1CE/README.mdModified Clockwork Repo]
§ INTRODUCTION
In recent years, the deployment of inference serving systems <cit.> to serve machine learning models in modern web applications has witnessed a significant surge, enabling efficient and scalable model predictions in applications.
The main benefit of using them is that they abstract away the burden of choosing the suitable model from the zoo of models for each client. These systems make model serving flexible, as the appropriate model can be served based on the client's requirements, such as accuracy, latency, memory, and power consumption. However, this proliferation raises significant concerns regarding the privacy and security of these ML serving systems. Of particular concern is the fact that the model owner stores proprietary models within the system, potentially exposing valuable intellectual property. Furthermore, these model zoos can contain hundreds or thousands of vulnerable models, each specialized for a different latency budget or deployment scenario.
Black-box attacks <cit.> pose a substantial threat to the privacy and security of inference serving systems. These attacks allow adversaries to exploit vulnerabilities within the system and extract sensitive information, including the victim model's functionality, training data, architecture, and parameters. Surprisingly, despite the increasing adoption of inference serving systems, practical demonstrations of black-box attacks on these systems are not as common. Current research in the field often assumes the adversary's ability to route all queries to the victim model by explicitly specifying it, which is an outdated and impractical assumption for state-of-the-art inference serving systems <cit.> that prioritize flexibility to cater to diverse user requirements. In inference serving systems where the model is specified by the user <cit.>, there is a significant under-utilization of resources due to the developers' lack of knowledge and the dynamic nature of the system <cit.>. Consequently, a pressing need arises to explore effective mechanisms that adapt existing black-box attacks, such as model extraction, to model-less inference serving systems.
This paper demonstrates the need for fingerprinting techniques to enable practical black-box attacks on model-less inference serving systems. To the best of our knowledge, this work is the first to showcase the viability of such attacks on inference serving systems without requiring explicit model specification by the user. We emphasize model extraction, where the adversary's motivation revolves around stealing highly valuable models stored within the system. Our research introduces a novel and query-efficient fingerprinting-based attack that achieves comparable performance on an inference serving system, in terms of accuracy and fidelity, to traditional black-box attacks conducted in a single-model setting. Our fingerprinting algorithm enables black-box attacks to have accuracy and fidelity within 1% of the corresponding values in the single-model setting while spending the same number of queries. We show up to a 14.6% gain in accuracy and up to a 7.7% gain in fidelity over the naive attack without fingerprinting while spending only 4000 queries.
Additionally, we propose a novel defense approach to counter our attack. The defense mechanism is based on the introduction of noise to the performance specifications of queries (throughout the paper, we focus on the most frequently-used metrics, model accuracy and latency, although there are other metrics; our attack and defense can be extended to other metrics), effectively disrupting the fingerprinting process employed by the adversary. Our defense strategy reduces the accuracy and fidelity of the attack by up to 9.8% and 4.8%, respectively, compared to the scores obtained with our fingerprinting algorithm. It is especially successful in protecting medium-sized victim models. It is important to note that a trade-off exists between our defense's effectiveness and the system's service level objective (SLO), measured by goodput. Despite this trade-off, we demonstrate that our defense can give significant protection while maintaining acceptable goodput values (>80%). fig:overview captures a high-level conceptual overview.
We instantiate the proposed attack and defense mechanisms in our Pareto-Secure Machine Learning system (), integrating it with a state-of-the-art inference serving system <cit.>. builds on the following principal contributions:
* A generic and query-efficient fingerprinting algorithm that enables practical black-box machine learning attacks on model-less inference serving systems.
* A noise-based defense strategy that effectively reduces the success of fingerprinting-based attacks on inference serving systems.
* Configurable levels of defense, inducing a trade-off space between system goodput and level of protection, i.e., the ability to achieve robust protection while maintaining reasonable system performance.
§ BACKGROUND AND RELATED WORK
§.§ Black-box Attacks during Inference
Black-box attacks are attacks where the adversary lacks knowledge of the victim model's parameters, architecture, or training data. Such attacks are usually carried out on Machine Learning as a Service (MLaaS) systems. In every query, the user typically submits an input and receives either a prediction vector or a class label from an already trained model hosted in the cloud. Most of these attacks are carried out during inference and, thus, on inference serving systems. Such attacks aim to obtain information not meant to be disclosed, such as the training data or details about the model. These attacks assume that the outputs received by the adversary are all from the victim model; hence, we call them single-model attacks in this work. Model extraction is one such class of black-box attacks.
§.§.§ Model Extraction
The adversary's goal in model extraction is to replicate the functionality of the victim model by creating an extracted model <cit.>. It leverages the ability to query the victim model and observe its outputs, which are utilized to train the extracted model. Task accuracy attacks involve creating a model that matches the victim model's accuracy on a test set derived from the input data distribution. Fidelity attacks, however, aim to maximize the similarity between the victim and extracted models on the test set. Fidelity can be defined as the ratio of points in the test set on which both the victim and the extracted models have the same output labels. Attackers with problem domain data require fewer queries, and access to output labels alone is adequate for them to extract a model. In both scenarios, the adversary aims for efficiency, striving to minimize the number of queries used. While some works assume knowledge of the target model architecture, it is not strictly essential. The attack described in <cit.> is on inference serving systems, but it assumes that the adversary can access the victim model repeatedly from the model zoo.
A wide range of defenses against single-model extraction has been proposed. One way is to perturb the output of models ( <cit.>), but it's not a viable option for the system as a legitimate client also receives perturbed outputs. Another way is to detect malicious clients ( <cit.>), but these methods assume that adversarial queries have small l_2 distances between them and are a mix of natural and synthetic queries. This assumption does not hold in the MixMatch-based <cit.> extraction attack we use for our experiments <cit.>. Watermarking ( <cit.>) is yet another defense method against model extraction. It embeds a secret pattern in the model during inference or training, but it requires post hoc analysis and the model owner to have access to the extracted model.
§.§ Inference-Serving Systems
TensorFlow Serving <cit.> was one of the first dedicated ML serving systems, although it was limited to models in the TensorFlow framework. Clipper <cit.> was later developed to use general frameworks and make it modular for anyone to deploy a model. It utilizes techniques such as caching and adaptive batch sizes to speed up queries. Amazon Sagemaker <cit.> and NVIDIA Triton Inference Serving <cit.> were some of the first publicly released serving systems officially offering inference as a service to satisfy business use cases. These systems have the advantage of large infrastructures backed by Amazon and NVIDIA, as Sagemaker autoscales based on the inference load, and Triton optimizes inference on NVIDIA GPUs.
All of these systems require that the user specifies the model used for inference, which may not satisfy all use cases. Black-box attacks are possible in these inference serving systems because the client gets to choose the victim model to serve all its queries. InferLine <cit.> provides serving for a pipeline of models by planning resource allocation for each model and tuning as necessary during execution. A prominent problem is a lack of developer understanding of the trade-off space of accuracy/latency among variants of a model, such as the ResNet family <cit.>. Therefore, instead of having the developer specify the model to query, model-less systems that query from model zoos have arisen.
INFaaS <cit.> generates variants for every model deployed to its zoo during the profiling process, and it navigates the trade-off space of these variants on the user's behalf. The clients simply provide the latency and accuracy bounds with their inputs in INFaaS.
However, many of these systems do not take advantage of the predictability of inference latency for a model. Only recently has this attribute been studied, by systems such as <cit.> and iGniter <cit.>. achieves predictability by discarding queries that take too long, as well as by reducing the choice of resource allocation and program execution at the hardware, OS, and application level in order to reduce variability in latency. iGniter utilizes a proactive strategy to mitigate interference between GPUs in order to remove a source of variability. This determinism allows for a model zoo to be represented as a Pareto frontier when plotted by its latency and accuracy, exhibiting the positive correlation between these two attributes in ML models. However, both of these systems require clients to specify the models they want along with the input.
Secure inference serving systems have been proposed to protect against malicious clients <cit.>. These works employ secure cryptographic protocols to protect against attacks that try to steal private inputs and parameters of the model. We consider an adversary (sec:threat:adversary) that is only interested in extracting the functionality of a model and not its weights or training data.
§.§.§ Model Zoo, Pareto Frontier and Feasibility Set
Models that are pre-trained for a specific task like image recognition can be coalesced into a repository known as a model zoo. For our purposes, it will be used to provide possible model selections for to serve when performing inference. If we plot each model in the model zoo as a point on a scatter plot, such that the x-axis is the model's inference latency and the y-axis is its accuracy, we can select a subset of points that are preferred to the other points. This subset constitutes a Pareto frontier. It is a subset of points such that no point in the subset is strictly better than any other point when plotted against its chosen attributes. In our context, it means that no model will have both a lower latency and a higher accuracy compared to any other model in the Pareto frontier.
(Pareto frontier)
Let a_m and l_m be the accuracy and latency values of any model m.
For any (a_i,l_i),(a_j,l_j)∈ℝ^2, the Pareto frontier 𝒫 is:
𝒫 = {(a_i,l_i) | {(a_j,l_j) | (a_j>a_i) ∧ (l_j<l_i)} = ∅},
where
(a_i,l_i)≠(a_j,l_j) and i∈{1,…,n} with n being the total number of models in the Pareto frontier. Thus, 𝒫⊆ℝ^n× 2. Because it is better to minimize latency and maximize accuracy, the Pareto frontier will form at the top left of the region of points representing the model zoo, as shown in fig:eval:mzoo. The Pareto frontier will serve as the backbone of the fingerprinting algorithm, as it provides the adversary a path of traversal across the model zoo.
The feasibility set of a query is the set of models that satisfy the latency and accuracy specifications of this query. It is a subset of the Pareto frontier of the model zoo. The manner in which the serving system makes a model selection from the feasibility set, such as aggregation, random selection, or round-robin, provided that the set is non-empty, is known as the system's policy. If the accuracy and latency specifications of a query are acc and lat, respectively, then the feasibility set is ℱ = {(a_i,l_i)∈𝒫 | (a_i≥ acc) ∧ (l_i≤ lat)}.
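A minimal sketch of these two definitions (illustrative only; the serving system's actual implementation is described later), with each model represented as an (accuracy, latency) pair:

    def pareto_frontier(models):
        # Keep a model only if no other model has strictly higher accuracy and lower latency.
        return [
            (a_i, l_i) for (a_i, l_i) in models
            if not any(a_j > a_i and l_j < l_i for (a_j, l_j) in models)
        ]

    def feasibility_set(frontier, acc, lat):
        # Frontier models that meet the query's accuracy floor and latency ceiling.
        return [(a, l) for (a, l) in frontier if a >= acc and l <= lat]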
§ THREAT MODEL AND MOTIVATION
§.§ System Model
We consider an inference serving scenario where models are loaded in the system like in <cit.>, and it is the responsibility of the inference serving system to select a model for each query that satisfies the accuracy and latency specified by the query, like in INFaaS <cit.>.
To demonstrate that single-model attacks against a victim model from a model zoo work in the simplest inference serving scenario, we make the following assumptions:
* Single client: There is only one client using the inference serving system. The client is the adversary.
* Closed loop: The client only sends a single inference query at a time, and the client waits till the response for the current query comes back before sending the next query.
* Cross-hair interface: The model serving endpoint accepts inference queries with specified minimum accuracy requirement a_req and maximum latency l_req requirement. A model
will be randomly selected from the feasibility set defined by a_req and l_req. If the feasibility set is empty, an “infeasible set” error will be returned to the client. Please note that the feasibility set is a subset of the Pareto frontier of the model zoo. This means the system serves models exclusively from the Pareto frontier like in <cit.>. A default value of 0 is set if an accuracy requirement is not provided.
* Granularity and boundary: We assume there is a granularity for accuracy (a_gran) and a granularity for latency (l_gran) that the inference serving system keeps track of beyond which adjacent values are indistinguishable. a_gran is less than the minimum difference between the accuracy values of any two consecutive models on the Pareto frontier. Similarly, l_gran is less than the minimum difference between the latency values of any two consecutive models on the Pareto frontier. For instance, the inference serving system might only keep track of accuracy to 0.1% and latency to 1 millisecond. Additionally, we assume that there is an upper bound for latency l_up_bound in the system.
We make the above assumptions for ease of experimentation because our goal here is to show the practicability of a black-box attack on an inference serving system and not to show how complex such systems can become. Both our attack and defense will work even without the above assumptions. As long as clients can query the system and the system picks models by navigating a trade-off space, the attack, and defense are viable.
§.§ Adversary Model
Adversary Goal: To extract the most accurate model from the model zoo, given any latency budget and a query budget. It does not know the accuracy or latency specifications needed to target this model. The latency budget L is associated with every query. The adversary will try its best not to violate this budget for as many queries as possible so that it doesn't incur additional costs per query.
Type of Attack: This is an end-user attack because the adversary disguises itself as any other client trying to query the inference serving system for getting predictions on its input, using the cross-hair interface. We assume that the adversary does not know the accuracy or latency values of any model in the model zoo. We also assume that the adversary does not know the number of models present in the Pareto frontier of the model zoo.
Information Leakage: The critical information that gets leaked is the accuracy and latency of each query along with the prediction made by the selected model. The (accuracy, latency) information can be seen as auxiliary information acquired by the adversary. It is the crucial information that enables the attacker to learn more about the Pareto frontier of the model zoo with every query it sends to the system.
§.§ Naively Attacking the Model Zoo Inference Serving System
The MixMatch attack can be used without any modification to extract a model from a model zoo as per the adversary goal. The client has to simply specify its latency budget to the inference serving system. Since it doesn't know the accuracy of its victim model, it doesn't provide an accuracy specification (i.e., the default value 0 is selected). We conducted the MixMatch attack on a model zoo ( fig:eval:mzoo) that we trained on the CIFAR-10 dataset and separately ran experiments for the single-model setting as well, where the only model is the victim model.
§.§ Proposed Attack
The intermediate step seeks to determine the accuracy and latency specifications that will reliably trigger the victim model for every query. The step must run in as few queries as possible because the adversary has a query budget. We introduce a fingerprinting algorithm that allows an adversary to obtain the accuracy and latency of all the models in the Pareto frontier of a model zoo while expending a reasonably low number of inference queries. Fingerprinting is done for all models because the adversary's latency budget may change (, the victim model changes). We seek to determine an efficient algorithm that traverses the Pareto frontier to gain precise accuracy and latency values for each model. After this step is complete, the adversary uses the accuracy and latency values of the victim model as specifications for its queries for the subsequent labeling process in the MixMatch attack. In short, this intermediate step intends to invert the obfuscation that comes with a lack of model architecture specification in the query.
§.§ Proposed Defense
Since our primary goal is to stop single-model attacks on the model zoo, we concentrate on obscuring the Pareto frontier during inference serving so that the auxiliary information is less useful to the attacker. Since the inference serving system only serves points from the Pareto frontier, we only want to protect the models on the Pareto frontier, not those under it. To this end, we employ a Laplace noise addition mechanism that changes the feasibility set for every query by adding noise to the accuracy and latency specifications of the query. The mechanism results in a modified feasibility set which offers more protection by potentially including a model that was not in the original feasibility set or by excluding a model that was in the original feasibility set. This technique comes at the cost of utility as the initial accuracy and latency specifications may not be satisfied.
A defense should increase security while not causing harm to legitimate clients. Therefore, it is useful to measure the drop in performance with increased security. We will measure the system's performance using the goodput. In our context, it is defined as the ratio of successful queries served by the inference serving system while satisfying the accuracy and latency specifications of the queries. With the defense, some models that are served will violate either the accuracy specification or latency specification, or both. This is because our noise addition mechanism modifies the feasibility set for every query. While the noise addition mechanism decreases the accuracy and fidelity of the extracted model, it inevitably reduces the goodput of the inference serving system.
Injecting noise to achieve privacy goes back to the early Differential Privacy (DP) works <cit.>. Since then, noise-based protection mechanisms have been increasingly used in diverse applications like inference <cit.>, feature extraction <cit.>, cloud <cit.>, and systems <cit.>.
§ ATTACK ON INFERENCE SERVING SYSTEMS
§.§ The Fingerprinting Problem
Given a model serving endpoint, an adversary wants to find the accuracy and latency profile for every model on the Pareto frontier. The adversary can leverage this information to send inference queries that consistently trigger the victim model in the model zoo. Therefore, the fingerprinting problem can be defined as simply determining the Pareto frontier 𝒫 (Eq. <ref>), which is unknown to the adversary.
We introduce two functions: f_acc^L(𝒫):ℝ^n× 2→ℝ and f_lat^L(𝒫):ℝ^n× 2→ℝ, that return the accuracy and latency of the victim model, respectively, given the latency budget L.
After the fingerprinting step, the adversary has the accuracy and latency values of all the models in the Pareto frontier. Next, it has to select the victim model based on these values and its latency budget L. In Algorithm <ref>, we fingerprint every model, regardless of the adversary's latency budget, so that the adversary doesn't have to fingerprint the Pareto frontier again. Even if the adversary's latency budget changes (increases or decreases) in the future, it can just pick the new victim model based on the values obtained from the fingerprinting step. However, the flexibility offered by fingerprinting every model comes at a price. Because the adversary violates its latency budget for some queries during fingerprinting, it incurs an extra cost for these queries.
The adversary picks row k in the Pareto frontier matrix 𝒫, which has the largest latency value, l_k, below its latency budget L. Thus, f_lat^L(𝒫) =l_k. Then, it selects the accuracy value a_k from row k.
Thus, f_acc^L(𝒫) = a_k. The obtained pair of values (a_k, l_k) serve as the adversary's accuracy and latency specifications of every subsequent query to the inference serving system. Therefore, the latency budget is strictly obeyed by the adversary after fingerprinting. Our defense mechanism in sec:defense uses f_acc^L(𝒫) and f_lat^L(𝒫).
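A short sketch of this victim-selection step, assuming the fingerprinting step (next subsection) returns a list of (accuracy, latency) pairs:

    def pick_victim(identified_models, latency_budget):
        # Choose the fingerprinted model with the largest latency below the budget;
        # its (accuracy, latency) pair becomes the specification of every subsequent query.
        feasible = [(a, l) for (a, l) in identified_models if l <= latency_budget]
        return max(feasible, key=lambda m: m[1])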
§.§ Fingerprinting Algorithm
The intuition behind our novel fingerprinting algorithm is inspired by binary search. However, instead of finding the position of the models, we want to find the precise accuracy and latency values of the models. We take advantage of the fact that the system serves models exclusively from the Pareto frontier (sec:threat:system). Due to the accuracy-latency correlation in the Pareto frontier, the search space is already sorted. We use this path for traversing the search space in our algorithm.
First, given an inference serving endpoint, we perform a binary search to find the accuracy of the most accurate model on the Pareto frontier in the model zoo. To do this, we first send a query with (a_req, l_req) = (0.5, ∞). If the query gets a response without error, it means there is at least one model in the model zoo with a_model >= 0.5, and we send another query with (a_req, l_req) = (0.75, ∞). On the other hand, if the query gets an error in response indicating that no model in the model zoo satisfies the requirement, we send another query with (a_req, l_req) = (0.25, ∞). We stop the binary search process when we find a_max such that querying with (a_req, l_req) = (a_max, ∞) gets a successful response and querying with (a_req, l_req) = (a_max + a_gran, ∞) gets an error.
Secondly, we use binary search to find the latency of the most accurate model on the Pareto frontier in the model zoo. We first send a query with (a_req, l_req) = (a_max, 1/2× l_up_bound). If the query gets a response without error, it means the most accurate model in the model zoo has latency greater than 1/2× l_up_bound, and we send another query with (a_req, l_req) = (a_max, 3/4× l_up_bound). On the other hand, if the query gets an error in response indicating that the most accurate model has latency less than 1/2× l_up_bound, we send another query with (a_req, l_req) = (a_max, 1/4× l_up_bound). We stop the binary search process when we find l_max such that querying with (a_req, l_req) = (a_max, l_max) gets a successful response and querying with (a_req, l_req) = (a_max, l_max - l_gran) gets an error.
Next, in a similar manner, we perform a binary search to find the accuracy and latency of the second most accurate model in the model zoo within the boundary 0 ≤ a < a_max and 0 ≤ l < l_max. Then, we adjust the boundary and find the third most accurate model. The process continues till we find the accuracy and latency of all models in the model zoo. We return the result as a 2D matrix, identified_models, where the first column is f_acc(𝒫) and the second column is f_lat(𝒫). Algorithm <ref> describes the fingerprinting algorithm in pseudo-code.
We measure the efficiency of the fingerprinting algorithm in terms of query complexity, i.e., the total number of queries an adversary needs to expend to get the accuracy and latency of all models in the Pareto frontier. The worst-case query complexity of the fingerprinting algorithm is O(n(log(1/a_gran) + log(l_up_bound/l_gran))), where n is the number of models in the Pareto frontier, since we perform binary searches for both accuracy and latency for every single model in the model zoo. However, based on our experiments in which we simulated model zoos containing models with random accuracy and latency values, we found that the fingerprinting algorithm has a linear average query complexity (fig:fingerprinting-query-complexity). fig:fingerprinting-query-complexity-acc-gran and fig:fingerprinting-query-complexity-lat-gran illustrate the effect of changing the accuracy or latency granularities on the linear regression coefficient.
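The following compact sketch mirrors the search just described. It assumes a hypothetical query(acc, lat) endpoint that returns True when the serving system finds a feasible model and False on an "infeasible set" error; it is not the implementation used in our experiments.

    def fingerprint(query, a_gran, l_gran, l_up_bound):
        identified_models = []
        a_hi, l_hi = 1.0, l_up_bound
        while l_hi > 0 and query(0.0, l_hi):          # stop once the boundary excludes every model
            # Binary search for the accuracy of the most accurate remaining model.
            lo, hi = 0.0, a_hi
            while hi - lo > a_gran:
                mid = (lo + hi) / 2
                if query(mid, l_hi):
                    lo = mid
                else:
                    hi = mid
            a_max = lo
            # Binary search for that model's latency.
            lo, hi = 0.0, l_hi
            while hi - lo > l_gran:
                mid = (lo + hi) / 2
                if query(a_max, mid):
                    hi = mid
                else:
                    lo = mid
            l_max = hi
            identified_models.append((a_max, l_max))
            # Shrink the boundary so the model just found is excluded in the next round.
            a_hi, l_hi = max(a_max - a_gran, 0.0), l_max - l_gran
        return identified_models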
§ DEFENDING AGAINST FINGERPRINTING
In order to protect the victim model in a model zoo, we must find a way to prevent fingerprinting from being successful. It is easy to see from Algorithm <ref> that adding noise to the accuracy and latency values of models on the Pareto frontier would disrupt fingerprinting. However, the accuracy of a model in the zoo cannot be changed, and the latency can only be increased (by adding delay), not decreased. Another possibility is to add noise to the profiled values of accuracy and latency of each model in the server (fig:PSML_architecture). Fingerprinting now would return noise-induced accuracy and latency values of the victim model. However, since we do not know when the adversary is fingerprinting and when it is collecting labeled examples for MixMatch, the modified values would continue to be used for picking models to be served. Thus, the adversary would still be able to target the victim model with the noise-induced accuracy and latency specifications it received from the fingerprinting step.
We propose adding noise to the input query's accuracy and latency specifications. This causes the fingerprinting algorithm to function incorrectly. The disruption happens in lines <ref> & <ref> of Algorithm <ref>, where infer means querying the system. The accuracy and latency specifications of the query are perturbed. The system reads in these perturbed values and serves a model from the “modified” feasibility set. Since the latency specification of every query is the latency budget, there is a probability of inferior models being served (the system never violates the latency specification (Algorithm <ref>)).
The noise addition scheme will remain even after the fingerprinting step because the system does not know when a malicious client is fingerprinting. This makes the subsequent querying process uncertain, adding extra protection to the system.
It is easy to see that our defense strategy will work against any single-model attack, not just model extraction. However, for ease of experimentation (sec:eval:defense), we only use the MixMatch extraction attack to show the viability of our defense.
Specifically, we will add Laplace noise to f_acc^L(·) and f_lat^L(·), introduced in sec:attack:problem.
We follow the measurement used in differential privacy (DP) to quantify the amount of noise added. Note that we do not claim our method will satisfy DP. More formally, we need to quantify the maximal possible change of both
accuracy and latency values, denoted by Δ f_acc^L and Δ f_lat^L, respectively.
This way, we can measure the strengths of noise for different functions using a single parameter (ϵ).
We discuss implementation details in sec:impl.
Algorithm <ref> describes, in pseudo-code, how we serve models with the defense. The key idea is not to serve models with latency values greater than the latency specification. The input latency specification, lat, is assumed to be the latency budget for every query after the fingerprinting step. Noise addition will change this latency value to noisy_lat, resulting in the system possibly returning a model with a latency greater than the latency specification. To prevent this, we have the third condition in the if statement in line <ref>.
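A sketch of this noisy selection step (Python for illustration only; the actual implementation, described in the next section, uses the boost library):

    import numpy as np

    def serve_with_defense(frontier, acc, lat, eps, delta_acc, delta_lat, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        # Laplace noise with scale sensitivity/eps perturbs the query's specifications.
        noisy_acc = acc + rng.laplace(0.0, delta_acc / eps)
        noisy_lat = lat + rng.laplace(0.0, delta_lat / eps)
        candidates = [
            (a, l) for (a, l) in frontier
            # The extra `l <= lat` check mirrors the third condition in the algorithm:
            # the true latency specification is never violated even when noisy_lat exceeds it.
            if a >= noisy_acc and l <= noisy_lat and l <= lat
        ]
        if not candidates:
            return None                                   # "infeasible set" error
        return candidates[rng.integers(len(candidates))]  # uniform-random policy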
§ SYSTEM DESIGN AND IMPLEMENTATION
Figure <ref> illustrates our system design used to perform experiments with our proposed attack and defense mechanisms. Building upon the assumptions outlined in sec:threat:system, we leveraged as the inference serving system. Over 's client library, we implemented a shim layer (Server) in that exposes an inference API. This API allows clients to submit inference queries with specific minimum accuracy specification acc, maximum latency specification lat, and input to perform the inference. Thus, the entire system becomes model-less.
Upon loading models into the model zoo, the server maintains a copy of the accuracy and latency profiles for each model. The training and profiling of models are done on NVIDIA GeForce RTX 2080 Ti GPUs. In the absence of any defense mechanism, upon receiving an inference query, the Server randomly selects a model from the feasibility set based on a uniform distribution. In other words, every model in the feasibility set has an equal probability of being chosen. The Server then forwards the inference query input to , which serves the query using the selected model, and relays the output from back to the client. An error is returned if no model satisfies both the accuracy and latency constraints (, the feasibility set is empty).
With the defense mechanism active, the server introduces Laplace noise to acc and lat before determining the feasibility set (Algorithm <ref>). The Laplace noise is calculated using the boost::math::laplace_distribution and boost::math::quantile functions from the boost library. Since the latency specification of any query cannot be violated, the system doesn't serve models with latency values greater than the input latency specification. This case may arise when the noise-induced latency exceeds the latency specification.
§ EVALUATION
§.§ Experiment Setup
We evaluate our fingerprinting-based attack and our noise-based defense on a real-world inference serving system, <cit.>. Since the client has to specify a model to perform inference when using , we implement our own model selection logic as a shim on top of . We use a policy of serving any model within the feasibility set randomly with equal chance.
We employ a model extraction attack (the MixMatch attack in <cit.>), which we treat as a black-box, on a zoo of image classification models trained on the CIFAR-10 <cit.> and SVHN <cit.> datasets, like in the original paper. To populate the model zoos for the two datasets, we did not train the models to their full potential on the respective training sets, so that we could represent a realistic Pareto frontier. Training all the models we selected to their full potential on CIFAR-10 or SVHN will result in all models being in the high-accuracy region (≥ 90% accuracy), which does not reflect a realistic Pareto frontier. Additionally, this would not allow us to show that a medium or small model (< 80% accuracy) can be successfully:
* extracted using our fingerprinting algorithm, or
* protected against the fingerprinting-based attack using our noise-based defense
We loaded the models, shown in fig:eval:mzoo and fig:appendix:mzoo_svhn, into Clockwork. The extracted model (, the model that the attacker starts with) is a ResNet-50, which can reach > 95% accuracy on CIFAR-10 and SVHN. Thus, the model has enough expressive power to learn weights from the CIFAR-10 and SVHN train sets. Based on the client’s desired accuracy and latency specifications, the query is routed to a model that satisfies them. Otherwise, the system doesn’t fulfill the query and sends an “infeasible set” error message to the client.
It is not possible to train the models that we use to convergence on simple datasets like CIFAR-10 and SVHN, while maintaining a realistic Pareto frontier.
We wanted to demonstrate our attack and defense on a realistic Pareto frontier of a model zoo, from which large, medium, and small models can be extracted. Inference serving systems only serve models from the Pareto frontier of the model zoo. Thus, for ease of implementation, the Pareto frontier is the entire model zoo itself in our experiments. This simplification has no bearing on the attack and defense mechanisms. It is important to note that our attack and defense have nothing to do with the training, architecture, or weights of the neural network models in the zoo. Therefore, our methods are plug-and-play, , no modification is needed on real-world model zoos that are usually dense and have models trained to their full potential on complex datasets like ImageNet <cit.>.
The 12 image classification models in our model zoo are comprised of various architectures of ResNets, WideResNets, DenseNets, and MobileNets. All of these models have predictable inference latency values. According to the definition of the Pareto frontier of a model zoo, models with larger inference latency have higher accuracy. While training our models, we followed this accuracy-latency correlation, as shown in Figure <ref>. We consider three querying scenarios with differing latency budgets available to the adversary: large, medium, and small latency budgets. Since with a larger latency budget the adversary's victim model is a larger-sized model, we also call these scenarios the large, medium, and small model extraction cases. The query budget available to the adversary is 4000 queries in all cases. The inputs of these queries are randomly selected from the training set of the datasets. Hence, the adversary has to do the fingerprinting and querying for MixMatch within 4000 queries. We show how the adversary's success varies with different query budgets in fig:attack_queries.
We set up a shim layer on top of Clockwork (see sec:impl), which includes the model selection logic that utilizes the accuracy and latency specifications of the client. The selected model information is relayed to the Clockwork controller node, which subsequently assigns the inference task to the appropriate worker node, and the inference is performed on this model using the input provided by the client.
In our experiments, the inference time of the query is the same as that of the model. This is due to the assumptions we make in our inference serving system in sec:threat, where there is no added end-to-end latency before or after the inference arising from scheduling or queuing delays. Please note that the actual inference time of the query is not used by the adversary at all in our attack.
§.§ Attack
We aim to demonstrate our fingerprinting algorithm's potential in adapting single-model attacks to inference serving systems. We show that our fingerprinting algorithm improves the extracted model accuracy and fidelity compared to using the black-box attack without the fingerprinting step. It is important to note that fingerprinting takes a constant number of queries for a given Pareto frontier, when a_gran and l_gran are fixed. This is because Algorithm <ref> is deterministic.
We run attacks in each setting on both datasets using a predetermined query budget.
A query budget of q means that the attacker can query the system at most q number of times. Thus, it has to do the fingerprinting and get labeled points within these q queries. The attacker’s latency budget per query for the large-model setting is 21 ms, while it is 13 ms for the medium-model setting and 5 ms for the small-model setting. Thus, the victim model according to our adversary goal in sec:threat:adversary is DenseNet-161 in the large-model setting (top right corner of the Pareto frontiers in fig:eval:mzoo and fig:appendix:mzoo_svhn), ResNet-101 in the medium-model setting (middle of the Pareto frontiers) and MobileNetV3-small in the small-model setting (bottom left of the Pareto frontiers). In each test, we let the MixMatch attack
train for 1024 epochs. Fidelity is measured against the victim model on the test set of the datasets.
§.§.§ Experiment Results
The experiment results are shown in tab:acc_attack_defense and tab:fid_attack_defense. fig:eval:defense shows the plots for the medium model. Results with SVHN are in tab:svhn_attack_acc_svhn in Appendix <ref>. We show a relation between the number of queries available to the attacker and the fidelity and accuracy of the extracted model in fig:attack_queries. In the single-model attack, all 4000 queries were answered by the victim model. In the no-fingerprinting and fingerprinting attacks, all the queries were served using the server.
A total of 411 queries were used for fingerprinting the model zoo trained on CIFAR-10. According to our threat model in sec:threat:adversary, the attacker does not know the accuracy of the highest-accuracy model in the model zoo that has latency lower than its latency budget. On the other hand, the accuracy and latency values for all the models in the zoo were determined by the fingerprinting step in the fingerprinting attack. Therefore, in the subsequent step of querying the system, the attacker simply chooses the accuracy and latency specifications corresponding to the model with the highest inference latency below its latency budget (see sec:attack:problem).
It is clear from tab:acc_attack_defense that the adversary can fingerprint the entire model zoo by expending a relatively low number of queries. This ability enables it to determine precise accuracy and latency values of the victim model.
The test fidelity and test accuracy results show that a black-box attack on a model zoo can be as successful as an attack on a single model. We can also see that the single-model attack is slightly better. This is because the number of queries used to fingerprint the model zoo is subtracted from the total query budget of the adversary. This means there are fewer labeled points from the victim model for the MixMatch attack. The main takeaway from these experiments is that an adversary can extract the highest accuracy (or largest) model from a model zoo for any given latency budget using our fingerprinting algorithm.
§.§ Defense
To demonstrate that our defense works on an actual inference serving system, we evaluate our noise-based defense mechanism on <cit.>. The goal of the defense is to undermine the fingerprinting step. By introducing random noise in the accuracy and latency specifications
given as input by the attacker, we hope to reduce the accuracy of the fingerprinting step. At the same time, we do not want to destroy the utility of the inference serving system. Hence, we study the impact of adding noise on the system's goodput. The Laplace mechanism in sec:defense is applied to the fingerprinting algorithm that is treated as a function. Since the protection against the extraction attack will be at odds with the system's performance, we hope to show the trade-off between them in this section through our experiment results.
We conduct our defense experiments on the same settings under which the attack is tested. Since the fingerprinting step is vital for the attack to succeed, we aim to see to what extent our defense mechanism reduces the effectiveness of the fingerprinting step. Ideally, we would like to reduce the fidelity and the accuracy of the extracted model to the values obtained when the attack was run without the fingerprinting step (tab:acc_attack_defense, tab:fid_attack_defense). We add noise to the latency and accuracy specifications of the client. The rationale behind this is that by changing the specifications of the model that the inference serving system has to pick, we reduce the chances of successfully fingerprinting any model from the zoo.
The performance of the inference serving system is measured in terms of goodput, defined in section <ref>. The system's goal is to be as performant as possible while maintaining a required degree of protection from fingerprinting. Our defense mechanism only changes the client's accuracy and latency specifications
to the system and not the quality of the models on the Pareto frontier.
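For reference, a small sketch of how goodput can be tabulated from a query log, assuming each entry records the client's original specifications and the accuracy/latency of the model that actually served the query; all field names are illustrative.

# Goodput = fraction of queries served by a model that meets the client's
# original (unperturbed) specifications; field names are illustrative.
def goodput(query_log):
    ok = sum(1 for q in query_log
             if q["served_latency_ms"] <= q["lat_spec_ms"]
             and q["served_accuracy"] >= q["acc_spec"])
    return ok / len(query_log)

log = [
    {"acc_spec": 0.94, "lat_spec_ms": 13.0, "served_accuracy": 0.94, "served_latency_ms": 12.5},
    {"acc_spec": 0.94, "lat_spec_ms": 13.0, "served_accuracy": 0.92, "served_latency_ms": 9.7},
    {"acc_spec": 0.94, "lat_spec_ms": 13.0, "served_accuracy": 0.96, "served_latency_ms": 20.8},
]
print(goodput(log))   # only the first of these toy queries meets both specifications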
In the experiments, we set the system with sensitivity values according to the model zoo shown in fig:eval:mzoo: Δ f_acc = 1 and Δ f_lat = 18.271 for CIFAR-10.
For each dataset, we demonstrate extracting a large, medium, and small model. In these three scenarios, the latency budgets of the attacker are 21 ms, 13 ms, and 5 ms, respectively, for CIFAR-10. Our experiments verify whether launching the black-box single-model attack (MixMatch) with fingerprinting on a system that has our defense yields accuracy and fidelity comparable to the no-fingerprinting case (i.e., a naive attacker using single-model attacks on a model zoo without fingerprinting the zoo first).
In the defense, the number of queries taken to do the fingerprinting is not constant for a given model zoo because of the noise in the system. Moreover, not all queries will be answered by the system. A breakdown of the number of queries used in each step is presented in tab:queries_fingerprinting_labeling. Since the number of queries taken for defense is not constant, we did three trials for each setting and reported the average. After querying, the MixMatch method is trained for 1024 epochs on the points in the training set.
fig:eval:queries shows the probability distribution of the models getting triggered in our different attack scenarios.
§.§.§ Experiment Results
We observe that the attack is less successful in terms of accuracy and fidelity when the defense is in place. With ϵ=10, we see up to a 4.8% drop in fidelity and a 9.8% drop in accuracy compared with the fingerprinting-based attack without the defense, in the medium-model case (fig:eval:defense). The closer the accuracy and fidelity are to their values in the no-fingerprinting setting, the better the protection against the model extraction attack. From tab:acc_attack_defense and tab:fid_attack_defense, we see that the fidelity and accuracy scores decrease with decreasing epsilon values: lower epsilon means more noise, and more noise in the system results in more queries being routed to models that are not the victim model.
Due to noise in the system, the adversary's query may be routed to a model with a latency larger than the query's latency specification. In such situations, the system does not send the output to the query, as shown in Algorithm <ref>. Since the adversary's latency specification is its latency budget for all queries after the fingerprinting step, all the outputs after fingerprinting are from models that have inference latency less than the adversary's latency budget. Due to the relationship between accuracy and latency on the Pareto frontier, this also means that all queries are answered by either the victim model or by models with accuracy less than the accuracy of the victim model.
Since higher protection comes with lower performance, we calculated the goodput scores of these runs. We observe in fig:eval:gdpt_noeps that goodput decreases with decreasing epsilon values. The reason behind this is the fact that a lower epsilon value means more noise. More noise in the system means there is a high chance the query is being served by a model from outside the feasibility set, which is defined by the client's unmodified accuracy and latency specifications. Goodput greater than 0.8 (≥ 80% queries meet specifications provided by the client) is generally agreed to be acceptable for an inference serving system. From fig:eval:gdpt_noeps, we can see that goodput greater than 0.8 is obtained with ϵ = 50 in the medium model extraction, while the fidelity and accuracy of the attack reduce significantly. Hence, the system designer can use this ϵ value to effectively defend against fingerprinting-based attacks while maintaining an acceptable quality-of-service (QoS). The main takeaway is that a balance between protection and performance can be reached by configuring ϵ in our defense mechanism for medium and small models.
We see that the defense is more effective in the medium and small model settings than in the large model setting. This is because while extracting a large model, even a large amount of noise in the system may result in a high-accuracy model being served. Since high-accuracy models result in high-quality labeled examples, the MixMatch attack will not suffer much, and the resulting accuracy values will remain fairly high. We provide a lower bound on fidelity while extracting a large model in Appendix <ref>. Thus, our defense mechanism with the tested ϵ values is more effective for attacks targeting medium to small models. However, ϵ can be configured to suit the system designer's needs.
§ CONCLUSION
We introduce a novel query-efficient fingerprinting algorithm that enables black-box attacks like model extraction to be carried out on inference serving systems. Our fingerprinting-based attack on a model-less inference serving system performs significantly better than a naive attack and comes close to the attack in a single-model setting while using the same number of queries. We show the generality of our attack by showing the efficacy of our method when extracting large, medium, and small models from a model zoo hidden behind a model-less interface. To counter the proposed attack, we design and implement a novel defense mechanism based on noise perturbation to protect against fingerprinting. It is able to weaken the fingerprinting attack for different model sizes successfully.
Finally, the defense is shown as a trade-off between system performance and security, achieving a configurable, good level of security while simultaneously maintaining reasonable system goodput.
This work is the first step towards thinking about practical black-box attacks on state-of-the-art model-less inference serving systems and defending proprietary served models against such attacks.
§ ATTACK RESULTS ON SVHN
This section presents the results of using our fingerprinting-based attack on a model zoo trained on the SVHN dataset.
§ PROOF OF LOWER BOUND ON FIDELITY FOR LARGE MODELS
How is the fidelity of the extracted model with respect to the victim model related to their accuracy scores on the same test set?
To justify the defense, we need to show that it reduces the fidelity and accuracy scores. Extraction with noise in the system will result in an “aggregate” extracted model. Thus, our goal is to show that this aggregate extracted model is poor in terms of fidelity and accuracy, i.e., it has a low fidelity score with respect to the victim model. It is safe to assume that a high-accuracy feasibility set will result in a high-accuracy extracted model. So now we will try to minimize the fidelity score between an aggregate extracted model and a victim model.
Let us calculate the minimum fidelity possible for an aggregate extracted model (i.e., obtained with the defense) from a high-accuracy feasibility set.
Let D be the test set of points for a classification task with k classes. Let S_i be the set of points from D that model m_i classifies correctly. There are two models of concern here: m_v (the victim model) and m_e (the extracted model).
Let a_v and a_e be the accuracy scores on D of models m_v and m_e, respectively. We assume that a_v≥0.9 and a_e≥0.9, as the extracted model m_e is obtained from querying a high-accuracy feasibility set and the victim model m_v lies in this feasibility set.
Let n(A) stand for the number of distinct elements in the set A. Let F be the fidelity set, i.e., the set of all such points in D for which m_v and m_e predict the same class. F can be broken down into two disjoint sets:
* a set of points on which both models make the correct prediction, i.e., (S_v∩ S_e).
* a set of points on which both models make incorrect predictions but predict the same class. We denote this set by P.
Thus, F = (S_v∩ S_e) ∪ P, and since the two sets are disjoint, n(F) = n(S_v∩ S_e) + n(P).
n(S_v∩ S_e) = n(S_v) + n(S_e) - n(S_v∪ S_e) = a_v· n(D) + a_e· n(D) - n(S_v∪ S_e)
Hence, n(F) = a_v· n(D) + a_e· n(D) - n(S_v∪ S_e) + n(P)
We can minimize the above by noting that n(S_v∪ S_e) ≤ n(D) and n(P) ≥ 0:
min(n(F)) = a_v· n(D) + a_e· n(D) - max(n(S_v∪ S_e)) + min(n(P)) = a_v· n(D) + a_e· n(D) - n(D) + 0 = n(D)·(a_v + a_e - 1)
Thus, the minimum fidelity score is min(n(F))/n(D) = a_v + a_e - 1 ≥ 0.8, using a_v, a_e ≥ 0.9.
Conclusion: Attacking a high accuracy (or large) model results in a relatively high fidelity score (≥0.8) even with the defense. Therefore, we need to attack a lower accuracy (or small) model to get a significant reduction in fidelity (or improvement in protection) with the defense.
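A quick numerical sanity check of this counting argument (illustrative only): for random predictors with the stated accuracies, the measured fidelity never drops below a_v + a_e - 1.

# Sanity check of the fidelity lower bound n(F)/n(D) >= a_v + a_e - 1
# on random label assignments; purely illustrative.
import random

def min_fidelity_holds(n_points=10_000, k=10, a_v=0.9, a_e=0.9, trials=20):
    truth = [random.randrange(k) for _ in range(n_points)]

    def predict(acc):
        # Correct on a fraction `acc` of the points, wrong (arbitrary) elsewhere.
        correct = set(random.sample(range(n_points), int(acc * n_points)))
        return [truth[i] if i in correct
                else (truth[i] + random.randrange(1, k)) % k
                for i in range(n_points)]

    for _ in range(trials):
        pv, pe = predict(a_v), predict(a_e)
        fidelity = sum(x == y for x, y in zip(pv, pe)) / n_points
        assert fidelity >= a_v + a_e - 1
    return True

print(min_fidelity_holds())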
|
http://arxiv.org/abs/2307.02559v1
|
20230705180133
|
AdS3 Orbifolds, BTZ Black Holes, and Holography
|
[
"Emil J. Martinec"
] |
hep-th
|
[
"hep-th"
] |
Kadanoff Center for Theoretical Physics, Enrico Fermi Institute, and Department of Physics
University of Chicago,
5640 S. Ellis Ave.,
Chicago IL 60637
[email protected]
Conical defects of the form (AdS_3×𝕊^3)/ℤ_k have an exact orbifold description in worldsheet string theory, which we derive from their known presentation as gauged Wess-Zumino-Witten models.
The configuration of strings and fivebranes sourcing this geometry is well-understood, as is the correspondence to states/operators in the dual CFT_2.
One can analytically continue the construction to Euclidean AdS_3 (ℍ_3^+) and consider the orbifold by any infinite discrete (Kleinian) group generated by a set of elliptic elements γ_i ∈ PSL(2,ℂ), γ_i^{k_i} = 1, i=1,...,K. The resulting geometry consists of multiple conical defects traveling along geodesics in ℍ_3^+, and provides a semiclassical bulk description of correlation functions in the dual CFT involving the corresponding defect operators, which is nonperturbatively exact in α'. The Lorentzian continuation of these geometries describes a collection of defects colliding to make a BTZ black hole. We comment on a recent proposal to use such correlators to prepare a basis of black hole microstates, and elaborate on a picture of black hole formation and evaporation in terms of the underlying brane dynamics in the bulk.
AdS_3 Orbifolds, BTZ Black Holes, and Holography
Emil J. Martinec
August 1, 2023
==================================================
§ INTRODUCTION AND SUMMARY
The AdS_3/CFT_2 correspondence has proven one of the richest examples of gauge/gravity duality. On the one hand, two-dimensional conformal symmetry is especially powerful, providing a remarkable degree of analytic control over the conformal field theory.
On the other, there is a large catalogue of exact, asymptotically AdS_3 supergravity solutions (see Mathur:2005zp,Bena:2007kg,Shigemori:2020yuo for reviews). In addition, there are exactly solvable worldsheet theories describing perturbative string theory around global AdS_3 Giveon:1998ns,Kutasov:1999xu,Maldacena:2000hw,Maldacena:2001km, as well as certain conical defect geometries Martinec:2017ztd, and the BTZ black hole Natsuume:1996ij,Maldacena:2000kv,Hemming:2001we,Hemming:2001we,Troost:2002wk,Hemming:2002kd.
The (AdS_3×𝕊^3)/ℤ_k conical defect geometries, whose worldsheet description was elaborated in Martinec:2017ztd,Martinec:2018nco,Martinec:2019wzw,Martinec:2020gkv,Bufalini:2021ndn,Martinec:2022okx, provide rare examples where one has a fully stringy description of bulk states far from the vacuum as well as their holographic map to the spacetime CFT. These backgrounds arise as particular geometries sourced by bound states of fundamental (F1) strings and NS5-branes in a decoupling limit. There is a precise map between the geometry and the fivebranes' source configuration which is in turn determined by the particular condensate of fundamental strings they carry Lunin:2002bj,Kanitscheider:2006zf,Kanitscheider:2007wq. This map is reflected in the structure and properties of worldsheet vertex operators Martinec:2020gkv,Martinec:2022okx, in that winding string vertex operators act to perturb the background string condensate.
Our goal here is to develop and generalize the worldsheet theory of these orbifold geometries, and their connection to gauge-gravity duality.
The worldsheet theory was initially constructed as a gauged WZW model
( SL(2,ℝ) × SU(2) × ℝ_t × 𝕊^1_y × 𝕋^4 ) / ( U(1)_L × U(1)_R )
where U(1)_L × U(1)_R embeds in SL(2,ℝ) × SU(2) × ℝ_t × 𝕊^1_y as a pair of left and right null isometries. These worldsheet CFT's in fact describe more general asymptotically linear dilaton geometries rather than asymptotically AdS_3, but have a modulus (the radius R_y of 𝕊^1_y) for which AdS_3 asymptotics arises in the R_y→∞ limit.
It is essential for the purposes of the present work to have a direct worldsheet description of the conical defects (AdS_3×𝕊^3)/ℤ_k as orbifold CFT's. To this end, in section <ref> we show how the physical state spectrum of the theory (<ref>) becomes that of a standard orbifold in the R_y→∞ limit, and exhibit the orbifold identification. We also show that the linear dilaton extension can be thought of as a current-current deformation of the orbifold CFT, where the current is none other than the SL(2,ℝ)×SU(2) = AdS_3×𝕊^3 part of the gauge currents 𝒥, 𝒥̄ of the U(1)_L×U(1)_R gauging. In hindsight, this result appears to be a special case of a general relation between gauged WZW models (G×H)/H, current-current deformations of WZW models on G, and their orbifolds Forste:2003km.
The description of a class of AdS_3 conical defects as string theory orbifolds paves the way for a series of generalizations that occupy the remainder of our analysis. With the standard orientation of the orbifold group, the defect sits statically at the origin of AdS_3 and (as we will see below) along a circle in 𝕊^3. But one can use the SL(2,ℝ)_L×SL(2,ℝ)_R isometries of AdS_3 to conjugate the orbifold identification into a different but equivalent ℤ_k action, which in general describes boosted and orbiting defects.
The standard defect geometries (an example of which is shown in figure <ref>) are dual to particular heavy states |Ψ_k⟩ in the spacetime CFT; and to each such state is associated the operator 𝒪_k(z,z̅) that creates it from the CFT vacuum |Ω⟩ via
|Ψ_k⟩ = 𝒪_k(0,0)|Ω⟩ .
One can track the 1/2-BPS operators across the moduli space of the spacetime CFT to weak coupling, where the spacetime CFT is a symmetric product orbifold (^4)^N/S_N.
The CFT operators 𝒪_k are known symmetric group orbifold twist operators Lunin:2002bj; in fact the holographic map for all 1/2-BPS states is understood Kanitscheider:2006zf,Kanitscheider:2007wq.
The conjugation operation
𝒪_k(z,z̅) = e^z L_-1+z̅L̄_-1 𝒪_k(0,0) e^-(z L_-1+z̅L̄_-1)
is the Euclidean CFT version of a global SL(2,ℝ)_L×SL(2,ℝ)_R isometry of AdS_3 – on the conformal boundary of Euclidean AdS_3, these symmetries translate the operator to a more general position. The translated operators are dual (after analytic continuation to Lorentz signature) to the bulk conical defect oscillating radially about the center of AdS_3 as in figures <ref>, or boosted and rotating as in figure <ref>.
Having moved the defect off to one side in AdS_3, one is free to introduce another; and another, and so on. The collision of such defects has been studied in Matschull:1998rv,Holst:1999tc,Birmingham:1999yt,Krasnov:2002rn,DeDeo:2002yg,Brill:2007zq,Lindgren:2015fum; we review and extend these results in section <ref>. With sufficient aggregate mass and/or radial momentum, the collision of the defects forms a BTZ black hole.
As we will see below, the collision of ℤ_k orbifold defects always makes a black hole. We will examine the geometries arising from the global orbifolds describing colliding ℤ_k defects, and exhibit some of their properties, first in section <ref> for multiple defects moving purely radially as in figure <ref>, and then in section <ref> for multiple defects orbiting as in figure <ref>. The example of two ℤ_2 defects radially boosted in opposite directions is shown in figure <ref>.
The presence of the BTZ singularity causes perturbative string theory to break down. This was seen in the similar situation of Lorentzian orbifolds of flat spacetime string theory Liu:2002kb,Horowitz:2002mw.
Indeed, we shouldn't expect perturbative string theory to resolve all the issues arising from black holes in quantum gravity. But one thing we can do is rotate the theory to Euclidean signature, where the black hole singularity, as well as other pathologies such as closed timelike curves, are absent. Geometrically, Euclidean BTZ is ℍ_3^+/ℤ; the worldsheet theory has been studied in Natsuume:1996ij,Maldacena:2000kv,Hemming:2001we,Troost:2002wk,Hemming:2002kd,Rangamani:2007fz,Berkooz:2007fe,Lin:2007gi,Ashok:2020dnc,Nippanikar:2021skr,Ashok:2021ffx,Ashok:2022vdz,Kaundinya:2023xoi. Here we extend the discussion to orbifolds ℍ_3^+/Γ where Γ ⊂ PSL(2,ℂ) is a particular class of discrete (Kleinian) group generated by elliptic transformations γ_i ∈ PSL(2,ℂ), γ_i^{k_i} = 1 (see also Krasnov:2001va,Chandra:2023dgq).
[The discussion can of course be extended to arbitrary Kleinian groups, providing a string theory realization of the ideas in Maloney:2015ina.]
We thus examine the continuation of Lorentzian AdS_3 to its Euclidean counterpart in section <ref>. We first show how the SL(2,ℝ)_L×SL(2,ℝ)_R isometries of AdS_3 are related to the PSL(2,ℂ) isometry of ℍ_3^+ under analytic continuation, and then exhibit the corresponding defect geometries. For radially boosted (i.e. non-orbiting) defects, the geometry can be chosen to have a surface of time reflection symmetry where all the orbifold defects are momentarily at rest in AdS_3. This spacelike hypersurface has the geometry of the Poincaré disk ℍ_2, which is also a surface of time reflection symmetry of ℍ_3^+ under the natural Wick rotation of the global time in AdS_3; furthermore, Γ acts as a set of identifications within this ℍ_2. The two geometries share this common hypersurface ℍ_2/Γ, and one can think of passing from one to the other along this hypersurface – as one does for instance in the Schwinger-Keldysh formalism Haehl:2016pec,Haehl:2016uah,deBoer:2018qqm, or the Hartle-Hawking method of state preparation via a period of Euclidean evolution Hartle:1983ai. For the case of orbiting defects, there is no time reflection symmetry, but we can still say how the full spacetimes are related.
Figure <ref> depicts the analytic continuation to Euclidean signature of the Lorentzian geometry sourced by two ℤ_2 defects shown in figure <ref>. We are then solidly grounded in the arena of standard, well-behaved string theory orbifolds, and the construction of perturbative string theory is in principle straightforward.
The Euclidean defect geometries again have the defects traveling along geodesics in ℍ_3^+. These are circles which orthogonally intersect the 𝕊^2 conformal boundary of ℍ_3^+. The surfaces being identified to make the defect are spherical wedges identified under a ℤ_k elliptic element of PSL(2,ℂ), the isometry group of ℍ_3^+.
The worldsheet theory in the presence of a single defect provides a route to computing (order by order in an expansion in powers of the string coupling) the spacetime CFT correlation functions involving several “light” operators in the presence of two “heavy” operators. Here the light operators are those that introduce and remove perturbative strings, and therefore have spacetime conformal dimension h,h̅ much less than the central charge of the spacetime CFT. The two heavy operators have conformal dimension h,h̅∼ O(c_ST), and serve to introduce and remove the defect Galliani:2017jlg,Bombini:2017sge,Bufalini:2022wzu. Correlation functions of string theory on ℍ_3^+ have been investigated in Kutasov:1999xu,Maldacena:2001km,Teschner:1999ug,Teschner:2001gi,Ponsot:2002cp,Hikida:2007tq,Dei:2021xgh,Dei:2021yom,Dei:2022pkr,Bufalini:2022toj, and these particular single-defect correlators are closely related due to the orbifold structure Bufalini:2022wzu.
The more general orbifolds ℍ_3^+/Γ describe correlators of light operators in the presence of multiple heavy operators. Such correlators have been studied from the spacetime CFT point of view for instance in Chandra:2023dgq, where they were used to investigate properties of the holographic map, and in particular whether observables are inside or outside the apparent horizon at the surface of time reflection symmetry. The construction here allows the possibility to put these ideas to the test in string theory. For instance, the worldsheet two-point function should be the sort of diagnostic of the holographic embedding envisaged by Chandra:2023dgq, and should provide a useful check of this proposal.
The string worldsheet correlators for the non-abelian orbifolds (×^3)/Γ thus compute the semi-classical approximation to spacetime CFT correlation functions involving multiple heavy and light operator insertions. Their evaluation is a rather involved technical exercise, which we will leave to future work; we can however at least characterize them. For instance, in the effective supergravity theory, two-point functions of light operators in the orbifold background will be given by Green functions for the appropriate Laplace operator on /Γ. The untwisted sector consists of solutions of the wave equation on that are invariant under Γ, which can formally be constructed by the method of images (regulating and renormalizing the sums involved); we will touch on this issue briefly below in section <ref>. In addition, each defect will have its own twisted sectors which describe wound strings pinned to the defect. There will also be a continuum of “long string” states Maldacena:2000hw that have plane wave behavior out near the conformal boundary and thus surround all of the defects. The latter are related to the Hawking radiation of wound strings discussed recently in Martinec:2023plo,Martinec:2023iaf.
As in the case of a single defect, one expects the winding string vertex operators on (×^3)/Γ to perturb the string condensate carried by the fivebranes, just as they do for a single defect. Non-abelian fivebrane dynamics underlies that of BTZ black holes in string theory and is governed by little string theory Dijkgraaf:1997nb,Dijkgraaf:1997ku,Maldacena:1996ya,Seiberg:1997zk, with the black hole microstates being accounted for by the Hagedorn entropy of the little string Maldacena:1996ya,Martinec:2019wzw; in perturbative regimes, fundamental strings are composites of the little string whose condensation coherently deforms the fivebrane state Martinec:2020gkv,Martinec:2022okx. There have been suggestions McGreevy:2005ci,Horowitz:2006mr,Berkooz:2007fe that a perturbative string winding condensate is involved in the resolution of the BTZ singularity; the effect of such a condensate would however naively be expected to be confined to the region where the proper size of such strings is of order the string scale, which is quite close to the singularity. But one needs effects that persist out to the horizon scale which is arbitrarily larger than the string scale; it is hard to see how effects involving perturbative strings near the singularity could affect the black hole interior sufficiently to resolve the fundamental tension between causality, locality and unitarity posed by the Hawking process. As observed in Martinec:2019wzw, however, the little string tension is such that it is always at its correspondence point, which would mean that the quantum wavefunction of the little string could extend to the horizon scale, in accordance with the fuzzball paradigm (see Mathur:2005zp,Bena:2022rna for reviews). One may hope to gain enough control over the worldsheet construction to begin to see the outlines of this scenario take shape, as the conical defect collisions described here start to form a BTZ black hole. Indeed, we will take some first steps in this direction in section <ref> when we consider two _2 defects in the extremal limit, where their OPE is BPS protected and produces a Hagedorn little string gas.
The wide variety of (×^3)/Γ orbifold geometries described here all lead to pure state BTZ black holes under Lorentzian continuation. This method of preparing black hole states has been discussed recently in Chandra:2022fwi,Chandra:2023dgq; and also in Balasubramanian:2022gmo,Balasubramanian:2022lnw, where it was suggested that such states could form a basis of black hole microstates. The analysis in these works seeks to be quite general, applying to any holographic duality (and especially for AdS_3/CFT_2). It is however quite useful to drill down on a specific instance of the duality where we know a lot about its embedding in string theory, so that we can be sure we are not presupposing properties of the duality for which there are no concrete realizations, or objects whose holographic map is poorly understood. Furthermore, a major unresolved issue in these constructions is what happens to the state under Lorentzian evolution once we have prepared it. The string theoretic construction we provide here yields some hints in that regard, given our extensive knowledge of the holographic map and in particular its stringy properties. We will elaborate on these issues in section <ref>.
§ NULL-GAUGED WZW MODELS AND ORBIFOLDS OF ADS_3
We approach the description of (AdS_3×^3)/_ orbifolds via a detour into the null-gauged WZW models studied in Martinec:2017ztd,Martinec:2018nco,Martinec:2019wzw,Martinec:2020gkv,Bufalini:2021ndn,Martinec:2022okx. These models have linear dilaton rather than AdS asymptotics, and so do not directly describe the AdS_3 conical defects; however they have a modulus R_y such that the AdS orbifold geometry is recovered in the limit R_y→∞. Our strategy will be to deduce the orbifold action on × from the AdS limit of the null-gauged model (<ref>), and then relate the general null-gauged model to a marginal current-current deformation of this orbifold. For those less interested in the details of the worldsheet theory, this section can be skipped on a first reading.
These isometries are generated by left and right null currents
𝒥 = J^3_sl + ℓ_2 J^3_su + ℓ_3 P_t + ℓ_4 P_y
𝒥̄ = J̄^3_sl + r_2 J̄^3_su + r_3 P̄_t + r_4 P̄_y ,
where both the sl(2,ℝ) and su(2) current algebras are at level n_5.
Integrating out the gauge fields leads to an effective sigma model geometry with the linear dilaton asymptotics of a decoupled fivebrane throat, while a suitable choice of null vector coefficients ℓ_i,r_i
ℓ_2 = -( +) ∈ 2+1
,
r_2 = -∈ 2+1
, ,∈
ℓ_4 = /R_y+ R_y
,
r_4 = /R_y - R_y
, ,∈
ℓ_3 = r_3 = - √(^2 R_y^2 +^2/R_y^2 + n_5(^2+^2-1)) ,
subject to the constraint
= -n_5 ,
leads to IR asymptotics of the form (AdS_3×^3)/_ and UV asymptotics given by the linear dilaton throat of the n_5 fivebranes Martinec:2017ztd,Martinec:2018nco,Bufalini:2021ndn.
When ==0. =1, the backgrounds are the 1/2-BPS conical defect geometries of Lunin and Mathur Lunin:2001fv,Lunin:2001jy; generalizing to -=1 (while satisfying (<ref>)) realizes the 1/4-BPS backgrounds of Giusto:2004id,Giusto:2012yz; the generic solution breaks all spacetime supersymmetries and describes the “JMaRT” family of solutions of Jejjala:2005yu,Chakrabarty:2015foa.
The crossover between the UV linear dilaton regime and the IR AdS_3 regime is controlled by the modulus R_y. Heuristically, the null gauging relates the azimuthal direction of =AdS_3 to the circle ^1_y. The proper size of the remaining gauge invariant circle grows exponentially with radius as it does in AdS_3 until it saturates at the scale R_y and the geometry rolls over into the linear dilaton background.
Sending R_y→∞ decouples the linear dilaton regime, so that the entire geometry takes the form (AdS_3×𝕊^3)/ℤ_k. In this limit, the gauge current becomes dominated by the contributions from ℝ_t×𝕊^1_y, and the gauge orbits largely lie along these directions. It is then convenient to fix the gauge t=y=0, which leaves behind a residual discrete symmetry that acts as a quotient of SL(2,ℝ)×SU(2) = AdS_3×𝕊^3.
The structure of the gauge orbits was described in detail in Martinec:2018nco,Martinec:2019wzw.
§.§ 1/2-BPS embeddings ↪
Consider to begin with the 1/2-BPS geometries; the general element (e^iα,e^iβ)∈ U(1)_L× U(1)_R acts on as
( g_, g_, t, y, ỹ) ↦( e^iασ_3 g_ e^iβσ_3,
e^-iασ_3 g_ e^-iβσ_3,
t- R_y(α+β),
y+ R_y(α-β),
ỹ- R_y(α+β) )
where ỹ is the coordinate that parametrizes the circle T-dual to 𝕊^1_y. The orbits of the axial gauge motion α=-β are compact with period 2π. We use this gauge motion to fix y, but this only uses up a fraction 1/k of the gauge orbit; there is a residual ℤ_k discrete remnant, which implements a discrete quotient on SL(2,ℝ)×SU(2).
The spectrum of consists of current algebra descendants built on the highest weight vertex operators
Φ^(w)_j,m,m̅ Ψ^(w',w̅')_j',m',m̅' exp[ -iEt+in_y y+iw_yỹ] ,
where Φ^(w)_j,m,m̅ is a primary in the spectral flow sector w of supersymmetric current algebra, and similarly Ψ^(w',w̅')_j',m',m̅' is a primary in the spectral flow sector (w',w̅') of the supersymmetric theory. The axial null constraint -=0 imposes
( m-m̅) + (M-M̅) - ( m'-m̅'+n_5/2(w'-w̅') + (M'-M̅') ) = k n_y
where M-M̅ is the net J^3_-J̅^3_ charge of the descendants (and similarly for M'-M̅' and ). The effect of the residual discrete gauge symmetry is to project the momentum along the × component of the axial gauge orbit onto an integer multiple of , just what would expect for a _ orbifold projection. The result is an orbifold if the rest of the BRST invariant spectrum accounts for the twisted sectors of the orbifold. Let us see how this arises in the R_y→∞ limit.
The left/right y momenta are
P_y = n_y/R_y+w_yR_y
, P̅_y = n_y/R_y - w_y R_y .
We then write
E = w_y R_y + /R_y
since the string winding accounts for the bulk of the energy in the limit of large R_y; since the constraints are solved order by order in R_y, is independent of R_y at leading order. The null constraints in this limit set
(+n_y) = 2m+n_5 w-(2m'+n_5w') + M-M'
(-n_y) = 2m̅+n_5 w-(2m̅'+n_5w̅') + M̅-M̅' ,
while the Virasoro constraints in this limit impose
= -j(j-1)+j'(j'+1)/n_5 - (m+M)w+(m'+M')w'+n_5/4(w^2-(w')^2) + w_y(+n_y) +N_L
= -j(j-1)+j'(j'+1)/n_5 - (m̅+M̅)w+(m̅'+M̅')w̅'+n_5/4(w^2-(w̅')^2) + w_y(-n_y) +N_R
where N_L,N_R are the oscillator excitation levels.
Substituting the values of ± n_y from the null constraints into the Virasoro constraints, one sees that the effect of w_y in the Virasoro constraints is a simultaneous shift
w→ w+ w_y/ ,
w'→ w'+ w_y/ , w̅'→w̅'+ w_y/
which can be thought of as a simultaneous fractional spectral flow in both and (the quadratic terms in w_y cancel between the two, leaving just the terms linear in w_y). In other words, the spectrum of vertex operators with t,y present is the same as that of =× alone, with an orbifold projection on the untwisted sector and fractional spectral flow accounting for the twisted sectors.
Bosonizing the currents in terms of canonically normalized scalar fields _,_
J^3_ = √(n_5) ∂_ , J̅^3_ = √(n_5) ∂̅_ ,
J^3_ = √(n_5) ∂_ , J̅^3_ = √(n_5) ∂̅_ ,
one can factor out the charge dependence from the vertex operators (<ref>) as
Φ^(w)_j,m,m̅ = V^(w)_j,m,m̅ exp[2/√(n_5)( m+n_5/2 w )_ + 2/√(n_5)( m̅+n_5/2 w )_]
Ψ^(w',w̅')_j',m',m̅' = Λ^(w',w̅')_j',m',m̅' exp[2/√(n_5)( m'+n_5/2 w' )_ + 2/√(n_5)( m̅'+n_5/2w̅' )_] .
The null gauged WZW is then equivalent in the R_y→∞ limit to a standard _ shift orbifold generated by
δ(_-_) = -δ(_ - _) = 2π/ .
The axial null constraint restricts the difference of the momenta J^3_+J̅^3_ and J^3_+J̅^3_ to be a multiple of , and the effect of the vector null constraint when inserted into the Virasoro constraints is to add in the twisted sectors with fractional winding. This shift orbifold is nothing but the discrete remnant of the gauge motion (<ref>) left after gauge fixing t and y, as discussed above.
§.§ General embeddings ↪
More generally, the shift orbifold acts on × as an asymmetric shift orbifold Martinec:2018nco.
[Asymmetric orbifolds are discussed for instance in Narain:1986qm,Narain:1990mw.]
Parametrizing the group manifold via Euler angles
[We find it convenient to work with SU(1,1) rather than , as that diagonalizes the symmetries being gauged.]
g_ = e^i/2(τ-σ)σ_3 e^ρσ_1 e^i/2(τ+σ)σ_3 ,
g_ = e^i/2(ϕ-ψ)σ_3 e^i(π/2-θ)σ_1 e^-i/2(ϕ+ψ)σ_3 ,
the orbifold identifies the group manifold under the shift Jejjala:2005yu,Chakrabarty:2015foaδ(σ,ϕ,ψ) = 2π/( 1,,) .
which in the bosonized representation (<ref>), (<ref>) amounts to
δ(_-_) = 2π/ , δ_ = -(+)2π/ , δ_ = (-)2π/ .
The null constraints are now
(+n_y) = 2m+n_5 w +M - (+)(2m'+n_5w'+M') -[(+)^2-1/2]n_5w_y
(-n_y) = 2m̅+n_5 w+M̅ - (-)(2m̅'+n_5w̅'+M̅') -[(-)^2-1/2]n_5w_y ,
while the Virasoro constraints are as before, equation (<ref>). The axial null constraint at large R_y imposes
- n_y =
m-m̅+M-M̅ - (+)(m'+n_5/2w'+M' ) + (-)( m̅'+n_5/2w̅'+M̅' ) - n_5/ w_y .
In the sector w_y=0, this constraint imposes that some linear combination of left/right and J^3 charges is a multiple of (recall that n_5/∈). This is the untwisted sector of the orbifold; when 0 the orbifold action is left/right asymmetric on the worldsheet. Note that one can regard this untwisted sector constraint as imposing on the momenta along σ,ϕ,ψ the condition that P_σ+ P_ϕ + P_ψ be a multiple of , which is precisely what one would deduce from the identification (<ref>). The twisted sectors of this asymmetric orbifold are again labelled by w_y, and amount to the fraction spectral flow
w→ w + w_y/ ,
w'→ w' + (+)w_y/ , w̅' →w̅' + (-)w_y/ ,
which amounts to adding strings that close only under the identifications (<ref>).
The state in the dual CFT has quantum numbers
h = N/4( 1+(+)^2-1/^2)
,
J_ = N/2 +/
h̅ = N/4( 1+(-)^2-1/^2)
, J̅_ = N/2 -/
where J_ is the J^3 component of the -symmetry (similarly for J̅_).
The state is 1/4-BPS (excited on the left) when -=±1, and 1/2-BPS when =0, =±1. We will mostly be interested in these cases, for which the states are in the R-R sector of the spacetime CFT Jejjala:2005yu,Giusto:2012yz,Chakrabarty:2015foa. More generally, the states are in the R-R sector for + odd, and in the NS-NS sector for + even; the latter are all non-BPS.
§.§ States in the dual CFT
The BPS states of the NS5-F1 system are simple to describe in terms of the T-dual along NS5-P system. There, they are simply BPS waves on the fivebranes. In a sector where the fivebranes are twisted into a single fivebrane wrapping n_5 times around ^1_y, momenta are fractionated by a factor of n_5; one can then have any number n_k^I of modes with momenta k/N in any of 8 bosonic and 8 fermionic polarizations I, subject to the overall constraint
∑_k,I k n_k^I = N .
We can thus label the 1/2-BPS spectrum via
| {n_k^I}⟩ ,
and this labeling passes through the T-duality to the NS5-F1 frame where the labels refer to a collection of (generically fractional, unless k is a multiple of n_5) winding strings carried as fivebrane excitations. The bosonic excitations split into the four scalars X^αα̇ that describe the gyration of the fivebrane in its transverse space, and four more comprising a gauge multiplet on the fivebrane.
[For type IIB in the NS5-F1 frame, the polarization labels refer to the gauge multiplet A^AB on the T-dual fivebrane consisting of a scalar and a self-dual antisymmetric tensor. The scalar A^[AB] is typically referred to in the literature as the “00” mode.]
The ℤ_k orbifold geometries are dual to the states
| {n_k^++ = N/k, others = 0 }⟩ .
The 1/2-BPS supergravity solutions of the NS5-F1 system can be put in a standard form <cit.> (restricting for simplicity to solutions with pure NS-NS fluxes; for the general solution, see Appendix B of Martinec:2022okx)
ds^2 = ^-1[ -(dτ+)^2 + (dσ + )^2 ] + d· d + ds^2_ ,
B = ^-1(dτ + )∧(dσ+) + _ij dx^i∧ dx^j ,
e^2Φ = / , d = *_⊥ d,
d = *_⊥ d ,
where are Cartesian coordinates on the transverse space to the fivebranes, related to × Euler angles via
x^1+ix^2 ≡ x^++ = coshρ sinθ e^iϕ ,
x^3+ix^4 ≡ x^-+ = sinhρ cosθ e^iψ .
The harmonic forms and functions appearing in this solution can be written in terms of a Green's function representation,
which in the AdS_3 decoupling limit takes the form
= 1/2π∑_m=1^∫_0^2πdṽ/|-_(ṽ)|^2 , = 1/2π∑_m=1^∫_0^2πdṽ _·_/|-_(ṽ)|^2 ,
= _i dx^i ,
_i = 1/2π∑_m=1^∫_0^2πdṽ ^_i(ṽ)/|-_(ṽ)|^2 ,
involving source profile functions ^i_(ṽ), m=1,2,…,, that describe the locations of the fivebranes in their transverse space (overdots denote derivatives with respect to ṽ).
We choose twisted boundary conditions for the source profile functions,
^i_(ṽ+2π)=^i_ (m+1)(ṽ)
that bind all the fivebranes together (provided and n_5 are relatively prime) and introduce the fractional moding described above. We can then bundle all the fivebrane profile functions together into a single profile ^I(ṽ) that extends over the range [0,2π).
The key point here is that the Fourier amplitudes a_k^I of the source profile functions
^I(ṽ) = ∑_k α_k^I e^ikṽ/n_5
are coherent state parameters whose expectation values are the mode numbers n_k^I. We thus have a direct map between the labels of 1/2-BPS states and the bulk geometries they correspond to, at the fully non-linear level. In particular, for the orbifold geometries (AdS_3×^3)/_, we know exactly what the fivebranes are doing – the source profile function has only a single mode excited
^++(ṽ) = α^++_ e^iṽ/n_5
with α^++_ = N/, and describes fivebranes spiraling around in the torus parametrized by the T-dual to the AdS_3 azimuthal coordinate σ and the ^3 Euler angle ϕ, and sitting at ρ=0 Martinec:2017ztd. The spiral runs along the (,n_5) cycle of this torus; see figure <ref>.
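As a concrete illustration of the smeared Green's-function representation above, the following minimal numerical sketch evaluates a harmonic function of the schematic form H(x) = (1/2π) Σ_m ∫ dṽ / |x − F_m(ṽ)|² for a circular source profile in the x^1–x^2 plane by simple quadrature. The profile radius, the number of strands, and the overall normalization are placeholders chosen for illustration, not values fixed by the text.

# Numerical sketch of the smeared harmonic function
#   H(x) = (1/2π) sum_m ∫ dv / |x - F_m(v)|^2
# for a circular source profile in the x1-x2 plane, evaluated by quadrature.
# Radius `a`, strand count, and normalization are illustrative placeholders.
import numpy as np

def harmonic_H(x, a=1.0, n_strands=1, n_quad=2000):
    v = np.linspace(0.0, 2.0 * np.pi, n_quad, endpoint=False)
    H = 0.0
    for m in range(n_strands):
        # circular profile F(v) = a (cos v, sin v, 0, 0), shifted strand by strand
        F = np.stack([a * np.cos(v + 2 * np.pi * m / n_strands),
                      a * np.sin(v + 2 * np.pi * m / n_strands),
                      np.zeros_like(v), np.zeros_like(v)], axis=1)
        dist2 = np.sum((np.asarray(x) - F) ** 2, axis=1)
        H += np.mean(1.0 / dist2)   # (1/2π) ∫ dv ≈ mean over uniform samples
    return H

# Far from the ring the function falls off like n_strands/|x|^2, the expected
# behavior of a smeared point source in four transverse dimensions.
print(harmonic_H([10.0, 0.0, 0.0, 0.0], n_strands=5))
print(5.0 / 10.0**2)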
While supergravity only sees the smeared average of the fivebrane/string source, worldsheet string theory is sensitive to the underlying fivebrane source distribution. The spiraling source distribution of the orbifold states is seen by D-brane probes that end on the fivebranes Martinec:2019wzw. There is a _n_5 structure to the D-branes of the WZW model at level n_5 (see for instance Maldacena:2001ky), and a _ symmetry of fractional brane states on the orbifold Diaconescu:1997br; together these should realize, directly in the R_y→∞ limit, the spiraling structure seen in the somewhat more elaborate construction of boundary states in the gauged WZW model at general R_y given in Martinec:2019wzw.
The CFT dual is often described in the language of the symmetric product orbifold, which pertains to a weak-coupling region of the moduli space. In the symmetric product, 1/2-BPS states are associated to conjugacy classes of the symmetric group, which are labeled by the same data (<ref>), (<ref>), and describe collections of copies (cycles) of the block ^4 CFT that are sewn together by a cyclically twisted boundary condition analogous to (<ref>). Each cycle has a collection of 1/2-BPS ground states labeled by the same data as the polarization labels I carried by the bulk fivebrane excitations.
The BPS states are preserved under the marginal deformation to the strongly-coupled regime of the CFT where the bulk dual has a supergravity approximation. The analysis of Martinec:2020gkv,Martinec:2022okx shows that much of the symmetric product structure survives this deformation. In particular, the effect of 1/2-BPS string vertex operators is to deform perturbatively the string winding condensate carried by the fivebranes, and at the same time the geometry that the condensate is sourcing.
[The aspects of the vertex operator that are responsible for these two effects are related by FZZ duality Giveon:2016dxe,Martinec:2020gkv.]
For instance, a 1/2-BPS graviton vertex operator ^αα̇_j',w_y sews together a number of background strings into a longer string, while changing its polarization state:
(|++⟩_)^2j'+1 ⟶ |αα̇⟩_(2j'+1)+w_y n_5
where |I⟩_k denotes a cycle of length k in polarization state I
(see Martinec:2020gkv,Martinec:2022okx for details).
Exponentiating the vertex operators thus coherently changes the winding condensate carried by the fivebranes as specified by the profile functions ^I(ṽ).
§.§ Deformation back to linear dilaton asymptotics
We have shown that the R_y→∞ endpoint of the marginal line of null gauged WZW models (<ref>) can be described as a shift orbifold of the × theory. The full marginal line should therefore have a description as a marginal deformation of the orbifold. The obvious candidate is a deformation by the bilinear of the L/R currents that generate the shift. To end this section, we show that indeed the null gauged WZW model is the marginal line whose linearized deformation away from the orbifold /_ is by the operator .
[Current-current marginal deformations were explored in Hassan:1992gi,Giveon:1993ph,Gershon:1993wu, and the review Giveon:1994fu.]
The null-gauged WZW model yields the bulk NS5-F1 circular supertube supergravity solution at finite R_y Martinec:2017ztd,Bufalini:2021ndn,Martinec:2022okx
ds^2 = ( -du dv + ds_𝐓^4^2 )
+ n_5[ dρ^2+ dθ^2+1/Σ( cosh^2ρ sin^2θ dϕ^2 + sinh^2ρ cos^2θ dψ^2 )]
.6cm
+ 2/Σ( cos^2θ dy dψ - sin^2θ dt dϕ)
+^2/Σ[ sin^2θ dϕ^2 + cos^2θ dψ^2
+ du dv ],
B = cos^2θ (^2+cosh^2ρ)/Σ dϕ∧ dψ + ^2/n_5Σ dt∧ dy
.6cm
-cos^2θ/Σ dt∧ dψ
+ sin^2θ/Σ dy∧ dϕ ,
u=t+y , v=t-y ,
e^-2Φ = n_1Σ/^2^2 V_4 , Σ = ^2/ + sinh^2ρ + cos^2θ , ≡
in the gauge τ=σ=0;
it has a _ orbifold singularity located along the circle ρ=0,θ=π/2 parametrized by ϕ.
The generalization to the geometries resulting from the generic null vectors (<ref>) may be found for instance in Martinec:2017ztd,Martinec:2018nco,Bufalini:2021ndn, and the CFT dual is described in Giusto:2012yz,Chakrabarty:2015foa.
The limit R_y→∞ of the background (<ref>) is the orbifold (<ref>), as we saw by choosing to gauge fix t,y instead of τ,σ, and noting that a residual discrete gauge symmetry implements the orbifold identification on as discussed in Martinec:2018nco. Backing away from the large R_y limit, null gauging modifies the group sigma model on by a term
/Σ
after integrating out the gauge field. In the gauge t=y=0, this is just a current-current deformation of /_. Indeed, in this gauge the currents , reduce to their components along =×, and Σ^-1∼ n_5/( R_y)^2 is the parameter of the infinitesimal current-current deformation. This result is a special case of the analysis of Forste:2003km relating gauged WZW models of the form (×)/ to current-current deformations of orbifolds of .
A special case of the above orbifolds was considered in Martinec:2001cf,Martinec:2002xq, where a restriction was made to orbifold orders that are divisors of n_5 under the thinking that other choices would be anomalous. We see here that this restriction can be lifted and that the orbifold is always consistent.
§ GENERALIZATION: MOVING DEFECTS
The static conical defect geometries above arise when we diagonalize the action of ℤ_k in SL(2,ℝ)_L×SL(2,ℝ)_R so that it is an elliptic transformation that fixes the timelike geodesic at the center of AdS_3, ρ=0 in the AdS_3 global coordinates (<ref>).
Elements of SL(2,ℝ) lie in one of three conjugacy classes. Realizing group elements g ∈ SL(2,ℝ) as 2×2 matrices, the conjugacy classes are characterized by the matrix trace
Elliptic : |Tr(g)| < 2 ,
Parabolic : |Tr(g)| = 2 ,
Hyperbolic : |Tr(g)| > 2 .
But group theoretically, we can conjugate a discrete group identification g ∼ γ_R g γ_L by any elements h_R, h_L on the left and right to find an equivalent ℤ_k identification
g ∼ (h_R γ_R h_R^-1) g (h_L^-1 γ_L h_L) .
The conjugated transformations of course lie in the same conjugacy class as the originals.
For instance, we can take h_L, h_R to be boost transformations in SL(2,ℝ) (i.e. hyperbolic group elements).
These do not leave fixed the time translation Killing vector ∂_τ, and so unlike the original orbifold action, which was time-translation invariant, the new orbifold identification will be time-dependent. One finds in this way a conical defect moving along a massive particle geodesic, which oscillates and rotates around the center of AdS_3. Such a boosted defect (having γ_L = γ_R, and thus moving purely radially) is depicted in figure <ref>; generalizing to arbitrary γ_L ≠ γ_R results in an oscillating, rotating defect of the sort depicted in figure <ref>.
Note that these states are distinct from the “superstrata” or “microstrata” considered in Bena:2015bea,Bena:2017xbt,Ganchev:2021pgs. In those constructions one is making a coherent excitation by strings that individually carry AdS_3 momentum excitations L_-1, L̄_-1 Martinec:2022okx, while here we are making a coherent excitation acting by exp[γ_L L_-1 + γ_R L̄_-1] on the whole state.
The inclusion of transformations (γ_L',γ_R') in the conjugation of the orbifold identification has the additional effect of rotating the orientation of the source ring along ^3. The initial defect lies along the ring ρ=0,θ=π/2, and is extended along τ and ϕ. The conjugation rotates the ring onto some other great circle in ^3. For simplicity. we will not consider this possibility further.
The static geodesic in the center of AdS_3 is given by g=e^2π iνξσ_3.
A boosted defect travels along the geodesic
g(ξ) = e^σ_1/2 e^2π iνξσ_3 e^σ_1/2 ,
having classical conserved charges
= N/16π^2[∂_ξ g ∂_ξ g^-1] = N/2ν^2
J^3_ = -iN/4π[g^-1(∂_ξ g)σ_3] = Nνcosh()
J̅^3_ = -iN/4π[(∂_ξ g)g^-1σ_3] = Nνcosh() .
The AdS_3 angular momentum is nonzero when the left and right boosts differ.
The parameter ν determines the deficit angle α and rest mass ℓ M of the defect as
α = 2πν , ℓ M = N/2( ν^2 -1)
where ℓ is the AdS_3 curvature radius and
N = ℓ/4G = c_ST/6 = n_1n_5 ,
with c_ST the central charge of the dual spacetime CFT and G the 3d Newton constant.
In the worldsheet theory, the boosted defect is simply the quotient of SL(2,ℝ)×SU(2) by a conjugate embedding of ℤ_k.
Multiple conical defects traveling along geodesics in AdS_3×𝕊^3 can be described classically by more general orbifolds, generated by a collection of boosted ℤ_{k_i} identifications, where i labels the defect.
Each generating transformation (γ_L,i, γ_R,i) ∈ SL(2,ℝ)_L×SL(2,ℝ)_R is individually an elliptic transformation making a conical defect. Each defect can be locally BPS, but the supersymmetries are misaligned by the relative boosts and so all supersymmetries are broken. The holonomy around subsets of defects is given by the products of the left and right group elements associated to the members of the subset. The product is always hyperbolic – the holonomy of the BTZ black hole.
[For defects of small deficit angles, which are not global orbifolds of ×, the holonomy around aggregates of conical defects can remain elliptic; but the deficit angle around global orbifold defects is at least π, and we will see that the holonomy around even a pair of such defects is hyperbolic.]
§.§ Non-orbiting defects and non-rotating black holes
Consider for instance two ℤ_k defects, boosted by equal and opposite amounts ±η on both left and right. The spatial hypersurface at τ=0 is the Poincaré disk ℍ_2, with the defects momentarily at rest at some radius ρ=η. The disk has two wedges cut out with defect angle α; the sides of the wedges are geodesics in ℍ_2 passing from the defects to the boundary of the disk, see figure <ref>. As time evolves, the two defects fall toward each other and collide. Beyond this collision point is a region of timelike identification in AdS_3. This region of closed timelike curves is usually excised from spacetime as being unphysical.
The two defects have identifications under
g_sl ∼ γ_R,i g_sl γ_L,i , g_su ∼ γ'_R,i g_su γ'_L,i , i=1,2 ,
where we take (again for simplicity) γ'_R,i = γ'_L,i = e^iπσ_3/k.
For both defects having deficit angle α one has
[More generally, one could displace the defect in the direction with azimuthal angle ϕ, in which case one should rotate the group elements to γ_L,R → e^-iϕσ_3/2 γ_L,R e^iϕσ_3/2.]
γ_L,1 = e^-ησ_1/2 e^+iασ_3/2 e^+ησ_1/2 , γ_R,1 = e^+ησ_1/2 e^-iασ_3/2 e^-ησ_1/2
γ_L,2 = e^+ησ_1/2 e^+iασ_3/2 e^-ησ_1/2 , γ_R,2 = e^-ησ_1/2 e^-iασ_3/2 e^+ησ_1/2 .
The product of these two transformations specifies the holonomy of the connection around the pair of defects. The conjugacy class is given by
Tr( γ_L,1 γ_L,2 ) = Tr( γ_R,2 γ_R,1 ) = 2(cosα cosh^2η-sinh^2η)
Thus two ℤ_2 defects (α=π) on top of one another (η=0) make an extremal black hole (the parabolic conjugacy class |Tr(g)|=2); for any finite separation η>0, the holonomy around a pair of ℤ_2 defects is in the hyperbolic conjugacy class |Tr(g)|>2 that characterizes a BTZ black hole.
[Exceptionally, one can regard the geometry of a pair of ℤ_2 defects as the ℤ_2 orbifold of the vacuum BTZ geometry itself, by the reflection that exchanges the two exterior regions; this explains why the defects are always on the horizon at the moment of time reflection symmetry.]
Smaller defects (which cannot arise as global orbifolds) can collide to make either composite conical defects of larger deficit angle, or black holes; defects of deficit angle larger than π (and thus all of the ℤ_k defects for k>2) always make black holes.
Indeed, the BTZ solution can also be described as a ℤ orbifold of AdS_3 generated by the hyperbolic transformation
g ∼ e^π(r_+-r_-)σ_1 g e^π(r_++r_-)σ_1
where r_± are the inner and outer horizon radii relative to the AdS scale ℓ, in terms of which the black hole mass and angular momenta are given by
ℓ M = ℓ (r_+^2+r_-^2)/8G = N/2( r_+^2 + r_-^2)
,
J = ℓ r_+ r_- /4G = N r_+ r_- .
For the non-spinning black hole that forms from a pair of ℤ_2 conical defects via (<ref>), one has from equation (<ref>) r_-=0 and r_+=2η/π. Essentially, the defects' separation at the moment of time symmetry becomes kinetic energy when they collide, so the greater the separation on this hypersurface, the greater the mass of the final state black hole. The collision of a pair of ℤ_2 defects is depicted in figure <ref>, and that of two ℤ_3 defects in figure <ref>.
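The trace formula and the resulting horizon radius can be checked numerically with explicit 2×2 matrices. The sketch below is purely illustrative (it assumes numpy and builds the matrix exponential by eigendecomposition); it verifies Tr(γ_L,1 γ_L,2) = 2(cos α cosh²η − sinh²η) and, for two ℤ_2 defects (α = π, r_- = 0), recovers r_+ = 2η/π from |Tr| = 2 cosh(π r_+).

# Numerical check of the two-defect holonomy trace and of r_+ = 2η/π
# for two Z_2 defects (α = π, r_- = 0); purely illustrative.
import numpy as np

s1 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
s3 = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def expm2(M):
    """Matrix exponential of a 2x2 matrix via eigendecomposition."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

def gamma_L(alpha, eta):
    return expm2(-eta * s1 / 2) @ expm2(1j * alpha * s3 / 2) @ expm2(eta * s1 / 2)

alpha, eta = np.pi, 0.7          # two Z_2 defects, boosted by ±η
g12 = gamma_L(alpha, eta) @ gamma_L(alpha, -eta)
tr = np.trace(g12).real

print(tr, 2 * (np.cos(alpha) * np.cosh(eta)**2 - np.sinh(eta)**2))
# |Tr| = 2 cosh(π r_+) for the non-rotating BTZ holonomy, giving r_+ = 2η/π:
r_plus = np.arccosh(abs(tr) / 2) / np.pi
print(r_plus, 2 * eta / np.pi)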
These backgrounds seem quite bizarre in Lorentz signature – they consist of a past black hole singularity that spontaneously spits out a pair of conical defects, which are then re-absorbed into a future singularity. Here we have perhaps a black hole analogue of all the air in a room rushing to one corner, in the sense that the random chaotic distribution of degrees of freedom that characterizes black hole microstates somehow spontaneously organizes into a tiny coherent corner of phase space (the pair of defects), and then just as quickly devolves into a random chaotic mess again. This process is not visible to the external observer, for whom the entire sequence is hidden behind the event horizon.
[There may be a horizon distortion away from spherical symmetry, since rotational symmetry in σ would seem to be broken to the ℤ_2 rotation that permutes the defects. This would be seen as a set of quasinormal modes decaying along the future horizon, having emerged from the past horizon in similar fashion.]
The singularity can be characterized as the locus where the transformation (<ref>) becomes null.
[Beyond this surface is a region of timelike identifications that is not usually considered part of spacetime, but is part of the Lorentzian orbifold geometry. For this reason, we will soon pass to Euclidean AdS_3 where the group action is always sensible.]
The past (future) light cone of the intersection of the future (past) singularity and the conformal boundary is the future (past) horizon. The singularity locus on the conformal boundary is a set of fixed points of the identification. For the pair of ℤ_2 defects (<ref>), the intersection of the future singularity with the conformal boundary lies at τ=π/2, σ=±π/2 (which are the same point under the orbifold identification), and for the past singularity at τ=-π/2, σ=±π/2.
The work of Steif:1995pq,Birmingham:1999yt,Brill:2007zq,Lindgren:2015fum shows that the defects lie inside an apparent horizon whenever there is a closed geodesic that surrounds them on the surface of time symmetry. The geometry of this surface of time symmetry is the Poincaré disk ℍ_2; geodesics are circles that intersect the boundary of this disk orthogonally. The straight line between the two defects in figure <ref> is such a geodesic; thus a path that travels along the upper side of this geodesic from the first defect to the second, follows the identification, and returns slightly below the geodesic to the first defect is a closed path that becomes a geodesic in the limit that the displacements above and below shrink to zero. Thus the pair of ℤ_2 defects lies along a degenerate apparent horizon. In figure <ref>, the red dashed line is a closed geodesic due to the identifications; thus the defects lie within the apparent horizon.
Time-symmetric configurations where the conical defects lie fully inside an apparent horizon arise when we use ℤ_k defects with k>2 as in <ref>, or when we have more than a pair of ℤ_2 defects. For example, one can consider a ℤ_p symmetric array of ℤ_2 defects on a surface of time symmetry where they are instantaneously all at rest. Such a configuration is depicted in figure <ref> for p=8.
There is of course nothing sacrosanct about such a circularly symmetric array; there are various moduli corresponding to moving the locations of the defects, subject to the constraint that the circular arcs bounding the fundamental domain don't collide. Such a collision would lead to a new conical defect and so a different spatial geometry than was assumed.
§.§ Orbiting defects and rotating black holes
Rotating black holes are obtained when the defects carry AdS_3 angular momentum. This can be achieved either by generalizing the defects from the 1/2-BPS case of section <ref> to the 1/4-BPS or non-supersymmetric examples of section <ref>; alternatively one can coherently spin up a system of multiple 1/2-BPS defects by applying independent left and right boosts to each as in (<ref>).
For the latter choice, there is no longer a moment of time reflection symmetry for the defect trajectories in the ℤ_p symmetric array, due to the rotation. There is however still a moment when the defects reach a maximum radius; at that point, they still have an angular velocity, and that breaks the time reversal symmetry.
From (<ref>), the energy and angular momentum of a rotating defect geodesic with left/right boosts η_L,R = η_v ± η_a are
½( J^3_sl + J̄^3_sl ) = Nν cosh η_v cosh η_a , ½( J^3_sl - J̄^3_sl ) = Nν sinh η_v sinh η_a .
The geodesic travels a path having
sinh^2 ρ = cos^2(νξ) sinh^2 η_v + sin^2(νξ) sinh^2 η_a ,
in other words between a minimum radius η_a and a maximum radius η_v. Circular orbits have η_v = ±η_a (i.e. one of η_L, η_R vanishes).
Once again, we can consider a pair of defects orbiting one another. The identifications (<ref>) that make two defects of deficit angle α are
γ_L,1 = e^-η_Lσ_1/2 e^+iασ_3/2 e^+η_Lσ_1/2 , γ_R,1 = e^+η_Rσ_1/2 e^-iασ_3/2 e^-η_Rσ_1/2
γ_L,2 = e^+η_Lσ_1/2 e^+iασ_3/2 e^-η_Lσ_1/2 , γ_R,2 = e^-η_Rσ_1/2 e^-iασ_3/2 e^+η_Rσ_1/2
and the conjugacy classes of the holonomies along a curve that surrounds both defects are
| Tr( γ_L,1 γ_L,2 )| = 2(cosα cosh^2η_L - sinh^2η_L)
| Tr( γ_R,2 γ_R,1 )| = 2(cosα cosh^2η_R - sinh^2η_R) .
so that for ℤ_2 defects we identify
2η_L = π(r_+ + r_-) , 2η_R = π(r_+ - r_-) .
The corresponding geometry is depicted in figure <ref>.
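A small illustrative sketch of the resulting bookkeeping, using the relations just quoted together with the BTZ mass and angular momentum formulas above; the value of N and the boost parameters are placeholders.

# Sketch: BTZ parameters from the boosts of two orbiting Z_2 defects,
# using 2η_L = π(r_+ + r_-), 2η_R = π(r_+ - r_-) together with
# ℓM = (N/2)(r_+^2 + r_-^2) and J = N r_+ r_-.  Values are illustrative.
import numpy as np

def btz_from_boosts(eta_L, eta_R, N=1000):
    r_plus = (eta_L + eta_R) / np.pi
    r_minus = (eta_L - eta_R) / np.pi
    ell_M = 0.5 * N * (r_plus**2 + r_minus**2)
    J = N * r_plus * r_minus
    return r_plus, r_minus, ell_M, J

# Purely radial boosts (η_L = η_R) give a non-rotating black hole, J = 0;
# unequal left/right boosts spin it up.
print(btz_from_boosts(0.7, 0.7))
print(btz_from_boosts(0.9, 0.5))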
There is no longer a point of collision of the two defects as they orbit around one another. Instead, as they approach, the defects are relatively boosted, and the points being identified by a given defect are tilted by the boost so that one is later and one is earlier in the global time coordinate. For a single boosted defect, there are no closed timelike curves under the identification, since one is just boosting a static identification under a spatial rotation. But with multiple, relatively boosted defects, it can happen that in the set of points identified under the boosted rotations and their products, one can find regions of closed timelike curves. These regions are separated from the region where all closed paths under the identifications are spacelike by a locus of null identifications; this latter locus is the “singularity” of the rotating BTZ geometry – not a curvature singularity, but rather a breakdown of causality (though an infinitesimal perturbation leads to a curvature singularity Horowitz:2002mw).
When defects are moving purely radially, or with sufficiently small amounts of orbital angular velocity, the locus of closed timelike curves lies inside the event horizon, and the boundary of this region is usually taken to be the singularity of the rotating BTZ black hole. However, more serious pathologies can occur with the large deficit angles associated to global orbifolds DeDeo:2002yg, in which every point in AdS_3 lies on a closed timelike curve. Consider for instance the pair of orbiting defects in figure <ref>. The colored lines, which indicate the locus of points being identified at a particular proper time along the defect trajectory, are tilted so that one is earlier than the other; see also figure <ref>. Now consider a null geodesic along the conformal boundary traveling in the opposite sense to the defects' rotation. Every time the geodesic passes through a wedge of identification, it gets shifted backward in time. It was shown in DeDeo:2002yg that for circular orbits ( when γ_R^ =0 for both defects), when the sum of the two defect angles equals 2π, this null curve closes on itself; and when the sum of the angles exceeds 2π, there are closed timelike curves passing through every point on the conformal boundary. Thus the entire spacetime is pathological.
What this result means is that there is a limit to the angular velocity of the defects which avoids this pathology. For instance, the pair of ℤ_2 defects in figure <ref> must have γ_R,i>0 so that they are not quite on circular orbits. For ℤ_n defects with n>2, one cannot achieve a circular orbit without encountering closed timelike curves at much smaller values of the axial boost.
The physical spacetime is usually taken to be the region outside the locus of closed null/timelike curves, and the region having closed timelike curves is excised. Note that from the point of view of the worldsheet sigma model, this is not allowed under the rules of constructing orbifolds – one must consider the entire group manifold and carry out the quotient by whatever discrete subgroup is being gauged. Perturbative string theory breaks down at the singularity, and one needs a complete non-perturbative description in order to regularize the singularity and follow the subsequent evolution.
In general, it is complicated to determine the location of the singularity analytically. But when the locus of null identifications reaches the conformal boundary, it has a fixed point. For example, for two _2 defects the fixed points are located at τ=±π/2, σ=±π/2, just as they were for non-orbiting defects, and one recovers the points in figure <ref> where the singularity hits the boundary.
Another option for making systems with AdS_3 angular momentum is to consider the general defects (<ref>) which, when both of their parameters are nonzero, carry AdS_3 as well as S^3 angular momentum
h-h̅ = N /^2
,
J_-J̅_= N/ .
We can again take a single such defect and boost it radially as we did above for 1/2-BPS defects to displace it radially on the surface of time symmetry; and again make a _p symmetric array which now carries AdS_3 angular momentum because the individual defects do.
§ EUCLIDEAN CONTINUATION: CONICAL DEFECTS IN ℍ_3
Because the Lorentzian orbifold that describes colliding/orbiting defects has unphysical regions of closed timelike curves, it is highly unlikely that the entire orbifold geometry survives a proper treatment in non-perturbative string theory. While an investigation of the nature of the singularity is needless to say of great interest, we will need more sophisticated tools to do so. The worldsheet formalism is on firmer ground if we can avoid such pathologies, and indeed we can by passing to the Euclidean theory. Here the defects typically avoid one another, and there is a self-consistent perturbative expansion around the orbifold background.
§.§ Euclidean continuation of AdS_3 : the hyperbolic ball
While everywhere we refer to the AdS_3 isometries in terms of the group SL(2,ℝ), it is somewhat more convenient to use an equivalent parametrization of AdS_3 as the SU(1,1) group manifold, as it diagonalizes the global time translation isometry and thus simplifies the analytic continuation to the Euclidean section. Elements of SU(1,1) can be written as
g = ( t+iz   x-iy
      x+iy   t-iz ) ,   det[g] = t^2+z^2-x^2-y^2 = 1 .
Without the determinant constraint, one has ℝ^2,2 with metric ds^2 = -dt^2-dz^2+dx^2+dy^2. The constraint is solved in terms of Euler angles
g = e^i(τ-σ)σ_3/2 e^ρσ_1 e^i(τ+σ)σ_3/2 = ( e^iτ coshρ   e^-iσ sinhρ
                                            e^iσ sinhρ   e^-iτ coshρ ) ,
and the AdS_3 metric in this global parametrization is the induced metric on the constraint surface
ds^2 = (1/2) tr[ g^-1dg g^-1dg ] = dρ^2 + sinh^2ρ dσ^2 - cosh^2ρ dτ^2 .
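The unit-determinant constraint and the induced metric (including the overall factor of 1/2 in front of the trace, which is our reconstruction of a garbled prefactor) can be verified symbolically; a minimal sympy sketch:

```python
import sympy as sp

rho, sig, tau = sp.symbols('rho sigma tau', real=True)
g = sp.Matrix([[sp.exp(sp.I*tau)*sp.cosh(rho), sp.exp(-sp.I*sig)*sp.sinh(rho)],
               [sp.exp(sp.I*sig)*sp.sinh(rho), sp.exp(-sp.I*tau)*sp.cosh(rho)]])

print(sp.simplify(g.det()))     # -> 1, the SU(1,1) constraint

coords = (rho, sig, tau)
M = [g.inv() * g.diff(c) for c in coords]   # Maurer-Cartan components g^{-1} dg
metric = sp.Matrix(3, 3, lambda i, j: sp.simplify(sp.Rational(1, 2) * (M[i] * M[j]).trace()))
print(metric)   # diag(1, sinh(rho)**2, -cosh(rho)**2): the global AdS_3 metric quoted above
```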
The trajectory
g = e^iνξσ_3
is a geodesic τ = νξ sitting at ρ=0.
Conical defects are made by considering the two surfaces σ=±α/2, cutting out the wedge between them, and identifying the two sides under a discrete rotation by an angle α,
g ∼ e^+iασ_3/2 g e^-iασ_3/2 ,
which keeps this geodesic fixed.
Conjugation by left and right boosts as in equation (<ref>) moves the geodesic onto a trajectory that oscillates around the center of AdS_3 and deforms the two geodesic surfaces that are identified to make the geometry with a boosted defect.
Euclidean AdS_3 is obtained by Wick rotating z→ -iz so that we now parametrize the diagonal elements in terms of light cone coordinates t±z,
h = ( t+z   x-iy
      x+iy   t-z ) ,   det[h] = t^2-z^2-x^2-y^2 = 1 ;
the space of these matrices without the constraint is simply ℝ^3,1, with the hyperboloid of timelike, future-directed vectors of fixed length, preserved by Lorentz transformations SO(3,1)≃ PSL(2,ℂ), which we can parametrize via the Euclidean continuation τ = -i t_E of (<ref>),
h = ( e^t_E coshρ   e^-iσ sinhρ
      e^iσ sinhρ   e^-t_E coshρ ) ;
the metric continues to
ds^2 = dρ^2 + sinh^2ρ dσ^2 + cosh^2ρ dt_E^2 .
A vector in ℝ^3,1 is realized here as a Weyl bispinor, and the Lorentz group acts via two commuting copies of complexified SU(1,1) on the left and right:
h ↦ g h g^† ,   g = e^i(ω_i - iη^i)σ_i/2 .
The independent left/right actions of the Lorentzian AdS_3 have Wick rotated into these two commuting complexified actions.
The space of the matrices with the constraint (<ref>) is not a group manifold, rather it is the symmetric space ℍ_3 = SL(2,ℂ)/SU(2).
The relation between the independent left and right actions of the Lorentzian section and the pair of complexified actions in the Euclidean section is made manifest by rewriting the left and right actions as vector and axial transformations
g_L^ = e^(iα^3_L σ_3 + α^1_Lσ_1 + α^2_Lσ_2)/2 , α^i_L = α_a^i+α_v^i
g_R^ = e^(iα^3_R σ_3 + α^1_Rσ_1 + α^2_Rσ_2)/2 , α^i_R = α_a^i-α_v^i .
Then the Wick rotation to ℍ_3 simply rotates
α^1,2_v ↔ iω^1,2 , α^3_v ↔ω^3
α^1,2_a ↔η^1,2 , α^3_a ↔ -iη^3 .
Translations along σ in the parametrization (<ref>) of Lorentzian AdS_3 are vector transformations by α^3_v, which Wick rotate to the rotations along ω^3 in ℍ_3; translations along τ are associated to the axial transformations α^3_a, which Wick rotate to the boosts η^3 along t_E in ℍ_3. In the limit ρ→∞, the standard polar coordinates ϑ,φ on the S^2 conformal boundary of ℍ_3 are related to t_E,σ via
φ = σ ,   cos ϑ = tanh t_E .
§.§ Euclidean continuation of conical defects: orbifolds ℍ_3/Γ
The elliptic identification under spatial rotations in AdS_3 that makes a static conical defect thus continues to an elliptic identification in ℍ_3 Krasnov:2001va. The left and right boosts (<ref>) that make a radially oscillating defect have η_L = η_R ≡ η and so are axial transformations that Wick rotate to the same boost η in ℍ_3. Thus the transformations (<ref>) that generate the discrete group of identifications of AdS_3 also define a corresponding discrete group of identifications of ℍ_3.
A simple way of characterizing geodesics, in either ℍ_3 or AdS_3, is to work with the unprojected matrices in ℝ^3,1 or ℝ^2,2, before imposing the condition of unit determinant. Geodesics in these Cartesian spaces are straight lines; those with nonzero determinant project down onto geodesics in hyperbolic space upon rescaling the matrix by 1/√(det). We can take a straight line in ℝ^2,2 that represents a geodesic in AdS_3, and Wick rotate it to get a corresponding geodesic in ℍ_3.
For instance, the static geodesic at the center of AdS_3 is the line
t=1 ,   z=ξ ,   x=y=0 .
The same line in the Cartesian coordinates on ℝ^3,1 projects down to a geodesic that runs from pole to pole of the boundary sphere, running radially through ℍ_3.
Then for ξ=0, in ℝ^2,2 we have a timelike vector at the origin in x,y,z which projects down onto τ=0,ρ=0 in AdS_3, and t_E=0,ρ=0 in ℍ_3. The surfaces τ=0 in AdS_3 and t_E=0 in ℍ_3 both have the geometry of the Poincaré disk, where we can continue from one to the other; the above geodesics pass through the center of the disk.
The planar surfaces being identified by a rotation are also the same in ℝ^2,2 and ℝ^3,1, and can be parametrized by
t=1 ,   z=ξ ,   x+iy = r e^i(ϕ±α/2)
with ξ∈ℝ, r∈ℝ_+ for a wedge centered about the direction ϕ and identified under a rotation by angle α.
Similarly, the radially boosted geodesic is radially boosted in both Cartesian spaces ℝ^2,2 and ℝ^3,1,
t=coshη ,   z=ξ ,   x+iy = e^iϕ sinhη ,
so that at the moment of time reflection symmetry ξ=0, the geodesic is displaced from the origin to ρ=η in both AdS_3 and ℍ_3.
The planes (<ref>) being identified boost to some other planes that are identified under a rotation conjugated by the boost, which are again the same on the surface of time reflection symmetry.
The Euclidean continuation of a pair of oppositely boosted π defects (whose geometry on the surface of time reflection symmetry is depicted in figure <ref>) is shown in figure <ref> (see also figures <ref> and <ref>). Note that the geodesics travelled by the defects are in general circles orthogonally intersecting the spherical boundary of , and the “wedges” being removed are bounded by segments of spheres between two longitude lines (in the orientation where the poles are the locations where the defect hits the boundary); these spherical segments also orthogonally intersect the boundary.
Because the defect location is at a radial extremum on both Euclidean and Lorentzian sections as a function of time, it is at its maximum radius on the Lorentzian section while it is at its minimum radius in the Euclidean section (see figures <ref> and <ref>).
Generalizing to nonzero angular momentum means turning on some amount of “axial boost” in AdS_3 (parametrized by η_a in equation (<ref>)) that leads to a rotation that tilts the trajectory in according to (<ref>). The boost in displaces the radial extremum of the geodesic away from the origin as it did for the non-rotating defect, and the rotation (around the same axis as the boost) tilts it so that it is no longer perpendicular to the equatorial plane; see figure <ref>. The lift of the geodesic analogous to (<ref>) is a straight line, tilted away from running along the ẑ direction by the rotation:
t=coshη_a ,   z = ξ cosω ,   x+iy = e^iϕ sinhη_a + ξ sinω .
This projects down in to a tilted geodesic of the sort depicted in figure <ref>.
The ℝ^2,2 version of this geodesic is the same straight line, parametrized as
t=coshη_a ,   z = ξ coshη_v ,   x+iy = e^iϕ sinhη_a + ξ sinhη_v .
For orbiting defects, the line is tilted in a plane perpendicular to the radial direction in the unprojected Cartesian space; the tilting is a boost in ℝ^2,2 and a rotation in ℝ^3,1. We demand that the lines representing the geodesic coincide and in particular have the same slope; this relates the boost rapidity η_v in ℝ^2,2 to the rotation angle ω in ℝ^3,1 via
tanhη_v = tanω .
The identification of ℍ_3 under a rotation by angle α is also tilted into
h ∼ γ h γ^† ,   γ = e^i(ω - iη_a)σ_1/2 e^-iασ_3/2 e^-i(ω - iη_a)σ_1/2 .
The boosted wedge being cut out of Lorentzian AdS_3, whose sides are identified by a boosted rotation that fixes the boosted defect geodesic, continues into a spherical wedge being cut out from ℍ_3, with the sides being identified by an elliptic transformation in SL(2,ℂ) which fixes the circular geodesic in ℍ_3 that is the analytic continuation of the Lorentzian defect trajectory.
Note that the appropriate time coordinate for the Wick rotation can no longer be taken to be the global time on the covering space. The problem is that there is no simple hypersurface of the Lorentzian geometry that contains all of the wedges being cut out of the full 3d geometry (the identifications involve some timelike motion due to the way that the geodesics are tilted), and no way of simply matching to a corresponding hypersurface of the Euclidean geometry with identical wedges cut out. The appropriate parametrization of the geometry co-rotates with the defects.
§.§ Defect correlators
We thus have a prescription for the Euclidean continuation of a collection of conical defects. The Euclidean geometry is an orbifold of ℍ_3 generated by a collection of elliptic elements {γ_L,i, γ_R,i = γ_L,i^†} implementing ℤ_n_i twists.
In contrast to the general 1/2-BPS state, the conical defects here are special – their Euclidean versions locally near the defect are ℤ_n quotients of ℍ_3×S^3, and the bulk geometry is globally a quotient
( ℍ_3 × S^3 ) / Γ ,
where Γ is the discrete subgroup of SL(2,ℂ) generated by the collection of elliptic identifications for the defects. Each conical defect travels a geodesic (circular arc) in ℍ_3, landing on the conformal S^2 boundary at a pair of conjugate points z_i, z_i'. In the non-orbiting case, these two points are symmetric about the equator, z_i'=z_i^*, in order to have a surface of time reflection symmetry; though in general (e.g. for the orbiting case, and more generally for generic correlators of heavy operators) one can place the fixed points of the elliptic transformations independently on the sphere.
In the dual CFT, each conical defect corresponds to a conjugate pair of local operators inserted at the corresponding points z_i and z_i'.
The holographic map for these operators is precisely understood. As reviewed above, for the 1/2-BPS ℤ_n conical defects, these operators locally transform the vacuum state, which is a condensate of strings with the lowest unit of winding 1/n_5, into a condensate of strings of winding n/n_5. The fivebranes back-react by assuming a spiral shape at the source.
In (ℳ)^N/S_N symmetric orbifold terms,
the ℤ_n conical defect is associated to a particular orbifold twist operator where all the cycles have length n. These operators create states in the R-R sector of the CFT, and carry spacetime quantum numbers (<ref>).
In string theory one uses such operators, having conformal dimension a finite fraction of the central charge c_ST^ =6N, to construct a “heavy” background – a state macroscopically excited away from the spacetime CFT vacuum – and then worldsheet string theory calculates the correlation functions of perturbative excitations around this state. In the spacetime CFT, these are correlation functions with two “heavy” operator insertions as well as some number of “light” operator insertions (whose conformal dimension h≪ c_ST^). Correlation functions involving a single heavy defect have been analyzed in Bufalini:2022wzu.
The geometries we have built above are the leading semi-classical approximation (in 1/N) to spacetime CFT correlation functions of several heavy operators (one conjugate pair for each defect). Furthermore, these heavy backgrounds are exact solutions to classical string theory, non-perturbatively in α'. We saw this in section <ref> for a single conical defect, where the single ℤ_n identification could be diagonalized, and the U(1) factor in the CFT where the orbifold acts as a shift symmetry could be isolated. Correlation functions of the orbifold are those of the cosets SL(2,ℝ)/U(1) and SU(2)/U(1), times free field vertex operators on a modified lattice of windings and momenta resulting from the shift orbifold.
For multiple defects, the ℤ_n_i generators of the various defects are not simultaneously diagonalizable, and so one has a non-abelian orbifold group – an infinite discrete subgroup of SL(2,ℂ)×SU(2)_L×SU(2)_R known as a Kleinian group. If we consider orbiting, 1/4-BPS, or non-supersymmetric conical defects, the orbifold is asymmetric.
To compute worldsheet amplitudes in the presence of multiple defects, we might look for a suitable generalization of the methods developed in Kutasov:1999xu,Maldacena:2000kv,Maldacena:2001km,Teschner:1999ug,Teschner:2001gi,Ponsot:2002cp,Hikida:2007tq,Dei:2021xgh,Dei:2021yom,Dei:2022pkr,Bufalini:2022toj,Ashok:2020dnc,Nippanikar:2021skr,Ashok:2022vdz in order to build string vertex operators invariant under the group Γ. There, a central role is played by the coherent state vertex operator Φ_j(x,x̅) built from a somewhat different matrix parametrization of ℍ_3,
h = ( 1   0
      γ   1 ) ( e^ϕ   0
                0   e^-ϕ ) ( 1   γ̅
                             0   1 )
  = ( e^ϕ    e^ϕγ̅
      e^ϕγ   e^-ϕ + e^ϕγγ̅ ) ,
on which g ∈ SL(2,ℂ) again acts via h ↦ g h g^†. The functions
Φ_j(x,x̅) = (2j-1)/π ( (x,-1)· h ·(x̅,-1)^T )^-2j = (2j-1)/π ( |γ-x|^2 e^ϕ + e^-ϕ )^-2j
are eigenfunctions of the Laplacian on H_3^+. The complex parameter x labels points on the boundary. The operator Φ_j(x,x̅) transforms as a tensor of weight (j,j) under SL(2,ℂ). Formally, one can construct untwisted sector vertex operators on the orbifold by summing this operator over its images under the orbifold group γ(x), γ∈Γ, appropriately regulating the infinite sum involved if necessary.
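The equality of the two expressions for Φ_j above can be checked directly; note that the contraction (x,-1)·h·(x̅,-1)^T is our reconstruction of an extraction-garbled intermediate step, so the following sympy sketch verifies only that this particular contraction reproduces |γ-x|^2 e^ϕ + e^-ϕ (treating barred variables as independent symbols):

```python
import sympy as sp

phi = sp.Symbol('phi', real=True)
gam, gamb, x, xb = sp.symbols('gamma gammabar x xbar')   # barred variables kept independent

h = sp.Matrix([[sp.exp(phi),     sp.exp(phi)*gamb],
               [sp.exp(phi)*gam, sp.exp(-phi) + sp.exp(phi)*gam*gamb]])

row, col = sp.Matrix([[x, -1]]), sp.Matrix([xb, -1])
lhs = sp.expand((row * h * col)[0])
rhs = sp.expand((gam - x)*(gamb - xb)*sp.exp(phi) + sp.exp(-phi))   # = |gamma-x|^2 e^phi + e^-phi
print(sp.simplify(lhs - rhs))   # -> 0
```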
As shown in Martinec:2018nco,Martinec:2022okx, for a single defect there are untwisted sector vertex operators describing supergravity excitations around the defect; for the multi-defect geometries with sufficiently sharp conical defects (thus having large redshift to the orbifold point), or sufficiently well-separated defects, the untwisted sector operators will be fairly well localized around an individual defect, with tails that overlap the other defects. As usual for hyperbolic manifolds, the spectrum of the Laplacian will be chaotic.
There will be in addition twisted sector vertex operators that deform the winding condensate associated to the defect. One expects such winding operators for each defect when several are present, and more generally for each element γ∈Γ; there are winding strings that only close up to the action of γ. It would be interesting to understand their properties, as they describe perturbations of the Coulomb branch tails of the little string condensate carried by the underlying fivebranes.
Of particular interest is the pair of ℤ_2 defects discussed above. As we turn off the boost η that separates their insertion points on the conformal boundary, we approach the OPE limit of the correlator, see figure <ref>.
The leading term in the OPE of two 1/2-BPS _2 R-R defect operators with h=h̅=N/4 and J=J̅=N/4 is a 1/2-BPS NS-NS defect operator with h=h̅=N/2 and J=J̅=N/2; the associated state is an extremal BTZ black hole. Indeed, in the construction of section <ref>, the limit where we send the relative boost η to zero, the two defects touch; the closed geodesic on the surface of time symmetry (the red dashed line in figure <ref>) shrinks to zero size and the area of the apparent horizon vanishes.
[This may be one instance where the possibility of conjugation in mentioned briefly in section <ref> could be of use – we can regularize the OPE in by separating the two _2 defect rings in ^3. ]
An intriguing weak-coupling counterpart to this process involves the operator product of the corresponding defect operators in the symmetric orbifold. These are labelled by the conjugacy class of the symmetric group consisting of N/2 disjoint transpositions (2-cycles), which we denote (2)^N/2. The product of two such twists decomposes on a variety of conjugacy classes, but at large N these are all concentrated on classes consisting of a handful of cycles whose lengths are of order N together with a smattering of shorter cycles.
Heuristically, the reason is as follows. Each twist operator is a sum over all permutations within the conjugacy class (2)^N/2. Their product thus contains many terms; consider some random term in the sum. Using an overall relabelling of letters, we can take the first permutation to be P_1=(12)(34)… (N-1,N), and the second to be P_2=(σ_1σ_2)(σ_3σ_4)…(σ_N-1σ_N). At the first step, we take the first transposition (12) of P_1 and ask where P_2 sends the letter 2. It is either 1, and the cycle length in the product is one, or it is some other letter ℓ; since there are N-2 chances that it is not 1, it is vastly more likely that the cycle length is at least two. One then asks where P_1 sends ℓ, call it ℓ', and one asks where P_2 sends ℓ'; again it is either 1 or (much more likely) some other letter ℓ”, and so on until one eventually closes the cycle. The odds are that one will have to wait some finite fraction of the (N/2) transpositions until one manages to hit on the one that closes the cycle.
Once one has closed the cycle, it will generically be before we have run through all the transpositions in P_1, so we begin anew constructing another independent cycle in the product; and so on, until one has used up all the letters. Thus while there will generically be a handful of long cycles taking up most of the letters, once one is down to the last few transpositions in this process, the cycles are short and this is why the generic element in the product has a mix of both long and short cycles.
Indeed, a random sampling of 10^5 examples of permutations in the class (2)^N/2 (with N=10^4) yields a statistical distribution of cycle lengths for their product (shown in figure <ref>), and a distribution weighted by the number of copies of the seed CFT in the cycle (shown in figure <ref>). In this example, cycle lengths in the product always come in pairs – the cycle in the product containing the letter 2ℓ-1 in P_1 and the cycle in the product containing the letter 2ℓ in P_1 are always disjoint and have the same length. This is why in the figure, the cycle lengths cannot exceed N/2.
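A minimal version of this numerical experiment (with a smaller sample size than the 10^5 draws quoted above, so the histogram is noisier) can be reproduced as follows:

```python
import random
from collections import Counter

def random_involution(N):
    """A uniform random permutation in the class (2)^{N/2}: pair up the letters."""
    letters = list(range(N))
    random.shuffle(letters)
    perm = [0] * N
    for i in range(0, N, 2):
        a, b = letters[i], letters[i + 1]
        perm[a], perm[b] = b, a
    return perm

def cycle_lengths(perm):
    """Cycle type of a permutation given as a list of images."""
    seen = [False] * len(perm)
    lengths = []
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, j = 0, start
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        lengths.append(length)
    return lengths

N, samples = 10_000, 100
hist = Counter()
for _ in range(samples):
    p1, p2 = random_involution(N), random_involution(N)
    product = [p1[p2[i]] for i in range(N)]   # apply p2 first, then p1
    hist.update(cycle_lengths(product))

# a handful of O(N) cycles dominate, and lengths never exceed N/2, as noted above
print(sorted(hist.items())[-5:])
```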
This is just what one would expect for the CFT state describing an extremal black hole – the resulting state is in the Hagedorn phase of the little string.
[Which in the case of the symmetric orbifold is the same as the fundamental string, since this CFT lies in the cusp of the moduli space that describes n_1=N fundamental strings bound to a single fivebrane n_5=1 Seiberg:1999xz,Larsen:1999uk, and thus there is no fractionation phenomenon. There are multiple cusps of the moduli space, one for each factorization of N into (n_1,n_5). The cusps with n_5>1 are in the strong-coupling regime of the spacetime CFT.] Since the OPE is BPS at leading order in the separation of the operators (and therefore not renormalized deBoer:2008ss,Baggio:2012rr), this picture is robust as we go from weak to strong coupling in the CFT.
[In the family of supertube backgrounds, one can approach the black hole threshold from below as one moves along the configuration space of 1/2-BPS supertubes at the BPS bound toward the point where it intersects the extremal black hole threshold Martinec:2019wzw; here one is approaching the extremal black hole threshold from above as one merges the two _2 defects.]
§ DISCUSSION
§.§ Generalizations
While we used _ orbifold defects because of their exact worldsheet description in perturbative string theory, in principle one might hope to construct the supergravity solution for the Euclidean geometry with defects corresponding to any of the Lunin-Mathur geometries as local sources. These would be multicenter “bubbled geometries” where each center is locally some 1/2-BPS wiggly fivebrane source whose back-reaction in supergravity was worked out in Lunin:2001fv for a single source.
Again, the relative boost of the sources breaks supersymmetry mildly, but one might hope that some of the techniques recently developed for constructing non-BPS supergravity solutions in AdS_3 Ganchev:2021pgs might be of use.
The 1/2-BPS geometries have a spread of asymptotic deficit angles ranging from zero (global AdS_3) to 2π (the extremal black hole), controlled by the shape of the profile functions ^I(ṽ).
These geometries are not pure conical defects however, except for the _ examples discussed above Lunin:2002iz, instead there are typically long-range tails in the metric and other supergravity fields; but one can for instance describe giant graviton states and shockwaves in AdS_3 Lunin:2002bj,Lunin:2002iz,Lunin:2002fw,Chakrabarty:2021sff, and it would be interesting to investigate their collisions given the extensive holographic dictionary regarding these states that has been built up over the years (see Bena:2022rna for a review).
The geometries are asymptotically conical around each 1/2-BPS state, however, and one may use the metric holonomies to characterize the results of their collisions. For sufficiently light 1/2-BPS states with not too much kinetic energy, the collision simply makes a state with slightly larger asymptotic conical defect, as the product of the corresponding group elements remains in the elliptic conjugacy class Matschull:1998rv,Holst:1999tc,Birmingham:1999yt,Krasnov:2002rn,Brill:2007zq,Lindgren:2015fum. Eventually the state settles down and thermalizes. One can tune the final state defect to be near the black hole threshold, either slightly below or slightly above, by tuning the asymptotic deficit angle and the relative radial boost of the defects; it would be interesting to see what phenomena arise in the bulk that characterize the Hawking-Page transition to the black hole phase. Arguments were given in Martinec:2019wzw that in the F1-NS5 system, this transition deconfines the non-abelian excitations of the little string; one might hope to find evidence for this phenomenon through the study of defect collisions, and in particular the OPE limit discussed above in the collision of two _2 defects to make a near-extremal BTZ black hole. The rotating BTZ threshold, where the energy and AdS_3 angular momentum become equal, is also of interest. Here the defects don't collide; they can remain macroscopically separated, but generate a null singularity inside an apparent horizon.
Another direction would be to generalize from 1/2-BPS supertube defects to 1/4-BPS superstrata traveling approximately along geodesics in . In particular, there are superstrata with long AdS_2 throats Bena:2017xbt; it would be interesting to investigate their collisions within a larger asymptotically AdS_3 arena, watching the throats coalesce and form an AdS_2 black hole from some nonsingular initial data. Mergers of similar throats has been discussed in the BPS and near-BPS context in Bena:2006kb,Martinec:2015pfa.
§.§ The singularity
The weak-coupling picture of the _2 defect collision to make a near-extremal BTZ black hole presented above seems to hold lessons more generally. In this regime, the defects are associated to twist operators in the conjugacy class (2)^N/2 in the symmetric orbifold; the Euclidean continuation of their near-extremal collision is the OPE limit of the four-point function where the two R-R defect operators make a near-BPS pure state in the NS-NS sector that is (a) at the black hole threshold, and (b) concentrated on Hagedorn-like conjugacy classes in the symmetric product.
In the string theory description of n_5 NS5-branes, the effect of 1/2-BPS string vertex operators is to change the winding condensate carried by the fivebranes as in equation (<ref>) Martinec:2020gkv,Martinec:2022okx. Exponentiating the vertex operator ^++_j'=1/2,w_y=0 turns the AdS_3 vacuum into the _2 orbifold defect, as we see from the identification of the coherent profile (<ref>) it produces (with =2) in the Lunin-Mathur construction. Since the BPS OPE is a protected quantity deBoer:2008ss, the above numerical evaluation of the OPE is an accurate picture of what is happening at strong coupling.
[It would be interesting to see if some of the methods of Lin:2022rzw,Lin:2022zxd could be adapted to the present context.]
We see that the collision of two such string condensates makes a Hagedorn string gas at the black hole threshold. Though the description of this process in the bulk is of course beyond string perturbation theory, it is a direct extrapolation of perturbative processes where we see several shorter strings combine to make longer strings as in (<ref>).
As we move away from the BPS limit and above the BTZ black hole threshold (keeping the AdS_3 angular momentum equal to zero), the defects move apart on the surface of time reflection symmetry in and AdS_3; they then collide more violently in AdS_3 at a somewhat later time, starting from rest at this larger separation. It is tempting to regard the singularity as the locus where the string condensates associated to the defects collide and rearrange to make a more excited Hagedorn gas of little strings than the one that arises at extremality.
This is just the sort of effect that one expects to resolve the singularity in string theory.
A major question is whether there are precursors of this collision already at the scale of the apparent horizon, which should then also be visible in the Euclidean correlators directly (since the apparent horizon is a feature of the geometries of both and AdS_3 at the surface of time symmetry where they agree). This is the question of where the transition to the Hagedorn phase is happening in the bulk (to the extent that it can be localized). The phenomenon of longitudinal spreading of perturbative strings in high-energy scattering is well-known Gross:1987kza,Gross:1987ar,Amati:1987wq,Amati:1988tn,Susskind:1993aa,Lowe:1995ac,Polchinski:1995ta,Giddings:2007bw,Dodelson:2015uoa,Dodelson:2015toa,Dodelson:2017hyu. One way of phrasing the question here is whether a similar phenomenon governs little string scattering.
It might also be interesting to revisit the calculation of scattering in Lorentzian orbifolds Liu:2002ft,Liu:2002kb to see the effects of winding sectors, the spread of the string wavefunction, and so on. Even though the geometries aren't fully consistent as string backgrounds, they may contain clues about the non-perturbative dynamics. Furthermore, one can study the Euclidean theory, which is perfectly regular, and its properties under analytic continuation to the Lorentzian section.
The fluctuations of 1/2-BPS F1-NS5 configurations have been studied in Alday:2006nd,Raju:2018xue, by quantizing the restricted phase space of the 1/2-BPS supergravity solutions themselves using the parametrization in terms of the profile functions ^I(ṽ) of section <ref>. It was found that the fluctuations are larger than would be expected on the basis of considerations of an extremal black hole with a “stretched horizon”. While the ensemble includes only a tiny subset of the full fivebrane degrees of freedom, and is working in a regime below the black hole threshold where non-abelian fivebrane dynamics hasn't fully set in, these results may be an indication that indeed the extent of the fivebrane wavefunction exceeds what one might have expected on the basis of properties of the classical geometry such as local curvature invariants, .
Orbiting defects generically do not collide, at least initially; instead, the orbifold identification goes null, and then timelike in the classical solution. Small perturbations lead to singularities Liu:2002kb,Horowitz:2002mw.
Three-charge extremal black holes can be constructed using two _2 defects as in section <ref>, and performing mostly left-moving boosts , sending → 0^+. This separates them to radius ρ=, and then boosts them in the AdS_3 azimuthal direction by an amount approaching , so that the orbit becomes circular. On the CFT side, we are coherently exciting the two defect operators with an exponential of L_-1 but not of L̅_-1. At τ=0, the two defects lie on the horizon, as one can see from figure <ref>. Again, these geometries are close to BPS, and describe a pure state in the ensemble of such black holes. It would be interesting to investigate the properties of the CFT operator product expansion in this limit, and see how properties of the bulk geometry are reflected in its structure.
§.§ Dustballs and fuzzballs
One can construct arbitrarily long “bag of gold” spatial geometries by considering concentric rings of defects. Basically, the defects insert positive curvature in the geometry, trying to close it off, while the negative vacuum curvature in between the defects makes the geometry expand. By adjusting the radii of the rings we can make these two effects balance out, and then add another concentric ring, and another, ad infinitum. Figure <ref> shows the first two circular defect arrays (on the surface of time reflection symmetry) in such a “bag of gold” geometry (in this case with p=3 and =2). We have tiled the Poincaré disk with hyperbolic triangles to make the hyperbolic geometry easier to visualize. Because all hyperbolic triangles in the figure are isomorphic, both the solid red and dashed red lines are closed geodesics of the same length; the two curvature effects are precisely in balance. The tubular region between the dashed and solid red geodesics (containing a ring of three defects) is a “bag of gold” building block . We can then cut the geometry apart along the outermost (solid red) geodesic, and insert as many copies of as we want, glued end to end, to make the bag of gold depicted in figure <ref>. Again the location of each defect is a modulus which can be varied over some domain within the “bag of gold”. As we vary the locations of the defects, the lengths of the closed geodesics (and hence the areas of the various apparent horizons) increase or decrease in some range.
Clearly there is an infinite variety of such bag-of-gold geometries that can be constructed, by varying the type, number and placement of the defects. A variant of the above construction has been employed in Balasubramanian:2022gmo,Balasubramanian:2022lnw to argue that such states can be taken as a coherent state basis for black hole microstates. As mentioned above, these configurations are extremely atypical, the equivalent of using a basis of microstates of a gas in a box where one has put all the molecules into a tiny corner of their phase space, and then relied on the fact of their thermalization to claim that one has captured the generic microstate. Classically, this may be true, because of the infinite precision with which one can specify classical trajectories. Quantum mechanically, working in a tiny corner of phase space is not sampling the generic microstate. The approach of Balasubramanian:2022gmo,Balasubramanian:2022lnw is to produce enough such outlandish configurations that one may argue that one has indeed sampled the phase space.
Perhaps it is more accurate to say that one has sampled the phase space of geometrical configurations on the surface of time reflection symmetry (for non-rotating black holes). All these configurations evolve rapidly (in a proper time of order the AdS scale ℓ) to a stringy regime where the singularity of classical general relativity is expected to be resolved by strongly-coupled string theory. A more accurate picture of the quantum wavefunction might therefore be that the wavefunction has two regimes or branches, that one might label geometric and non-geometric (or in the terminology of Martinec:2019wzw, following Bena:2012hf,Lee:2012sc,Martinec:2015pfa, “Coulomb” and “Higgs”), and that the geometrical construction of Balasubramanian:2022gmo,Balasubramanian:2022lnw captures the geometrical or “Coulomb” branch. Something similar is seen in the Euclidean “cigar” geometry of the / CFT that is the worldsheet description of Euclidean black fivebranes. There the background is that of a geometrical cigar with metric
ds^2 = n_5^2( dρ^2 + tanh^2ρ d^2 )
that is inextricably linked to a string winding condensate
⟨⟩∝exp[-n_5(ρ+i)]
that was interpreted in Giveon:2015cma,Giveon:2016dxe in terms of the vacuum string wavefunction again having two branches. Moreover, the winding condensate of the Euclidean theory has been suggested (see Kutasov:2000jp) to analytically continue to a Hagedorn state of fundamental strings near the horizon in Lorentz signature (since the local Unruh temperature a string length from the Horizon is the string scale).
It might be that a similar structure characterizes the non-perturbative wavefunction of the BTZ black hole in string theory – that there is a geometrical branch of the wavefunction that includes all the “bag-of-gold” geometries described above, and a stringy branch supported on Hagedorn configurations of the little string that opens up near the singularity of the classical geometry, but also extends out to the horizon scale in order to resolve the information paradox. The perturbative string condensate seen in the above coset is also a property of the parent worldsheet theory on the Euclidean BTZ background /, with the same candidate interpretation. One might say that the Hagedorn structure of the perturbative string near the horizon is simply the “Coulomb branch” tail of the Hagedorn little string wavefunction that governs the “Higgs branch” and the statistical physics underlying BTZ black hole thermodynamics.
While we have described the Euclidean evolution as preparing a state at the surface of time symmetry, that then further evolves as a black hole and develops a singularity of the effective geometry, it is perhaps more accurate to say that the singularity is already there in the state, and that the “Coulomb” and “Higgs” aspects of the wavefunction are both present. There is no invariant bulk notion of time, and therefore of spatial slicing of the geometry; any surface anchored to a given time on the conformal boundary is equivalent, due to Hamiltonian constraints in the bulk gravity theory. In any invariant characterization of the state, the wavefunction will contain both the black hole geometry, and the resolution of its singularity via a Hagedorn gas of little strings.
The existence of a pristine geometrical black hole interior, largely decoupled from the little string dynamics, cannot persist for long after the black hole forms; very likely it is a transient phenomenon – otherwise one has the Hawking process going on at the horizon, and the attendant paradoxes. Indeed, unless there are approximate conservation laws that sequester the geometrical collective modes from the underlying chaos of black hole dynamics, one expects these modes to mix strongly with the little string degrees of freedom on time scales of order the scrambling time τ_ scr^ ∼β/2πlog(S-S_0). After that, any geometrical picture of the interior should receive substantial modifications.
How then should we think of the flow of time, and causal relations in the black hole interior? This is crucial for the resolution of the information paradox. One thing that string theory has, that general relativity does not, is the underlying fivebrane dynamics, about which the perturbative string is giving us substantial evidence. Perturbative strings see a causal structure which is that of the effective geometry with its horizon as an effective causal barrier. If string theory resolves the information paradox, it may be because there is a separate clock provided by the underlying brane dynamics, and evolution in that clock maintains coherence of the wavefunction between the singularity and the horizon.
For instance, the redshift to the horizon of a near-extremal black hole is not infinite, as the classical geometry suggests; rather, there is a finite depth to the throat, and a finite redshift, after quantum fluctuations of the geometry are taken into account Heydeman:2020hhw,Lin:2022rzw,Lin:2022zxd. Quantization of the modes captured by JT gravity results in a gap in the spectrum of order 1/N between the BPS ground states and the lowest excited states, indicating a maximum redshift in the geometry consistent with the weak-coupling, Hagedorn little string picture of the state space. This result indeed suggests the absence of a causal horizon, and an IR dynamics governed by the internal clock of the fivebranes.
In this picture of the black hole, objects in the geometrical regime are swept into the interior where they fractionate into little strings. The Hagedorn little string gas, coherent at the horizon scale, boils off Hawking quanta that unitarily radiate information back out into the geometrical regime. Outside the horizon, the little strings are effectively confined; only the singlet modes – fundamental strings – are light excitations here, and one has the conventional geometry and dynamics of the black hole exterior.
The above scenario shares with recently proposed “wormhole dynamics” scenarios, or “ER=EPR” proposals (see Almheiri:2020cfm for a review), the idea that there are alternative pathways for information flow, beyond what one sees in the classical black hole geometry. The difference here is that there is no role for the conventional process of pair creation of Hawking quanta at the horizon. Instead, the Hawking quanta emerge at the horizon from the Hagedorn gas of little strings.
A particularly vivid example of this phenomenon is provided by the Hawking radiation of wound strings from the BTZ black hole Martinec:2023plo. Here, one is radiating strings which contribute to the background F1 charge, drawing them from the electric H_3 flux carried by the fivebranes. In the black hole phase, the string winding charge is the winding charge of little strings (each unit of fundamental string winding fragments into n_5 little string windings, so that the total winding of the latter is N=n_1n_5). A little string gas that correctly accounts for the entropy will also correctly account for the radiation rate since, as shown in Martinec:2023plo,Martinec:2023iaf, the emission probability is entirely governed by the available phase space.
Unitarity of the evaporation process requires that the string being radiated be reconstituted from the underlying little string gas; but that string cannot have tunneled to the horizon from the singularity of the classical BTZ geometry; that gives the wrong amplitude, and thus the wrong thermodynamics. The radiated string also cannot come from the Hawking process, which generates a negative charge cloud near the horizon from vacuum pair creation, that would be detected by the Gauss law, and leaves unmodified the original charge near the singularity, also detected by the Gauss law. A geometrical picture of the interior would have these two charge distributions well-separated from one another, leading to the standard information paradox problem of the radiated strings carrying away no information about the initial black hole state.
In the version of the fuzzball scenario presented here, the string emission instead proceeds via n_5 little string windings coalescing from the little string branch of the wavefunction and depositing a fundamental string onto the geometrical branch of the wavefunction.
§ ACKNOWLEDGEMENTS
I thank
Samir Mathur
and
David Turton for discussions.
The work of EJM is supported in part by DOE grant DE-SC0009924.
§.§ Euclidean continuation of defect geometries
The Euclidean analytic continuation of AdS_3 is the hyperbolic ball ℍ_3, whose conformal boundary is S^2. The Euler angle coordinates (<ref>) provide global coordinates for AdS_3, in which the metric is
ds^2 = dρ^2 + sinh^2ρ dσ^2 - cosh^2ρ dτ^2 ;
the Wick rotation sets t_E=iτ, so that the metric becomes
ds^2 = dρ^2 + sinh^2ρ dσ^2 + cosh^2ρ dt_E^2 .
The SL(2,ℝ)×SL(2,ℝ) isometries of AdS_3 analytically continue to the SL(2,ℂ) isometry of ℍ_3 = SL(2,ℂ)/SU(2).
The hypersurface t_E=0 of the Euclidean section can be matched to the hypersurface τ=0 of time reflection symmetry of boosted defects in the Lorentzian section, such as those in figure <ref>; both hypersurfaces are quotients of the Poincaré disk. The structure of conical defects generated by elliptic elements of SL(2,ℝ)×SL(2,ℝ) on the Lorentzian side becomes a collection of conical defects generated by elliptic elements of SL(2,ℂ).
These identifications extend away from the hypersurface τ=0 as follows.
Start with the geodesic ρ=0 that runs straight through from the south pole of the boundary sphere to the north pole, parametrized by t_E; this is the Euclidean continuation of the geodesic that sits in the middle of AdS_3 at ρ=0, parametrized by τ. Elliptic elements fixing this geodesic are translations along σ both in Lorentzian AdS_3 and Euclidean ℍ_3, and thus the Euclidean continuation of the static conical defects such as that depicted in figure <ref> cuts out a wedge from the solid ball between surfaces of fixed longitude and identifies opposite sides under the rotation along σ. The class of elliptic elements of interest to describe radially boosted geodesics are conjugate to this canonical presentation and fix the geodesic that sits at fixed ρ,σ and is parametrized by t_E, removing the same wedge from each Poincaré disk t_E = const. On the Poincaré disk ℍ_2 one removes the wedge between two geodesics that pass through the fixed point, meeting at a relative angle equal to the defect angle. In the ℍ_3 version, SL(2,ℂ) maps hemispheres to hemispheres; an elliptic element rotates one hemisphere into another, and the two hemispheres intersect along a geodesic. One excises the region between the two hemispheres to make the defect; see figure <ref>.
| http://arxiv.org/abs/2307.01466v1 | 20230704040957 | A lower bound for p_c in range-R bond percolation in four, five and six dimensions | ["Jieliang Hong"] | math.PR | ["math.PR", "60K35, 60J80, 60J68"] |
A lower bound for p_c in range-R bond percolation in four, five and six dimensions
Jieliang Hong
Department of Mathematics, Southern University of Science and Technology,
Shenzhen, China
E-mail: [email protected]
============================================================================================================================================
For the range-R bond percolation in d=4,5,6, we obtain a lower bound for the critical probability p_c for R large, agreeing with the conjectured asymptotics and thus complementing the corresponding results of Van der Hofstad-Sakai <cit.> for d>6, and Frei-Perkins <cit.>, Hong <cit.> for d≤ 3. The proof follows by showing the extinction of the associated SIR epidemic model and introducing a self-avoiding branching random walk where births onto visited sites are suppressed and the total range of which dominates that of the SIR epidemic process.
§ INTRODUCTION AND THE MAIN RESULT
Set R∈ℕ. The range-R bond percolation takes place on the scaled integer lattice ℤ_R^d=ℤ^d/R={x/R: x∈ℤ^d}, which is equivalent to Bernoulli bond percolation on ℤ^d in which bonds are allowed to form over a long range when R is large. Such range-R bond percolation dates back at least to the “spread-out” model considered in Hara and Slade <cit.>. It can be used to model the spread of disease in a large population when the range of infection can be very long, in particular due to increased interactions and more frequent communication within the population. Let x, y∈ℤ^d_R be neighbours if 0<‖x-y‖_∞≤ 1, where ‖·‖_∞ denotes the l^∞ norm on ℝ^d, and write x∼ y if x,y are neighbours. Let 𝒩(x) be the set of neighbours of x and denote its size by
V(R):=|𝒩(x)|=|{y∈ℤ^d_R: 0<‖y-x‖_∞≤ 1}|=(2R+1)^d-1.
Here |S| stands for the cardinality of a finite set S. Now, as usual in Bernoulli bond percolation, we include the edge (x,y) for any two neighbours x∼ y with some probability p>0, independently of all other edges. Denote by G=G_R the resulting subgraph whose vertex set is ℤ_R^d and whose edge set is the set of open edges. Define 𝒞(0) to be the cluster in G containing 0. The critical probability p_c is then given by
p_c=p_c(R)=inf{p: ℙ_p(|𝒞(0)|=∞)>0}.
We are interested in finding the asymptotic behavior of p_c(R) as R→∞.
Write f(R)∼ g(R) as R→∞ if lim_R→∞ f(R)/g(R)=1. It was first shown in Penrose <cit.> that
p_c(R) ∼1/V(R) as R→∞, in d≥ 2,
which is analogous to the results of Kesten <cit.> for the nearest neighbour percolation on ^d when d→∞. Later in higher dimensions d> 6, Van der Hofstad and Sakai <cit.> use lace expansion to get finer asymptotics on p_c(R):
p_c(R)V(R)-1 ∼θ_d/R^d,
where θ_d is given in terms of a probability concerning random walk with uniform steps on [-1,1]^d. See (<ref>) below for the explicit expression of θ_d.
The extension of (<ref>) to d=6,5 has been conjectured by the two authors in <cit.> while in dimension d=4, it has been conjectured by Edwin Perkins and Xinghua Zheng [private communication] that
p_c(R)V(R)-1 ∼θ_4 log R/R^4 in d=4.
They also conjecture the constant θ_4 to be 9/(2π^2), agreeing with our result below. In lower dimensions d=2,3, Frei-Perkins <cit.> and Hong <cit.> give, respectively, a lower and an upper bound for p_c, suggesting that the correct asymptotics for p_c(R)V(R)-1 should be
p_c(R)V(R)-1 ∼ θ_d/R^(d-1) in d=3,2
for some constant θ_d>0 that depends on the dimension. They use, in particular, ideas from branching random walks (BRW) and superprocesses to study the range-R bond percolation. Moreover, the fact that the local time of super-Brownian motion exists when d≤ 3 is essential in <cit.>, allowing the author to apply superprocess theory to study the asymptotics of p_c(R). However, the local time does not exist in d≥ 4, so new tools are needed.
In this paper, we adapt the methods of Frei-Perkins <cit.> for the SIR epidemic process and the ideas of Durrett-Perkins <cit.> for the contact process to study percolation in the intermediate dimensions 4≤ d≤ 6, and obtain a lower bound for p_c(R) that matches the conjectured asymptotics (<ref>) for d=5,6 and (<ref>) for d=4.
Let Y_1,Y_2, ⋯ be i.i.d. random variables on ℝ^d so that Y_1 is uniform on [-1,1]^d. Set U_n=Y_1+⋯+Y_n for n≥ 1. When d≥ 5, define
b_d=2^-d∑_n=1^∞∑_k=n^2n ℙ(U_k∈ [-1,1]^d)=2^-d∑_k=1^∞[(k+2)/2] ℙ(U_k∈ [-1,1]^d),
where [x] is the largest integer that is less than or equal to x. In d=4, we let b_4=9/(2π^2).
Let 4≤ d≤ 6. For any θ<b_d, there exists some constant c(θ)>0 so that for any positive integer R>c(θ), we have
p_c(R)V(R)-1 ≥ θ log R/R^4 in d=4, and p_c(R)V(R)-1 ≥ θ/R^d in d=5,6.
In particular, the above implies
lim inf_R→∞[p_c(R)V(R)-1]R^4/log R≥ b_4, in d=4,
and
lim inf_R→∞[p_c(R)V(R)-1] R^d ≥ b_d, in d=5,6.
(a) As will be justified later in Section <ref>, our methods for proving the lower bound are believed to be sharp for d=4, so we conjecture that the limit as in (<ref>) exists, and equals b_4, i.e.
lim_R→∞[p_c(R)V(R)-1]R^4/log R=b_4, in d=4.
We were informed by Edwin Perkins that the constant b_4=9/(2π^2) had been conjectured by him and Xinghua Zheng.
(b) In d=5,6, Van der Hofstad-Sakai <cit.> conjecture that the limit as in (<ref>) exists and equals (see (1.18) of <cit.> with U^⋆(n+1)=2^-d ℙ(U_n∈ [-1,1]^d))
b̂_d=2^-d ℙ(U_1∈ [-1,1]^d)+2^-d∑_k=2^∞ (k+2)/2 ℙ(U_k∈ [-1,1]^d).
One can easily check
b̂_d - b_d=2^-d∑_k=1^∞ (1/2) ℙ(U_2k+1∈ [-1,1]^d)>0,
hence our results partially confirm their conjectures. It would be desirable to upgrade the lower bound from b_d to b̂_d; we believe the gap comes from, in the language of branching random walk, the collisions that occur when two or more particles are sent to the same location at the same time. See more discussions below in Remark <ref>.
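Both b_d and b̂_d are straightforward to estimate numerically; the following Monte Carlo sketch (with an arbitrary truncation of the series and an arbitrary sample size, so the output is only approximate) illustrates the gap for d=5:

```python
import numpy as np

def estimate_b(d=5, n_walks=200_000, max_steps=60, seed=0):
    """Monte Carlo estimate of b_d = 2^{-d} sum_k floor((k+2)/2) P(U_k in [-1,1]^d)
    and of the conjectured hat{b}_d, which uses (k+2)/2 for k >= 2."""
    rng = np.random.default_rng(seed)
    hits = np.zeros(max_steps + 1)          # hits[k] ~ P(U_k in [-1,1]^d)
    U = np.zeros((n_walks, d))
    for k in range(1, max_steps + 1):
        U += rng.uniform(-1.0, 1.0, size=(n_walks, d))
        hits[k] = np.mean(np.all(np.abs(U) <= 1.0, axis=1))
    ks = np.arange(1, max_steps + 1)
    b = 2.0**(-d) * np.sum(((ks + 2) // 2) * hits[1:])
    b_hat = 2.0**(-d) * (hits[1] + np.sum((ks[1:] + 2) / 2 * hits[2:]))
    return b, b_hat

print(estimate_b())   # hat{b}_5 exceeds b_5 by 2^{-d} * sum_k P(U_{2k+1} in [-1,1]^d)/2
```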
From now on, we fix 0<θ<b_d and consider R∈ℕ with R≥ 100+4θ. Let N_R>0 be given by
N_R = R^d for d≥ 5, and N_R = R^4/log R for d=4.
Define
p(R)=(1+θ/N_R)/V(R).
It suffices to show that p_c(R)≥ p(R) for R large, or equivalently, the percolation does not occur if the Bernoulli parameter p is set to be p(R).
Convention on Functions and Constants. Constants whose value is unimportant and may change from line to line are denoted C, c, c_d, c_1,c_2,….
§ ACKNOWLEDGEMENTS
The author's work was partly supported by Startup Funding XXX. We thank Edwin Perkins for telling us their conjectured constant for d=4.
§ PROOF OF THE LOWER BOUND
§.§ SIR epidemic models
To prove such a lower bound as in Theorem <ref>, we will use the connection between the bond percolation and the discrete-time SIR epidemic model following <cit.> and <cit.>. The SIR epidemic process on ^d_R is defined by recording the status of all the vertices on _R^d: For any time n≥ 0, each vertex x∈^d_R is either infected, susceptible or recovered, the set of which is denoted respectively by η_n, ξ_n, ρ_n. Given the finite initial configurations of infected sites, η_0, and recovered sites, ρ_0, the epidemic evolves as follows: An infected site x∈η_n infects its susceptible neighbor y∈ξ_n, y∼ x with probability p=p(R), where the infections are conditionally independent given the current configuration. Infected sites at time n become recovered at time n+1, and recovered sites will be immune from further infections and stay recovered. Denote by (x,y) the undirected edge between two neighbors x∼ y in _R^d and let E(_R^d) be the set of all such edges. If we assign i.i.d. Bernoulli random variables B(e) with parameter p=p(R) to each edge e∈ E(_R^d), then the above SIR epidemic process can be formulated as
η_n+1= ⋃_x∈η_n{y∈ξ_n: B(x,y)=1}, ρ_n+1=ρ_n∪η_n, ξ_n+1=ξ_n\η_n+1.
To give a more specific description of the above SIR epidemic, we denote by ℱ_n=σ(ρ_0, η_k, k≤ n) the σ-field generated by the SIR epidemic. One can easily conclude from (<ref>) that ρ_n ∈ℱ_n since ρ_n=ρ_0 ∪η_0∪⋯∪η_n-1, and ξ_n∈ℱ_n by ξ_n=(η_n∪ρ_n)^c. Let ∂ C_n be the set of infection edges given by
∂ C_n:={(x,y)∈ E(ℤ_R^d): x∈η_n, y∈ξ_n}.
Then ∂ C_n ∈ℱ_n. For each y∈ℤ_R^d, define
D_n(y)={x∈η_n: (x,y) ∈∂ C_n}
to be the set of infected sites that will possibly infect y.
It follows from (1.6) of <cit.> that if S={(x_i,y_i): i≤ m} is a set of distinct edges in ℤ_R^d and V_0={y_i: i≤ m}, then for any V⊂ V_0,
ℙ(η_n+1=V|ℱ_n)=∏_y∈ V_0-V (1-p)^|D_n(y)|∏_y∈ V[1-(1-p)^|D_n(y)|] a.s. on {∂ C_n=S}.
The law of the SIR epidemic can be uniquely determined by (<ref>) and the joint law of (η_0,ρ_0). We refer the reader to Sections 1.2 and 2.1 of <cit.> for more details.
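For intuition, the epidemic defined above is easy to simulate directly. The sketch below works on the unscaled lattice ℤ^d with l^∞-neighbourhoods of radius R (equivalent to ℤ^d_R up to rescaling); the parameter values are illustrative choices only, not those used in the proofs:

```python
import itertools
import math
import random

def simulate_sir(d=4, R=3, theta=1.0, max_steps=200, seed=1):
    """Discrete-time SIR epidemic started from a single infected site at the origin.
    Each infected site infects each currently susceptible l^infty neighbour within
    range R independently with probability p = (1 + theta/N_R)/V_R, then recovers."""
    V_R = (2 * R + 1) ** d - 1
    N_R = R ** 4 / math.log(R) if d == 4 else R ** d
    p = (1 + theta / N_R) / V_R
    rng = random.Random(seed)

    offsets = [o for o in itertools.product(range(-R, R + 1), repeat=d) if any(o)]
    infected = {(0,) * d}
    recovered = set()
    sizes = []
    for _ in range(max_steps):
        if not infected:
            break
        newly = set()
        for x in infected:
            for o in offsets:
                y = tuple(xi + oi for xi, oi in zip(x, o))
                # overall, a susceptible y is infected with prob 1-(1-p)^{#infected neighbours}
                if y not in infected and y not in recovered and y not in newly:
                    if rng.random() < p:
                        newly.add(y)
        recovered |= infected
        infected = newly
        sizes.append(len(infected))
    return sizes, len(recovered)

sizes, total_range = simulate_sir()
print(sizes[:10], total_range)
```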
Now that the SIR epidemic has been constructed, we may consider its extinction/survival.
We say that a SIR epidemic survives if with positive probability, η_n≠∅ for all n≥ 1; we say the epidemic becomes extinct if with probability one, η_n= ∅ for some finite n≥ 1.
Equivalence between the bond percolation and the SIR epidemic: The connection between the range-R bond percolation and the SIR epidemic can be described as follows: If the epidemic η starting with η_0={0} and ρ_0=∅ survives, then with positive probability, there is an infinite sequence of distinct infected sites {x_k,k≥ 0} with x_0=0 such that x_k∈η_k, x_k is a neighbor of x_k-1, and x_k-1 infects x_k at time k. Hence the edge between x_k-1 and x_k is open for all k≥ 1. This gives that with positive probability, percolation occurs from η_0={0} to infinity in the range-R bond percolation. Conversely, if percolation from {0} to infinity occurs in the percolation model, then an infinite sequence of distinct sites for infection must exist and so the epidemic survives.
The above implies that to prove Theorem <ref>, it suffices to show that the SIR epidemic η starting with η_0={0} and ρ_0=∅ becomes extinct. Throughout the rest of this paper, we will only consider the epidemic with finite initial condition (η_0, ρ_0). For any disjoint finite sets η_0 and ρ_0, one may use (<ref>) with an easy induction to conclude both η_n and ρ_n are finite for all n≥ 0. Hence it follows that
∪_n=0^∞η_n is not a compact set⇔η_n≠∅, ∀ n≥ 0.
To summarize, it remains to show that the SIR epidemic η starting with η_0={0} and ρ_0=∅ satisfies
with probability one ∪_n=0^∞η_n is a compact set.
We will do this by coupling the SIR epidemic with an appropriate branching envelope.
§.§ A modified branching envelope
First we will couple the epidemic η with a dominating branching random walk Z=(Z_n, n≥ 0) on _R^d. The state space for our branching random walk in this paper is the space of nonnegative point measures on _R^d denoted by M_P(_R^d): for any μ∈ M_P(_R^d), there are some n≥ 0, and a_k≥ 0, x_k∈_R^d, ∀ 1≤ k≤ n such that μ=∑_k=1^n a_k δ_x_k. Write μ(x)=μ({x}). For any function ϕ: ^d_R →, write μ(ϕ)=∑_x∈^d_Rϕ(x) μ(x). Set |μ|=μ(1) to be its total mass.
Totally order the set 𝒩(0) as {e_1, ⋯, e_V(R)} and then totally order each 𝒩(0)^n lexicographically. Following Section 2.2 of Frei and Perkins <cit.>, we will use the following labeling system for our particle system:
I=⋃_n=0^∞𝒩(0)^n={(α_1, ⋯, α_n): α_i ∈𝒩(0), 1≤ i≤ n},
where 𝒩(0)^0={∅} labels the root index. Let |∅|=0. If α=(α_1, ⋯, α_n), we let |α|=n be the generation of α, and write
α|i=(α_1, ⋯, α_i) for 1≤ i≤ n. Let πα=(α_1, ⋯, α_n-1) be the parent of α and let α∨ e_i=(α_1, ⋯, α_n, e_i) be an offspring of α whose position relative to its parent is e_i. Assign an i.i.d. collection of Bernoulli random variables {B^α: α∈ I, |α|>0} to the edges connecting the locations of α and its parent πα so that the birth in this direction is valid with probability p(R) (and invalid with probability 1-p(R)). For each n≥ 1, write α≈ n iff |α|=n and B^α|i=1 for all 1≤ i≤ n so that such an α labels a particle alive in generation n. For each α∈ I, define its current location by
Y^α = ∑_i=1^|α|α_i if α≈ |α|, and Y^α = Δ otherwise.
Define for any n≥ 0 that
Z_n:=∑_|α|= nδ_Y^α 1(Y^α≠Δ).
Then (Z_n) gives the empirical distribution of a branching random walk where in generation n, each particle gives birth to one offspring to each of its V(R) neighboring positions independently with probability p(R).
Define Z_n(x)=Z_n({x}) for any x∈^d_R.
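The branching envelope is equally easy to simulate. A minimal sketch of one generation is given below; the parameters are toy values and the sketch is for illustration only.

import itertools
import random
from collections import Counter

def neighbourhood(x, R, d):
    offsets = itertools.product(range(-R, R + 1), repeat=d)
    return [tuple(xi + oi for xi, oi in zip(x, o)) for o in offsets if any(o)]

def brw_step(Z_n, p, R, d, rng):
    """One generation of the branching envelope: each particle gives birth to
    one offspring at each of its V(R) neighbouring sites independently with
    probability p, so the mean offspring number is p*V(R) = 1 + theta/N_R."""
    Z_next = Counter()
    for x, count in Z_n.items():
        for _ in range(count):                       # each particle at x acts independently
            for y in neighbourhood(x, R, d):
                if rng.random() < p:
                    Z_next[y] += 1
    return Z_next

rng = random.Random(1)
d, R, theta = 2, 3, 1.0                              # toy values
V_R, N_R = (2 * R + 1) ** d - 1, R ** d
p = (1 + theta / N_R) / V_R
Z = Counter({(0,) * d: 1})                           # Z_0 = delta_0
for n in range(5):
    Z = brw_step(Z, p, R, d, rng)
    print("generation", n + 1, "total mass", sum(Z.values()))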
For μ,ν∈ M_P(_R^d), we say ν dominates μ if ν(x)≥μ(x) for all x∈_R^d. For any set Y⊂_R^d, by slightly abusing the notation, we write Y:=∑_x∈ Yδ_x so that the set Y naturally defines a point measure Y∈ M_P(^d_R) taking values in {0,1}. In particular we define η_n∈ M_P(^d_R) for each n≥ 0 by letting η_n:=∑_x∈η_nδ_x. The following lemma is from Proposition 2.3 of <cit.> that defines the coupled SIR epidemic (η_n) inductively with the dominating (Z_n).
On a common probability space, we can define a SIR epidemic process η starting from ({0}, ∅), and a branching random walk Z as in (<ref>), such that
η_n(x)≤Z_n(x), ∀ x∈_R^d, n≥ 0.
The above coupling, however, will not give the extinction of the SIR epidemic as the dominating BRW is supercritical and survives with positive probability (recall from (<ref>) that p(R)V(R)=1+θ/N_R>1). So we consider another M_P(_R^d)-valued process {Z_n, n≥ 0} defined inductively by
Z_n=∑_|α|= nδ_Y^α 1(Y^α≠Δ, Y^α∉∪_k=0^n-1S(Z_k)).
In the above, S(μ)=Supp(μ) is the support of measure μ∈ M_P(_R^d). By defining such a process, we obtain the branching random walk where particles never give birth to places that have been colonized before, an analog to the well-known self-avoiding random walk. In what follows, we will call Z “a self-avoiding branching random walk”. Note we still allow two particles to give birth to the same site at the same time. By definition, Z_n dominates Z_n for any n≥ 0. Nevertheless, it is not true that Z_n will dominate the SIR epidemic η_n as there might exist some site x that is visited earlier by Z and later by η. Fortunately, the total colony of Z dominates that of η by the following lemma, so Z suffices for our purposes given (<ref>).
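The modification is mechanical; the following sketch performs one generation of the self-avoiding branching random walk by suppressing births onto previously colonised sites (again toy parameters, illustration only).

import itertools
import random
from collections import Counter

def neighbourhood(x, R, d):
    offsets = itertools.product(range(-R, R + 1), repeat=d)
    return [tuple(xi + oi for xi, oi in zip(x, o)) for o in offsets if any(o)]

def sa_brw_step(Z_n, colonised_before, p, R, d, rng):
    """One generation of the self-avoiding BRW: births onto any site colonised
    by generations 0,...,n (including the current support) are suppressed;
    two particles may still give birth to the same *new* site simultaneously."""
    colonised = colonised_before | set(Z_n)          # add S(Z_n) before the births
    Z_next = Counter()
    for x, count in Z_n.items():
        for _ in range(count):
            for y in neighbourhood(x, R, d):
                if y not in colonised and rng.random() < p:
                    Z_next[y] += 1
    return Z_next, colonised

rng = random.Random(2)
d, R, theta = 2, 3, 1.0                              # toy values
V_R, N_R = (2 * R + 1) ** d - 1, R ** d
p = (1 + theta / N_R) / V_R
Z, colonised = Counter({(0,) * d: 1}), set()
for n in range(10):
    Z, colonised = sa_brw_step(Z, colonised, p, R, d, rng)
    if not Z:
        print("self-avoiding BRW died out at generation", n + 1)
        break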
On a common probability space, we can define a SIR epidemic process η starting from ({0}, ∅), and a self-avoiding branching random walk Z as in (<ref>), such that
∪_k=0^nη_k ⊂∪_k=0^n S(Z_k), ∀ n≥ 0.
Let Z be as in (<ref>). We will define the coupled SIR epidemic process (η_n, ρ_n, ξ_n: n≥ 0) inductively on n so that
∪_k=0^jη_k ⊂∪_k=0^j S(Z_k), ∀ j≤ n
holds and (η_n, ρ_n, ξ_n: j≤ n) has the law of a SIR epidemic process with filtration _n:=σ(ρ_0,η_k, k≤ n). First let η_0={0} and ρ_0=∅. Assuming the above holds for some n≥ 0, we will prove the case for n+1. Let Y_n={y∈_R^d: ∃ x∈η_n, (x,y)∈∂ C_n} be the set of potential infected individuals at time n+1. If x∈η_n, then
(<ref>) implies there exists some unique 0≤ k_n^x≤ n such that Z_k_n^x (x)≥ 1 and the set K:={α≈ k_n^x: Y^α=x} is not empty. The uniqueness of k_n^x follows from the definition of Z. Use the total order of I to pick the minimal element in K denoted by α_k_n^x and define
η_n+1 ={y∈ Y_n: ∃ x∈ D_n(y), B^α_k_n^x∨ (y-x)=1},
ξ_n+1=ξ_n-η_n+1, ρ_n+1=ξ_n∪η_n,
where D_n(y) ⊂η_n is as in (<ref>) and is _n-measurable. Intuitively speaking, if k_n^x=n for x∈η_n, then we let x infect its neighboring sites as in a usual SIR epidemic; if k_n^x<n, we use the historical trajectory of {Z_k, k≤ n} to define the sites infected by x at time n+1. In any case, one can check that (<ref>) holds for n+1. Moreover, the infection events {B^α_k_n^x∨ (y-x)=1}
are independent of _n as they never appear before in the definition of η_k for any 1≤ k≤ n, meaning that this is the first time for the SIR epidemic to infect those y∈η_n+1.
To rigorously check the inductive step, we note that for any y∈η_n+1, by definition (<ref>) there exists some x∈ D_n(y)⊂η_n such that B^α_k_n^x∨ (y-x)=1 for some α_k_n^x≈ k_n^x with 0≤ k_n^x≤ n. So we get α_k_n^x∨ (y-x) ≈ k_n^x+1 and
Y^α_k_n^x∨ (y-x)=Y^α_k_n^x+(y-x)=y.
If y∈∪_m=0^k_n^x S(Z_m), then it is immediate that y∈∪_m=0^n+1 S(Z_m); if y∉∪_m=0^k_n^x S(Z_m), then by the definition (<ref>) for Z_k_n^x+1, we get y∈ S(Z_k_n^x+1), giving y∈∪_m=0^n+1 S(Z_m) as well. In any case, we get y∈∪_m=0^n+1 S(Z_m), and so
η_n+1⊂∪_k=0^n+1 S(Z_k).
Together with the inductive assumption (<ref>), we conclude
∪_k=0^jη_k ⊂∪_k=0^j S(Z_k), ∀ j≤ n+1,
as required.
It remains to prove that the above-defined process (η_n) has the law of a SIR epidemic process, which will follow if one shows that it satisfies (<ref>): If S={(x_i,y_i): 1≤ i≤ m} is a set of distinct edges in _R^d and V⊂{y_i: 1≤ i≤ m}=V_2, then
(η_n+1=V|_n)=∏_y∈ V_2-V (1-p)^|D_n(y)|∏_y∈ V [1-(1-p)^|D_n(y)|] a.s. on {∂ C_n=S}.
To see this, by (<ref>), the left-hand side equals
( ∀ y∈ V_2-V, ∀ x∈ D_n(y), B^α_k_n^x∨ (y-x)=0,
and ∀ y∈ V, ∃ x∈ D_n(y), B^α_k_n^x∨ (y-x)=1|_n).
Then (<ref>) follows by observing that the indices {α_k_n^x∨ (y-x)} are distinct by the choice of k_n^x, α_k_n^x, and that, conditional on _n, {B^α_k_n^x∨ (y-x)} are i.i.d. Bernoulli random variables.
The above lemma implies that it suffices to show the extinction of Z.
Let 4≤ d≤ 6. There exists some constant C(d)>0 such that for any R>C(d), there exists some 1≤ k_0≤ [N_R]+1 such that (|Z_k_0|)≤ 1-((b_d-θ)/2∧1/4)<1.
Assuming Proposition <ref>, we will finish the proof of our main result Theorem <ref>.
Consider a branching random walk (γ_n, n≥ 0) on _R^d starting from a single ancestor at the origin. For each n≥ 1, we let all the individuals in γ_n-1 give birth to independent copies of Z_k_0 to obtain γ_n, in which an offspring is suppressed if and only if the particles jump onto a site that has been visited before by another offspring of the same ancestor. There is no such ancestral restriction in the original self-avoiding branching random walk (Z_n), meaning it's more likely that particles in (Z_n) will collide, so one may couple (Z_n) with (γ_n) so that γ_n dominates Z_nk_0 for any n≥ 0. Proposition <ref> implies that (|Z_k_0|) is strictly less than 1 for R large. The classical theory of branching process then tells us that |γ_n|=0 for n large a.s., giving |Z_nk_0|=0 for n large. Hence with probability one, ∪_k=0^∞ S(Z_k) is compact and so is ∪_k=0^∞η_k by Lemma <ref>. It follows from (<ref>) that the SIR epidemic (η_n) becomes extinct, so percolation does not occur a.s.
With some appropriate initial condition Z_0, we do conjecture that X_t^N_R=1/N_RZ_[tN_R] will converge to a super-Brownian motion with drift θ-b_d for all d≥ 4. An ongoing work of the author <cit.> proves such convergence of the SIR epidemic processes. However, there is still a gap between η_n and Z_n: the difference between η_n and Z_n comes from the events in which two or more particles attempt to give birth to the same site at the same time. Let Γ_n(x) denote the number of failed infections at site x and time n in the SIR epidemic (η_n); that is, if k infected individuals simultaneously attempt to infect x at time n, then Γ_n(x)=(k-1) ∨ 0. One can show as in the proof of Lemma 8.1 in <cit.> (see also Lemma 9 of <cit.>) that for a branching random walk starting from a single ancestor at the origin, we have
(∑_n=1^N_R∑_x∈^d_RΓ_n(x))=o(1), in d=4,
and
(∑_n=1^N_R∑_x∈^d_RΓ_n(x))=O(1), in d≥ 5.
So one would expect that our lower bound for p_c should be sharp in d=4 while in d≥ 5, the interesting gap between b_d and b_d appears due to (<ref>).
It remains to present a proof of Proposition <ref>, which will take up the rest of the paper. In Section <ref>, by introducing a new labeling system for the branching particle system, we find an appropriate upper bound for (|Z_k|) and finish the proof of Proposition <ref> by assuming a technical lemma (Lemma <ref>) that gives the convergence of the collision term. The proof of Lemma <ref> is then given in Section <ref> by three intermediate lemmas while Sections <ref>, <ref>, <ref> are devoted to the proofs of those three lemmas.
§ MOMENT BOUNDS FOR THE SELF-AVOIDING BRANCHING RANDOM WALK
We will prove Proposition <ref> that gives the first moment bound for the self-avoiding branching random walk Z. To do so, we introduce a new labeling system for our particle system. Let
=⋃_n=0^∞{0}×{1,⋯, V(R)}^n,
where we use 0 to denote the ancestor located at the origin. We collect various notations for the labeling system that will be used frequently below:
* If β=(β_0, β_1, ⋯, β_n)∈, we set |β|=n to be the generation of β.
* Write β|k=(β_0, ⋯, β_k) for each 0≤ k≤ |β|.
* For each |β|=n with some n≥ 1, let πβ=(β_0, β_1, ⋯, β_n-1) be the parent of β and set β∨ i=(β_0, β_1, ⋯, β_n, i) to be the i-th offspring of β for 1≤ i≤ V(R).
* Write β≥γ if β is a descendant of γ and β>γ if it is strict.
* For each β, γ∈, let k_max=max{0≤ k≤ |β|∧ |γ|: β|k=γ|k} and define γ∧β=β|k_max=γ|k_max to be the most recent common ancestor of β and γ.
Let {W^β∨ i, 1≤ i≤ V(R)}_β∈ be a collection of i.i.d. random vectors, each uniformly distributed on (0)^(V(R))={(x_1,⋯, x_V(R)): {x_i} all distinct}. Let {B^β: β∈, |β|>0} be i.i.d. Bernoulli random variables with parameter p(R) indicating whether the birth of the offspring particle β from its parent πβ to is valid. Let {B^β} and {W^β} be mutually independent. Define the above independent collections of random variables on some complete probability space (Ω, , ).
Write β≈ n if |β|=n and B^β|i=1 for all 1≤ i≤ n. The historical path of a particle β alive at time |β| would be Y_k^β= ∑_i=1^|β| 1(i≤ k) W^β|i for k≥ 0, and we denote its current location by
Y^β=
∑_i=1^|β| W^β|i, if β≈ |β|,
Δ, otherwise.
Set
ζ_β^0:=inf{1≤ m≤ |β|: B^β|m=0}∧ (|β|+1),
where by convention inf∅=∞. One can easily check ζ_β^0>|β| iff Y^β≠Δ. For each particle β∈, we denote by _β the σ-field of all the events in the family line of β before time |β|, which is given by
_β=σ{B^β| m, W^β| m: m≤|β|}.
Then Y^β∈_β and ζ_β^0∈_β. For each n≥ 1, define
_n=⋁_|β|≤ n_β
to be the σ-field of all the events before time n.
For any function ϕ, we define
Z_n(ϕ)=∑_|β|=nϕ(Y^β) 1_{ζ_β^0>|β|}
so that (Z_n) has the same distribution as the branching random walk Z in (<ref>), where in generation n each particle gives birth to one offspring at each of its V(R) neighboring positions independently with probability p(R).
Next, we will use the new labeling system to rewrite the self-avoiding BRW Z from (<ref>). To do so, we set
ζ_β^1=ζ_β^1(Z)=inf{1≤ m≤ |β|: Y^β|m∈ S(∑_k=0^m-1Z_k) or B^β|m=0 }∧ (|β|+1).
In this way, the event ζ_β^1>|β| is equivalent to Y^β≠Δ and Y^β|m∉ S(∑_k=0^m-1Z_k) for all 1≤ m≤ |β|, that is, the particle β is alive at time |β| and it never collides with Z up to time |β|.
So the self-avoiding BRW Z from (<ref>) can be rewritten by
Z_n(ϕ)=∑_|β|=nϕ(Y^β) 1_{ζ_β^1>|β|}.
Since we start with only one ancestor at the origin, one can check that the above definition is well-posed, and the existence and uniqueness of the law of Z_n also follow.
For any n≥ 0, one can check that
Z_n+1(1)= ∑_|β|=n+1 1_{ζ_β^1>|β|}=∑_|β|=n∑_i=1^V(R) 1_{ζ_β∨ i^1>|β∨ i|}
= ∑_|β|=n 1_{ζ_β^1>|β|}∑_i=1^V(R) B^β∨ i· 1{Y^β+W^β∨ i∉ S(∑_k=0^nZ_k)},
where the last equality uses the definition of ζ_β∨ i^1 and Y^β∨ i=Y^β+W^β∨ i. To simplify notation, we set
R_n=R_n(Z):=S(∑_k=0^nZ_k).
Recall Z_n(1)=∑_|β|=n 1_{ζ_β^1>|β|}. It follows that
Z_n+1(1)-Z_n(1)=[Z_n+1(1)-(1+θ/N_R) Z_n(1)]+θ/N_RZ_n(1)
= ∑_|β|=n 1_{ζ_β^1>|β|}∑_i=1^V(R)[ B^β∨ i1{Y^β+W^β∨ i∉ R_n}- 1+θ/N_R/V(R)]+θ/N_RZ_n(1)
= ∑_|β|=n 1_{ζ_β^1>|β|}∑_i=1^V(R)[ B^β∨ i1{Y^β+W^β∨ i∉ R_n}- B^β∨ i]
+∑_|β|=n 1_{ζ_β^1>|β|}∑_i=1^V(R) [B^β∨ i -p(R)] +θ/N_RZ_n(1).
Now take expectation and use that B^β∨ i with |β|=n is Bernoulli with parameter p(R) independent of _n ∨σ(W^β∨ i) to arrive at
(Z_n+1(1)-Z_n(1))= -p(R)·(∑_|β|=n 1_{ζ_β^1>|β|}∑_i=1^V(R) 1{Y^β+W^β∨ i∈ R_n})
+θ/N_R(Z_n(1)).
For each |β|=n and 1≤ i≤ V(R), we may condition on _n to see
(1_{Y^β+W^β∨ i∈ R_n}|_n)= ∑_k=1^V(R)1_{Y^β+e_k∈ R_n}(1_W^β∨ i=e_k|_n)
= 1/V(R)∑_k=1^V(R)1_{Y^β+e_k∈ R_n}:=1/V(R)ν(β),
where ν(β) counts the number of neighbours of Y^β that lie in R_n. So (<ref>) becomes
(Z_n+1(1)-Z_n(1))= θ/N_R(Z_n(1))- p(R)(∑_|β|=n 1_{ζ_β^1>|β|}∑_i=1^V(R)1/V(R)ν(β))
= θ/N_R(Z_n(1))- p(R)(∑_|β|=n 1_{ζ_β^1>|β|}ν(β)).
Summing the above for 0≤ n≤ [N_R], we get
(Z_[N_R]+1(1))-1 =θ/N_R∑_n=0^[N_R](Z_n(1))-p(R) (∑_n=0^[N_R]∑_|β|=n 1_{ζ_β^1>|β|}ν(β) ).
Next, recall that
R_n=S(∑_k=0^nZ_k)={Y^γ: |γ|≤ n, ζ_γ^1>|γ|}.
One may use the above to rewrite ν(β) (for |β|=n) as
ν(β)=∑_k=1^V(R)1_{Y^β+e_k∈ R_n}=|{Y^γ: |γ|≤ |β|, ζ_γ^1>|γ|, Y^β-Y^γ∈(0)}|.
For each R>0, define the cutoff time {τ_R } to be[The definitions of (<ref>) and (<ref>) are inspired by Section 5 of <cit.> where the two authors claim that “collisions between distant relatives can be ignored”. The difference between ν(β) and ν_τ(β) as above can also be ignored, but since we only need an upper bound for (<ref>), we simply replace ν(β) by ν_τ(β).]
τ_R ∈, τ_R →∞, τ_R/N_R → 0, in d≥ 5;
τ_R=[N_R/log N_R], in d=4,
and set
ν_τ(β)=|{Y^γ: |γ|≤ |β|, ζ_γ^1>|γ|, Y^β-Y^γ∈(0), |γ∧β|> |β|-τ_R }|.
It is clear that ν(β)≥ν_τ(β) and p(R)≥ 1/V(R). Delete the sum for n<τ_R in the second term of (<ref>) to see
(Z_[N_R]+1(1))-1≤ θ/N_R∑_n=0^[N_R](Z_n(1))-( 1/V(R)∑_n=τ_R^[N_R]∑_|β|=n 1_{ζ_β^1>|β|}ν_τ(β) ).
Define
K^1_R:= 1/V(R)∑_n=τ_R^[N_R]∑_|β|=n 1_{ζ_β^1>|β|}ν_τ(β) .
We will show that
lim_R→∞|(K^1_R-b_d/N_R∑_n=0^[N_R]-τ_RZ_n(1))|=0.
Assuming the above lemma, we may finish the proof of Proposition <ref>.
Set _0=1/4∧(b_d-θ)/2>0. By (<ref>) we have
τ_R/N_R≤1/2∧_0/4b_d e^b_d, if R is large.
Next, Lemma <ref> implies that for R large,
(K^1_R)≥b_d/N_R∑_n=0^[N_R]-τ_R(Z_n(1))-_0/4.
It follows from (<ref>), (<ref>) and the above that
(Z_[N_R]+1(1))-1≤ θ-b_d/N_R∑_n=0^[N_R]-τ_R(Z_n(1))+_0/4+θ/N_R∑_n=[N_R]-τ_R+1^[N_R](Z_n(1))
≤ -2_0/N_R∑_n=0^[N_R]-τ_R(Z_n(1))+_0/4+ b_d /N_Rτ_R e^b_d
≤ -2_0/N_R∑_n=0^[N_R]-τ_R(Z_n(1))+_0/2,
where the second inequality uses b_d-θ≥ 2_0>0 and (Z_n(1))≤(Z_n(1))=(1+θ/N_R)^n ≤ e^θ≤ e^b_d. The last inequality is by (<ref>).
If there exists some 1≤ n≤ [N_R] such that (Z_n(1))≤ 1-_0, then we are done. If (Z_n(1))≥ 1-_0 for all 0≤ n≤ [N_R], then (<ref>) becomes
(Z_[N_R]+1(1))-1≤ -2_0 (1-_0)+_0/2≤ - _0,
where the last inequality uses _0≤ 1/4. So the above implies (Z_[N_R]+1(1))≤ 1- _0 as required. In either case, the conclusion of Proposition <ref> holds. The proof is now complete.
It remains to prove Lemma <ref>.
§ CONVERGENCE OF THE COLLISION TERM
In this section, we will give the proof of Lemma <ref> in three steps. The number ν_τ(β) defined as in (<ref>) counts the number of sites that have been occupied in the neighborhood of Y^β by its close relatives, which might have been visited more than once. However, one would expect such events to be rare. So we define
nbr_β, γ(τ)= 1(|γ|≤ |β|, Y^β-Y^γ∈(0), |γ∧β|> |β|-τ_R),
and set
K^2_R:= 1/V(R)∑_n=τ_R^[N_R]∑_|β|=n 1_{ζ_β^1>|β|}∑_γ 1_ζ_γ^1>|γ|nbr_β, γ(τ).
The lemma below shows that the difference between K_R^1 and K_R^2 can be ignored. The proof is deferred to Section <ref>.
lim_R→∞(|K^1_R-K_R^2|)=0.
The second step is to replace ζ_β^1>|β| and ζ_γ^1>|γ| in K_R^2 by
{ζ_β^1> |β|-τ_R, Y^β≠Δ, ζ_γ^1> |β|-τ_R, Y^γ≠Δ},
and define
K_R^3:= 1/V(R)∑_n=τ_R^[N_R]∑_|β|=n 1_ζ_β^1> |β|-τ_R∑_γ 1_ζ_γ^1> |β|-τ_Rnbr_β, γ(τ).
In the above, Y^β≠Δ and Y^γ≠Δ are implicitly given by Y^β-Y^γ∈(0) in nbr_β, γ(τ). The following result will be proved in Section <ref>.
lim_R→∞(|K^2_R-K_R^3|)=0.
Turning to the third step, we notice that
{ζ_β^1> |β|-τ_R, ζ_γ^1> |β|-τ_R,|γ∧β|> |β|-τ_R }
={ζ_β∧γ^1> |β|-τ_R,|γ∧β|> |β|-τ_R } ={ζ_β^1> |β|-τ_R,|γ∧β|> |β|-τ_R }.
Hence we may rewrite K_R^3 as (recall nbr_β, γ(τ) from (<ref>))
K_R^3= 1/V(R)∑_n=τ_R^[N_R]∑_|β|=n 1_{ζ_β^1> |β|-τ_R}∑_γ 1_|γ|≤ |β| 1_Y^β-Y^γ∈(0) 1_|γ∧β|> |β|-τ_R.
For each β with |β|=n, we define
F(β,n)=N_R/V(R)∑_γ 1_|γ|≤ |β| 1_Y^β-Y^γ∈(0) 1_|γ∧β|> |β|-τ_R
so that
K_R^3= 1/N_R∑_n=τ_R^[N_R]∑_|β|=n 1_{ζ_β^1> n-τ_R} F(β,n).
Let (k)={α: |α|=k, ζ_α^1>k} be the particles alive at time k in Z. Define
{α}_n={β: β≥α, |β|=n, ζ_β^0>|β|}
to be the set of descendants β of α that are alive at time n. Since all the particles β contributing in K_R^3 are descendants of some α in (n-τ_R), we get (<ref>) becomes
K_R^3 = 1/N_R∑_n=τ_R^[N_R]∑_α∈(n-τ_R)∑_β∈{α}_n F(β,n).
Set
_α(n)=∑_β∈{α}_n F(β,n), and b_d^τ_R= (_0(τ_R)),
where 0∈ is the root index located at the origin.
Then
K_R^3 = 1/N_R∑_n=τ_R^[N_R]∑_α∈(n-τ_R)_α(n).
For each τ_R≤ n≤ [N_R] and α∈(n-τ_R), if we condition on _n-τ_R, then by the Markov property and translation invariance, we get
(_α(n)|_n-τ_R)=(_0(τ_R))=b_d^τ_R.
Take expectations in (<ref>) and use the above to arrive at
(K_R^3) = 1/N_R(∑_n=τ_R^[N_R]∑_α∈(n-τ_R) b_d^τ_R)=b_d^τ_R/N_R(∑_n=τ_R^[N_R]∑_|α|=n-τ_R 1(ζ_α^1>n-τ_R))
= b_d^τ_R/N_R(∑_n=τ_R^[N_R]Z_n-τ_R(1))=b_d^τ_R/N_R(∑_n=0^[N_R]-τ_RZ_n(1)).
The final piece is the following result, whose proof will be given in Section <ref>.
lim_R→∞ b_d^τ_R=b_d.
Now we are ready to finish the proof of Lemma <ref>.
By (<ref>), we get
|(K_R^3-b_d/N_R∑_n=0^[N_R]-τ_RZ_n(1))|=|(b_d^τ_R-b_d/N_R∑_n=0^[N_R]-τ_RZ_n(1))|
≤ |b_d^τ_R-b_d| 1/N_R∑_n=0^[N_R]-τ_R(Z_n(1)) ≤ |b_d^τ_R-b_d| e^θ→ 0.
Together with Lemmas <ref> and <ref>, the conclusion follows immediately.
It remains to prove Lemmas <ref>-<ref>.
§ CONVERGENCE OF THE CONSTANT
We first prove Lemma <ref> in this section. Then we will get an explicit expression for b_d as in Theorem <ref>. Recall that b_d^τ_R=(_0(τ_R)) where
_0(τ_R)=∑_β∈{0}_τ_R F(β,τ_R)=N_R/V(R) ∑_β≥ 0 1_|β|=τ_R, ζ_β^0>|β|∑_γ 1_|γ∧β|> 0 1_|γ|≤ |β| 1_Y^β-Y^γ∈(0)
=N_R/V(R) ∑_β> 0∑_γ≥ 01_|β|=τ_R 1_|γ|≤ |β| 1_Y^β-Y^γ∈(0).
In the last equality we have used {ζ_β^0>|β|}={Y^β≠Δ} and {Y^β, Y^γ≠Δ} is implicit in {Y^β-Y^γ∈(0)}.
Notice that Y^β-Y^γ∈(0) implies Y^β≠ Y^γ, so we cannot have γ=β. Since |γ|≤ |β|=τ_R, γ must branch off β at time τ_R-k for some 1≤ k≤τ_R; that is, if we let α=γ∧β, then |α|=τ_R-k for some 1≤ k≤τ_R. Set |γ|=|α|+m for some 0≤ m≤ k. In this way, we get
b_d^τ_R= (_0(τ_R)) =N_R/V(R)(∑_k=1^τ_R∑_α: α≥ 0,
|α|=τ_R-k 1_ζ_α^0>|α|∑_β: β≥α,
|β|=τ_R∑_m=0^k∑_γ: γ≥α,
|γ|=τ_R-k+m 1_Y^β-Y^γ∈(0))
= N_R/V(R)[∑_k=1^τ_R∑_α: α≥ 0,
|α|=τ_R-k 1_ζ_α^0>|α|(∑_β: β≥α,
|β|=τ_R∑_m=0^k∑_γ: γ≥α,
|γ|=τ_R-k+m 1_Y^β-Y^γ∈(0)|_α) ],
where the last equality use ζ_α^0 ∈_α (recall _α from (<ref>)).
Next, for each α, β, γ as in the above summation, we get from (<ref>) that
Y^β-Y^α=W^β|(τ_R-k+1)+⋯+W^β|(τ_R-1)+W^β|τ_R:=U_k^R.
and
Y^γ-Y^α=W^γ|(τ_R-k+1)+⋯+W^γ|(τ_R-k+m-1)+W^γ|(τ_R-k+m):=V_m^R.
Set
W_k+m^R:=U_k^R+V_m^R.
Recall that W^β|i and W^γ|j for each i,j are uniform on (0). Although there is a slight dependence between W^β|(τ_R-k+1) and W^γ|(τ_R-k+1) (they are uniformly distributed over {(x,y)∈(0)^2: x≠ y}), it is clear that the joint distribution of (W^β|(τ_R-k+1),W^γ|(τ_R-k+1)) converges to (U,V) where U,V are independent and uniformly distributed on [-1,1]^d. So W_k+m^R=U_k^R+V_m^R will converge in distribution to U_k+m where U_n=Y_1+⋯+Y_n and Y_i are i.i.d. uniform on [-1,1]^d.
Since both Y^β-Y^α and Y^γ-Y^α are independent of _α, we get on the event {ζ_α^0>|α|} for |α|=τ_R-k,
(∑_β: β≥α,
|β|=τ_R∑_m=0^k∑_γ: γ≥α,
|γ|=τ_R-k+m 1_Y^β-Y^α-(Y^γ-Y^α)∈(0)|_α)
= ∑_β: β≥α,
|β|=τ_R∑_m=0^k∑_γ: γ≥α,
|γ|=τ_R-k+m(U_k^R+V_m^R∈(0)) p(R)^k p(R)^m
= ∑_m=0^k V(R)^k (V(R)-1)^m∧ 1 V(R)^(m-1)^+(W_k+m^R∈(0)) p(R)^k+m.
In the second line, the term p(R)^k p(R)^m gives the probability that Y^β, Y^γ≠Δ on the event {Y^α≠Δ}. The third line follows by counting the number of all possible β,γ as in the summation, where (V(R)-1)^m∧ 1 comes from the fact that γ|(τ_R-k+1) ≠β|(τ_R-k+1).
The remaining sum of α in (<ref>) gives
(∑_k=1^τ_R∑_α: α≥ 0,
|α|=τ_R-k 1_ζ_α^0>|α|)=∑_k=1^τ_R [V(R)p(R)]^τ_R-k.
So we conclude from (<ref>), (<ref>), (<ref>) that
b_d^τ_R=N_R/V(R)∑_k=1^τ_R [p(R)V(R)]^τ_R ∑_m=0^k(W_k+m^R∈(0))
V(R)^(m-1)^+ (V(R)-1)^m∧ 1 p(R)^m.
Clearly we may replace V(R)^(m-1)^+ (V(R)-1)^m∧ 1 by V(R)^m as
1≤V(R)^m/V(R)^(m-1)^+ (V(R)-1)^m∧ 1≤ e^τ_R/(V(R)-1) for all 0≤ m≤ k≤τ_R,
and τ_R/(V(R)-1)≤τ_R/N_R→ 0 by (<ref>). Hence
lim_R→∞ b_d^τ_R=lim_R→∞N_R/V(R)∑_k=1^τ_R [V(R)p(R)]^τ_R∑_m=0^k(W_k+m^R∈(0)) [V(R)p(R)]^m.
Similarly, we may replace [V(R)p(R)]^m and [V(R)p(R)]^τ_R by 1 as
1≤ [V(R)p(R)]^m≤ e^τ_R/N_R for all 0≤ m≤τ_R,
and so
lim_R→∞ b_d^τ_R =lim_R→∞N_R/V(R)∑_k=1^τ_R ∑_m=0^ k(W_k+m^R∈(0)).
To calculate the above limit, we let Y_1^R,Y_2^R, ⋯ be i.i.d uniform on (0). Define S_n^R=Y_1^R+⋯+Y_n^R for each n≥ 1 and set S_n^R=0 for n≤ 0. The following lemma comes from (4) in Section 2 of <cit.> and the concentration inequality by Kesten <cit.>.
There is some constant C>0 independent of R so that
(x+S_n^R∈ [-1,1]^d)≤ C (1+n)^-d/2, ∀ n≥ 0, x∈^d.
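As a quick numerical sanity check of this bound (not a proof), one can estimate the left-hand side by Monte Carlo and verify that (1+n)^{d/2} times the estimate stays bounded in n; the sketch below does this with illustrative values of d, R and the sample size.

import random

def sample_step(R, d, rng):
    """One uniform pick from the rescaled neighbourhood N(0): a nonzero point
    of the lattice (Z/R)^d lying in [-1, 1]^d."""
    while True:
        y = tuple(rng.randint(-R, R) / R for _ in range(d))
        if any(y):
            return y

def hit_prob(n, R, d, trials, rng):
    """Monte Carlo estimate of P(S_n^R in [-1,1]^d)."""
    hits = 0
    for _ in range(trials):
        s = [0.0] * d
        for _ in range(n):
            step = sample_step(R, d, rng)
            s = [si + yi for si, yi in zip(s, step)]
        hits += all(abs(si) <= 1.0 for si in s)
    return hits / trials

rng = random.Random(3)
d, R, trials = 4, 5, 20000
for n in (2, 4, 8, 16):
    est = hit_prob(n, R, d, trials, rng)
    # the lemma says est * (1+n)^{d/2} should stay bounded by some constant C
    print(f"n={n:2d}  P_hat={est:.4f}  P_hat*(1+n)^(d/2)={est * (1 + n) ** (d / 2):.3f}")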
Recall W_k+m^R from (<ref>). Apply Lemma <ref> to get
(W_k+m^R∈(0))≤sup_x (x+S_k+m-1^R∈ [-1,1]^d) ≤C/(k+m)^d/2.
When d≥ 5, we get
N_R/V(R)=R^d/(2R+1)^d-1→ 2^-d,
and (<ref>) implies
∑_k=1^τ_R ∑_m=0^ k(W_k+m^R∈(0)) ≤∑_k=1^∞∑_m=0^kC/(k+m)^d/2<∞.
Therefore by applying Dominated Convergence on the right-hand side of (<ref>), we get
lim_R→∞ b_d^τ_R =2^-d∑_k=1^∞∑_m=0^k(U_k+m∈ [-1,1]^d)=b_d.
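The constant b_d in d≥ 5 has no simple closed form, but it is straightforward to estimate numerically. The following sketch truncates the outer sum at k=K and estimates each hitting probability by Monte Carlo; the values of K and of the number of trials are illustrative, and the output carries the corresponding truncation and sampling error.

import random

def estimate_b_d(d, K=50, trials=10000, seed=0):
    """Monte Carlo estimate of b_d = 2^{-d} sum_{k>=1} sum_{m=0}^{k}
    P(U_{k+m} in [-1,1]^d), where U_n = Y_1 + ... + Y_n and the Y_i are
    i.i.d. uniform on [-1,1]^d; the outer sum is truncated at k = K."""
    rng = random.Random(seed)
    hit = [0] * (2 * K + 1)                 # hit[j] counts the event {U_j in [-1,1]^d}
    for _ in range(trials):
        u = [0.0] * d
        for j in range(1, 2 * K + 1):
            u = [ui + rng.uniform(-1.0, 1.0) for ui in u]
            if all(abs(ui) <= 1.0 for ui in u):
                hit[j] += 1
    prob = [h / trials for h in hit]
    double_sum = sum(prob[k + m] for k in range(1, K + 1) for m in range(0, k + 1))
    return double_sum / 2 ** d

for d in (5, 6):
    print("d =", d, " b_d ~", round(estimate_b_d(d), 4))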
When d=4, we have V(R)/N_R∼ 2^4 log R, thus giving
lim_R→∞ b_4^τ_R=lim_R→∞1/2^4 log R∑_k=1^τ_R∑_m=0^k(W_k+m^R∈(0)).
By (<ref>), we have
∑_k=1^log R∑_m=0^k(W_k+m^R∈(0)) ≤∑_k=1^log R∑_m=0^kC/(k+m)^2
≤∑_k=1^log RC/k≤ C loglog R=o(log R).
Hence we may delete the sum for 1≤ k≤log R in (<ref>) to get
lim_R→∞ b_4^τ_R=lim_R→∞1/2^4 log R∑_k=log R^τ_R∑_m=0^k(W_k+m^R∈(0)).
For each m≥ 0 and k≥ 1, define
h_R(k,m)=(W_k+m^R∈(0)) and g(k,m)=2^d (2π/3)^-d/2 (k+m)^-d/2.
The following is an application of the classical Central Limit Theorem.
(i) If R→∞ and x_n/n^1/2→ x as n→∞, then for any Borel set B with |∂ B| =0 and | B| <∞, we have
n^d/2(x_n+S_n^R∈ B) →| B|· (2πσ^2)^-d/2 e^-| x| ^2/2σ^2,
where σ^2=1/3 is the limit of the variance of one component of Y_1^R as R→∞.
(ii) For any >0 small, if R is large, then
h_R(k,m)/g(k,m)∈ [1-,1+], ∀ k≥log R, m≥ 0.
By Lemma 4.6 of <cit.>, we have (i) holds. For the proof of (ii), we note (i) ensures that
lim_k+m→∞h_R(k,m)/g(k,m)=1.
So if R is large enough, we get k+m≥log R is large and hence (<ref>) follows.
By (<ref>), we may replace h_R(k,m) in (<ref>) by g(k,m) to get
lim_R→∞ b_4^τ_R=lim_R→∞1/2^4 log R∑_k=log R^τ_R∑_m=0^k 2^4 (2π/3)^-2 (k+m)^-2.
For any >0, it is easy to check if k≥log R is large, then
∑_m=0^k1/(k+m)^2∈[(1-) 1/2k, (1+) 1/2k].
So replace ∑_m=0^k(k+m)^-2 in (<ref>) by 1/2k to get
lim_R→∞ b_4^τ_R=lim_R→∞9/4π^2log R∑_k=log R^R^4/log R1/2k=9/2π^2=b_4,
as required.
§ PROOF OF LEMMA <REF>
In this section, we will prove Lemma <ref>. Define for each t≥ 0
I(t)=1+∫_0^t (1+s)^1-d/2 ds.
One can check that there exists some constant C>0 so that
I(N_R)≤
C, ∀ R large in d≥ 5,
C log R, ∀ R large in d=4,
and
N_R I(N_R)≤ CV(R).
Recall K_R^1 from (<ref>) and K_R^2 from (<ref>) to get
|K^1_R-K_R^2|≤1/V(R)∑_n=τ_R^[N_R]∑_|β|=n 1_{ζ_β^0>|β|}|ν_τ(β)-∑_γ 1_ζ_γ^1>|γ|nbr_β, γ(τ)|,
where we have replaced ζ_β^1>|β| by ζ_β^0>|β|.
The absolute value on the right-hand side above arises from the multiple occupancies of particles, that is, ν_τ(β) in K^1_R only counts the number of sites in the neighborhood of Y^β that have been occupied, whereas the corresponding term in K^2_R counts the total number of particles that have ever visited that neighborhood. We claim that
|ν_τ(β)-∑_γ 1_ζ_γ^1>|γ|nbr_β, γ(τ)|
≤∑_γ 1_|γ|≤ |β| 1_Y^β-Y^γ∈(0)1_|γ∧β|>|β|-τ_R∑_α≠γ 1_Y^α=Y^γ 1_|α|≤ |β| 1_|α∧β|>|β|-τ_R.
To see this, if there are k≥ 2 particles that have ever visited the site Y^γ in the neighborhood of Y^β, then they will contribute at most k-1 to the left-hand side while at least k(k-1) to the right-hand side, thus giving the above. Note {Y^γ, Y^α≠Δ} is implicit in {Y^α=Y^γ}. It follows that
(|K_R^1-K_R^2|) ≤1/V(R)( ∑_n=τ_R^[N_R]∑_|β|=n 1_{ζ_β^0>|β|}∑_γ 1_|γ|≤ |β| 1_Y^β-Y^γ∈(0)
1_|γ∧β|>|β|-τ_R∑_α≠γ 1_Y^α=Y^γ 1_|α|≤ |β| 1_|α∧β|>|β|-τ_R):=1/V(R)(J_1).
Since α and γ are symmetric, we may assume α∧β≤γ∧β. There are two cases for α, β, γ:
(i) α∧β < γ∧β; (ii) α∧β = γ∧β.
Denote by J^(i)_1 (resp. J^(ii)_1) the contribution to J_1 when α,β, γ satisfy case (i) (resp. case (ii)).
Case (i). Let σ=α∧β and δ=γ∧β. In case (i) we have σ < δ and δ<β by Y^γ≠ Y^β.
For each |β|=n with τ_R≤ n≤ [N_R], we let σ=β|k and δ=β|j for some n-τ_R≤ k<j≤ n-1. Set |α|=k+m for some 0≤ m≤ n-k and |γ|=j+l for some 0≤ l≤ n-j.
See Figure <ref> below for illustration.
The sum of α, β, γ from J^(i)_1 can be written as
(J_1^(i)) =( ∑_n=τ_R^[N_R] ∑_k=n-τ_R^n-1∑_j=k+1^n-1∑_m=0^n-k∑_l=0^n-j∑_σ: |σ|=k∑_δ: |δ|=j,
δ≥σ∑_α: |α|=k+m,
α≥σ∑_γ: |γ|=j+l,
γ≥δ∑_β: |β|=n,
β≥δ
1_{Y^α, Y^β, Y^γ≠Δ} 1_{Y^β-Y^γ∈(0)} 1_{Y^α=Y^γ}).
Recall _α from (<ref>). By conditioning on _α∨_γ, on the event {Y^α, Y^β, Y^γ≠Δ} we get
(Y^β-Y^γ∈(0)|_α∨_γ)
=(Y^β-Y^δ-W^β|(j+1)+(Y^δ+W^β|(j+1)-Y^γ)∈(0)|_α∨_γ).
Note that δ=β|j. Recall (<ref>) to get
Y^β-Y^δ-W^β|(j+1)=∑_t=j+2^n W^β|t,
which is independent of _α∨_γ. Hence we get
(Y^β-Y^γ∈(0)|_α∨_γ) ≤sup_x (∑_t=j+2^n W^β|t+x∈ [-1,1]^d)≤C/(n-j)^d/2,
where the last inequality is by Lemma <ref>.
Next, recall |α∧γ|=|σ|=k with |α|=k+m and |γ|=j+l. Use (<ref>) again to get
Y^α-Y^γ=∑_t=k+1^k+m W^α|t+∑_s=k+1^j+l W^γ|s.
Notice that W^γ|(j+l) is independent of everything else. Use the total probability formula and let W^γ|(j+l)=e_i for 1≤ i≤ V(R) to obtain
(Y^α=Y^γ)=1/V(R)∑_i=1^V(R)(Y^α-Y^γ-W^γ|(j+l)=-e_i)
=1/V(R)(∑_t=k+1^k+m W^α|t+∑_s=k+1^j+l-1 W^γ|s∈(0)) ≤1/V(R)C/(j+l-k+m)^d/2.
Combine (<ref>) and (<ref>) to see (<ref>) becomes
(J_1^(i))≤ ∑_n=τ_R^[N_R]∑_k=n-τ_R^n-1∑_j=k+1^n-1∑_m=0^n-k∑_l=0^n-j∑_σ: |σ|=k∑_δ: |δ|=j,
δ≥σ∑_α: |α|=k+m,
α≥σ∑_γ: |γ|=j+l,
γ≥δ∑_β: |β|=n,
β≥δ
(Y^α, Y^β, Y^γ≠Δ) C/(n-j)^d/21/V(R)C/(l+(j-k)+m)^d/2.
The probability (Y^α, Y^β, Y^γ≠Δ) is bounded by p(R)^k p(R)^j-kp(R)^m p(R)^l p(R)^n-j while the sum of σ, δ, α, γ, β gives V(R)^k V(R)^j-kV(R)^m V(R)^l V(R)^n-j. So the above is at most
(J_1^(i))≤ ∑_n=τ_R^[N_R]∑_k=n-τ_R^n-1∑_j=k+1^n-1∑_m=0^n-k∑_l=0^n-j (V(R)p(R))^k (V(R)p(R))^j-k (V(R)p(R))^m
(V(R)p(R))^l (V(R)p(R))^n-jC/(n-j)^d/21/V(R)C/(l+(j-k)+m)^d/2.
Use k+(j-k)+m+l+(n-j)≤ 3n≤ 3[N_R] and V(R)p(R)≤ e^θ/N_R to see that the above can be bounded by
C e^3θ1/V(R)∑_n=τ_R^[N_R]∑_k=n-τ_R^n-1∑_j=k+1^n-1∑_m=0^n-k∑_l=0^n-j1/(n-j)^d/21/(l+(j-k)+m)^d/2.
The sum of l gives at most C/(1+(j-k)+m)^d/2-1 and then the sum of m gives at most
∑_m=0^n-kC/(1+(j-k)+m)^d/2-1≤∑_m=0^N_R1/(1+m)^d/2-1≤ I(N_R).
Next, the sum of j gives
∑_j=k+1^n-11/(n-j)^d/2=∑_j=1^n-k-11/j^d/2≤ C.
Combine the above to see
(J_1^(i))≤ C e^3θ1/V(R)∑_n=τ_R^[N_R]∑_k=n-τ_R^n-1 CI(N_R)≤ C1/V(R)I(N_R) τ_R N_R ≤ Cτ_R,
where the last inequality uses (<ref>).
Case (ii): In this case we have α∧β=γ∧β. Let σ=α∧β.
For each |β|=n with τ_R≤ n≤ [N_R], we let σ=β|k for some n-τ_R≤ k≤ n-1 as we assume |α∧β|≥ |β|-τ_R. Let δ=α∧γ. Then δ≥σ and we set |δ|=|σ|+j=k+j for some 0≤ j≤ n-k. Let |α|=k+j+m and |γ|=k+j+l for some 0≤ m,l≤ n-k-j. See Figure <ref> for the illustration for J^(ii)_1. The sum of α, β, γ in J^(ii)_1 can be written as
(J_1^(ii))= (∑_n=τ_R^[N_R] ∑_k=n-τ_R^n-1∑_j=0^n-k∑_m=0^n-k-j∑_l=0^n-k-j∑_σ: |σ|=k∑_β: |β|=n,
β≥σ∑_δ: |δ|=k+j,
δ≥σ∑_α: |α|=k+j+m,
α≥δ∑_γ: |γ|=k+j+l,
γ≥δ
1_{Y^α, Y^β, Y^γ≠Δ} 1_{Y^β-Y^γ∈(0)} 1_{Y^α=Y^γ}).
Similar to the derivation of (<ref>), one may get that on the event {Y^α, Y^β, Y^γ≠Δ},
(Y^β-Y^γ∈(0)|_α∨_γ)
=(Y^β-Y^σ-W^β|(k+1)+(Y^σ+W^β|(k+1)-Y^γ)∈(0)|_α∨_γ) ≤C/(n-k)^d/2.
Also similar to (<ref>), we obtain
(Y^α=Y^γ) ≤1/V(R)C/(1+l+m)^d/2.
So (<ref>) becomes
(J_1^(ii))≤ ∑_n=τ_R^[N_R]∑_k=n-τ_R^n-1∑_j=0^n-k∑_m=0^n-k-j∑_l=0^n-k-j∑_σ: |σ|=k∑_β: |β|=n,
β≥σ∑_δ: |δ|=k+j,
δ≥σ∑_α: |α|=k+j+m,
α≥δ∑_γ: |γ|=k+j+l,
γ≥δ
(Y^α, Y^β, Y^γ≠Δ) C/(n-k)^d/21/V(R)C/(1+l+m)^d/2
= ∑_n=τ_R^[N_R]∑_k=n-τ_R^n-1∑_j=0^n-k∑_m=0^n-k-j∑_l=0^n-k-j (V(R)p(R))^k (V(R)p(R))^n-k (V(R)p(R))^j
(V(R)p(R))^l (V(R)p(R))^mC/(n-k)^d/21/V(R)C/(1+l+m)^d/2.
Use k+(n-k)+j+m+l≤ 3n≤ 3[N_R] and V(R)p(R)≤ e^θ/N_R to get the above is at most
C e^3θ1/V(R)∑_n=τ_R^[N_R]∑_k=n-τ_R^n-1∑_j=0^n-k∑_m=0^n-k-j∑_l=0^n-k-j1/(n-k)^d/21/(1+l+m)^d/2.
The sum of l is bounded by C/(1+m)^d/2-1 and then the sum of m gives at most CI(N_R). Next, the sum of j gives n-k+1≤ 2(n-k) and we are left with
Ce^3θ1/V(R) I(N_R) ∑_n=τ_R^[N_R]∑_k=n-τ_R^n-11/(n-k)^d/2-1.
The sum of k above is bounded by I(N_R) and the sum of n gives at most N_R. Combine the above to see
(J_1^(ii))≤ C 1/V(R) I(N_R)^2 N_R≤ CI(N_R),
where the last inequality uses (<ref>).
We are ready to finish the proof of Lemma <ref>.
Apply (<ref>), (<ref>), (<ref>) to get
(|K_R^1-K_R^2|) ≤ 1/V(R) [(J_1^(i))+(J_1^(ii))]
≤1/V(R)[Cτ_R+CI(N_R)].
When d≥ 5, we have I(N_R)≤ C and τ_R/R^d → 0 by (<ref>), so
(|K_R^1-K_R^2|) ≤ Cτ_R/R^d+C 1/V(R)→ 0.
When d=4, we get I(N_R)≤ C log R and τ_R=N_R/log N_R ≤ R^4/(log R)^2, so
(|K_R^1-K_R^2|) ≤ C1/(log R)^2+ C log R/R^4→ 0.
The proof is now complete.
§ PROOF OF LEMMA <REF>
The last section is devoted to the proof of Lemma <ref>. Recall K_R^2 from (<ref>) and K_R^3 from (<ref>) to see
( | K_R^2-K_R^3|) ≤1/V(R)(∑_n=τ_R^[N_R]∑_|β|=n∑_γ: |γ|≤ |β| 1_Y^β-Y^γ∈(0) 1_|γ∧β|> |β|-τ_R
×[1_ζ_β^1> |β|-τ_R 1_ζ_γ^1> |β|-τ_R-1_ζ_β^1> |β| 1_ζ_γ^1> |γ|]).
Since |γ|≤ |β|, we get 1_ζ_γ^1> |β|-τ_R≤ 1_ζ_γ^1> |γ|-τ_R. Bound the above square bracket by
1_{ |β|-τ_R < ζ_β^1≤ |β|} +1_{ |γ|-τ_R< ζ_γ^1 ≤ |γ|}.
Next, to get symmetry between β, γ, we let α=β∧γ. Notice that in the above sum, α, β, γ satisfy τ_R≤ |β|≤ [N_R], |γ|≤ |β| and |α| ≥ |β|-τ_R. One can easily deduce that 0≤ |α|≤ [N_R], β≥α, γ≥α, |β|≤ |α|+τ_R and |γ|≤ |α|+τ_R. Hence we may bound (<ref>) by
( | K_R^2-K_R^3|) ≤1/V(R)(∑_α: 0≤ |α|≤ [N_R]∑_β: β≥α,
|β|≤ |α|+τ_R∑_γ: γ≥α,
|γ|≤ |α|+τ_R 1_Y^β-Y^γ∈(0)
×[1_{ |β|-τ_R < ζ_β^1≤ |β|} +1_{ |γ|-τ_R< ζ_γ^1 ≤ |γ|}]).
Set
I_0^R:=1/V(R)(∑_α: 0≤ |α|≤ [N_R]∑_β: β≥α,
|β|≤ |α|+τ_R∑_γ: γ≥α,
|γ|≤ |α|+τ_R 1_Y^β-Y^γ∈(0) 1_{ |β|-τ_R < ζ_β^1≤ |β|}).
By symmetry between β and γ, one can check that (<ref>) implies
( | K_R^2-K_R^3|) ≤ 2 I_0^R.
It suffices to show that I_0^R → 0 as R→∞.
Recall the definition of ζ_β^1 from (<ref>).
For |β|-τ_R < ζ_β^1≤ |β| to occur on the event {Y^β≠Δ}, there has to be some (|β|-τ_R)^+ < i≤|β| such that
Y^β| i∈ S(∑_k=0^i-1Z_k),
which means that at time i, the particle β| i is sent to a place that has been visited by some particle δ before time i. Hence we may bound 1_{ |β|-τ_R < ζ_β^1≤ |β|} by
∑_i=(|β|-τ_R)^++1^|β|∑_δ 1_|δ|≤ i-1 1_Y^β| i=Y^δ.
Apply the above in (<ref>) to see
I_0^R≤1/V(R)(∑_α: 0≤ |α|≤ [N_R] ∑_β: β≥α,
|β|≤ |α|+τ_R∑_γ: γ≥α,
|γ|≤ |α|+τ_R 1_Y^β-Y^γ∈(0)
∑_i=(|β|-τ_R)^++1^|β|∑_δ 1_|δ|≤ i-1 1_Y^β| i=Y^δ).
Let I_1^R and I_2^R be, respectively, the sums over i≤ |α| and i>|α| on the right-hand side above; that is, we define
I_1^R:= 1/V(R)(∑_α: 0≤ |α|≤ [N_R] ∑_β: β≥α,
|β|≤ |α|+τ_R∑_γ: γ≥α,
|γ|≤ |α|+τ_R 1_Y^β-Y^γ∈(0)
∑_i=(|β|-τ_R)^++1^|α|∑_δ 1_|δ|≤ i-1 1_Y^β| i=Y^δ)
and
I_2^R:= 1/V(R)(∑_α: 0≤ |α|≤ [N_R] ∑_β: β≥α,
|β|≤ |α|+τ_R∑_γ: γ≥α,
|γ|≤ |α|+τ_R 1_Y^β-Y^γ∈(0)
∑_i=|α|+1^|β|∑_δ 1_|δ|≤ i-1 1_Y^β| i=Y^δ).
Then I_0^R≤ I_1^R+I_2^R and it suffices to show that
lim_R→∞ I_1^R=0 and lim_R→∞ I_2^R=0.
§.§ Convergence of I_1^R
To calculate I_1^R, we set |α|=k for some 0≤ k≤ [N_R]. Let |β|=k+l and |γ|=k+m for some 0≤ l,m≤τ_R. Noticing that i≤ |α|, we get β|i=α|i. Next, since |δ|≤ i-1≤ |α|-1, δ must branch off the family tree of α,β,γ before time |α|=k. For any (k+l-τ_R)^++1≤ i≤ k, set |δ∧α|=j for some 0≤ j≤ i-1 and let |δ|=j+n for some 0≤ n≤ i-1-j. The above case is similar to J_1^(i) as in Figure <ref> except for the notation.
Now write the sum in I_1^R as
I_1^R= 1/V(R)( ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=(k+l-τ_R)^++1^k∑_j=0^i-1∑_n=0^i-1-j∑_α: |α|=k∑_β: β≥α,
|β|=k+l∑_γ: γ≥α,
|γ|=k+m∑_δ: δ≥α|j
|δ|=j+n
1_{Y^β, Y^γ, Y^δ≠Δ} 1_Y^β-Y^γ∈(0) 1_Y^α| i=Y^δ).
Recall from (<ref>) to see
Y^β-Y^γ=∑_t=k+1^k+m W^β|t+∑_s=k+1^k+l W^γ|s.
The above is independent of _α∨_δ. Hence by conditioning on _α∨_δ, on the event {Y^β, Y^γ, Y^δ≠Δ} we get
(Y^β-Y^γ∈(0)|_α∨_δ)≤C/(1+l+m)^d/2,
where the last inequality is by Lemma <ref>.
Next, notice that α∧δ=α|j with |δ|=j+n. Use (<ref>) again to get
Y^α|i-Y^δ=∑_t=j+1^i W^α|t+∑_s=j+1^j+n W^δ|s.
Similar to (<ref>), we let W^δ|(j+n)=e_i for some 1≤ i≤ V(R) and use (<ref>) to obtain
(Y^α|i=Y^δ) = 1/V(R)(∑_t=j+1^i W^α|t+∑_s=j+1^j+n-1 W^δ|s∈(0))
≤1/V(R)C/(1+(i-1-j)+n)^d/2.
Now we are left with
I_1^R≤1/V(R) ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=(k+l-τ_R)^++1^k∑_j=0^i-1∑_n=0^i-1-j∑_α: |α|=k∑_δ: δ≥α|j
|δ|=j+n∑_β: β≥α,
|β|=k+l∑_γ: γ≥α,
|γ|=k+m
(Y^β, Y^γ, Y^δ≠Δ) C/(1+l+m)^d/21/V(R)C/(1+(i-1-j)+n)^d/2.
The probability of {Y^β, Y^γ, Y^δ≠Δ} gives p(R)^k p(R)^l p(R)^m p(R)^n while the sum of α, β,γ,δ gives V(R)^k V(R)^l V(R)^m V(R)^n. So the above is at most
I_1^R≤C/V(R)^2 ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=(k+l-τ_R)^++1^k∑_j=0^i-1∑_n=0^i-1-j
(V(R)p(R))^k+l+m+n1/(1+l+m)^d/21/(1+(i-1-j)+n)^d/2 .
Use k+l+m+n≤ 4[N_R] and V(R)p(R)≤ e^θ/N_R to get (V(R)p(R))^k+l+m+n≤ e^4θ. Next, the sum of n gives
∑_n=0^i-1-j1/(1+(i-1-j)+n)^d/2≤C/(1+(i-1-j))^d/2-1.
The sum of j gives
∑_j=0^i-1C/(1+(i-1-j))^d/2-1≤ CI(i-1)≤ CI(N_R).
The sum of i is bounded by τ_R-l≤τ_R. The sum of m gives at most C/(1+l)^d/2-1 and the sum of l gives CI(τ_R)≤ CI(N_R). Finally the sum of k gives [N_R]+1≤ 2N_R. Combine the above to conclude
I_1^R≤C/V(R)^2 e^4θ× CI(N_R) ×τ_R × CI(N_R) × 2N_R≤ CN_Rτ_R/V(R)^2 I(N_R)^2.
When d≥ 5, use I(N_R)≤ C and N_R=R^d to get I_1^R≤ Cτ_R/R^d→ 0 by (<ref>). When d=4, we get
I_1^R≤ CR^4/log RR^4/log R/log R/(R^4)^2 (Clog R)^2≤C/log R→ 0.
§.§ Convergence of I_2^R
Turning to I_2^R, again we let |α|=k for some 0≤ k≤ [N_R]. Let |β|=k+l and |γ|=k+m for some 0≤ l,m≤τ_R. Then we write
I_2^R= 1/V(R)(∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=k+1^k+l∑_α: |α|=k∑_β: β≥α,
|β|=k+l∑_γ: γ≥α,
|γ|=k+m 1_Y^β-Y^γ∈(0)∑_δ 1_|δ|≤ i-1 1_Y^β| i=Y^δ).
Since |δ|≤ i-1<|β|, there are two cases for the generation when δ branches off the family tree of β,γ:
(1) |γ∧δ|≤|β∧δ| ; (2) |γ∧δ| >|β∧δ| .
Let I_2^(1, R) (resp. I_2^(2, R)) denote the contribution to I_2^R from case (1) (resp. case (2)). Roughly speaking, case (1) gives that δ branches off the family tree of β,γ through the β line while case (2) is through the γ line after α=β∧γ. See Figure <ref> below.
(1) Case for I_2^(1,R). Since |δ|≤ i-1, we let β∧δ=β|j for some 0≤ j≤ i-1. Set |δ|=j+n for some 0≤ n≤ i-1-j. Now write the sum in I_2^(1,R) as
I_2^(1,R)= 1/V(R)( ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=k+1^k+l∑_j=0^i-1∑_n=0^i-1-j∑_α: |α|=k∑_β: β≥α,
|β|=k+l∑_γ: γ≥α,
|γ|=k+m∑_δ: δ≥β|j,
|δ|=j+n
1_{Y^β, Y^γ, Y^δ≠Δ} 1_Y^β-Y^γ∈(0) 1_Y^β| i=Y^δ).
Recall from (<ref>) to see
(Y^β-Y^β|i)-(Y^γ-Y^γ|(k+1))=∑_t=i+1^k+l W^β|t+∑_s=k+2^k+m W^γ|s,
The above are independent of _β|i∨_δ. Hence by conditioning on _β|i∨_δ, we use (<ref>) to get on the event {Y^β, Y^γ, Y^δ≠Δ},
(Y^β-Y^γ∈(0)|_β|i∨_δ) ≤sup_x(∑_t=i+1^k+l W^β|t+∑_s=k+2^k+m W^γ|s+x ∈(0))
≤C/(1+(k+l-i)+m)^d/2.
where the last inequality is by Lemma <ref>.
Next, notice that β∧δ=β|j with |δ|=j+n. Use (<ref>) again to get
Y^β|i-Y^δ=∑_t=j+1^i W^β|t+∑_s=j+1^j+n W^δ|s.
Similar to (<ref>), we use (<ref>) to obtain
(Y^β|i=Y^δ) = 1/V(R)(∑_t=j+1^i W^β|t+∑_s=j+1^j+n W^δ|s∈(0))
≤1/V(R)C/(1+(i-1-j)+n)^d/2.
Now we are left with
I_2^(1,R) ≤1/V(R)∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=k+1^k+l∑_j=0^i-1∑_n=0^i-1-j∑_α: |α|=k∑_β: β≥α,
|β|=k+l∑_γ: γ≥α,
|γ|=k+m∑_δ: δ≥β|j,
|δ|=j+n
(Y^β, Y^γ, Y^δ≠Δ)C/(1+(l+k-i)+m)^d/21/V(R)C/(1+(i-1-j)+n)^d/2.
The probability (Y^β, Y^γ, Y^δ≠Δ) is bounded by p(R)^k p(R)^l p(R)^m p(R)^n while the sum of α, β,γ,δ gives V(R)^k V(R)^l V(R)^m V(R)^n. So the above is at most
Ce^4θ/V(R)^2 ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=k+1^k+l∑_j=0^i-1∑_n=0^i-1-j1/(1+(l+k-i)+m)^d/21/(1+(i-1-j)+n)^d/2.
where we have used (V(R)p(R))^k+l+m+n≤ e^4θ as before. The sum of n gives C/(1+(i-1-j))^d/2-1 and the sum of j gives
∑_j=0^i-1C/(1+(i-1-j))^d/2-1≤ CI(i-1)≤ CI(N_R).
The sum of i is equal to
∑_i=k+1^k+l1/(1+(l+k-i)+m)^d/2≤C/(1+m)^d/2-1.
Then the sum of m is at most CI(τ_R)≤ CI(N_R). The sum of l gives 1+τ_R≤ 2τ_R and the sum of k is equal to [N_R]+1≤ 2N_R. Combine the above to see
I_2^(1,R)≤Ce^4θ/V(R)^2× CI(N_R) × CI(N_R) × 2τ_R× 2N_R≤ CN_Rτ_R/V(R)^2 I(N_R)^2.
The right-hand side above is identical to that in (<ref>) and so I_2^(1,R)→ 0 as R→∞.
(2) Case for I_2^(2,R). Since |δ∧γ|>|δ∧β|, δ must branch off the family tree of β, γ along the γ line after γ∧β. That is, we let δ∧γ=γ|(k+j) for some 1≤ j≤ m∧ (i-1-k) and |δ|=k+j+n for some 0≤ n≤ i-1-k-j. See Figure <ref> for illustration. Now write the sum in I_2^(2,R) as
I_2^(2,R)= 1/V(R)( ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=k+1^k+l∑_j=1^m∧ (i-1-k)∑_n=0^i-1-k-j∑_α: |α|=k∑_β: β≥α,
|β|=k+l∑_γ: γ≥α,
|γ|=k+m∑_δ: δ≥γ|(k+j),
|δ|=k+j+n
1_{Y^β, Y^γ, Y^δ≠Δ} 1_Y^β-Y^γ∈(0) 1_Y^β| i=Y^δ).
Recall from (<ref>) to see
Y^β-Y^β|i-(Y^γ-Y^γ|(k+j+1))=∑_t=i+1^k+l W^β|t+∑_s=k+j+2^k+m W^γ|s,
The above are independent of _β|i∨_δ. Hence by conditioning on _β|i∨_δ, on the event {Y^β, Y^γ, Y^δ≠Δ} we get
(Y^β-Y^γ∈(0)|_β|i∨_δ) ≤sup_x(∑_t=i+1^k+l W^β|t+∑_s=k+j+2^k+m W^γ|s+x ∈(0))
≤C/(1+(l+k-i)+(m-j))^d/2.
where the last inequality is by Lemma <ref>.
Next, notice that β∧δ=α=β|k with |δ|=k+j+n. Use (<ref>) again to get
Y^β|i-Y^δ=∑_t=k+1^i W^β|t+∑_s=k+1^k+j+n W^δ|s.
Similar to (<ref>), we use (<ref>) to obtain
(Y^β|i=Y^δ)= 1/V(R)(∑_t=k+1^i W^β|t+∑_s=k+1^k+j+n W^δ|s∈(0))
≤1/V(R)C/(1+(i-1-k)+j+n)^d/2≤1/V(R)C/(1+j+n)^d/2.
Now we are left with
I_2^(2,R)≤ 1/V(R)∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=k+1^k+l∑_j=1^m∑_n=0^i-1-k-j∑_α: |α|=k∑_β: β≥α,
|β|=k+l∑_γ: γ≥α,
|γ|=k+m∑_δ: δ≥γ|(k+j),
|δ|=k+j+n
(Y^β, Y^γ, Y^δ≠Δ) C/(1+(l+k-i)+(m-j))^d/21/V(R)C/(1+j+n)^d/2,
where we have replaced j≤ m∧ (i-1-k) by j≤ m.
The probability (Y^β, Y^γ, Y^δ≠Δ) is bounded by p(R)^k p(R)^l p(R)^m p(R)^n while the sum of α, β,γ,δ gives V(R)^k V(R)^l V(R)^m V(R)^n. So the above is at most
I_2^(2,R)≤Ce^4θ/V(R)^2 ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_i=k+1^k+l∑_j=1^m ∑_n=0^i-1-k-j
1/(1+(l+k-i)+(m-j))^d/21/(1+j+n)^d/2.
where we have used (V(R)p(R))^k+l+m+n≤ e^4θ as before. The sum of n gives C/(1+j)^d/2-1 and the sum of i gives
∑_i=k+1^k+l1/(1+(l+k-i)^++(m-j))^d/2≤C/(1+(m-j))^d/2-1.
We are arriving at
I_2^(2,R)≤Ce^4θ/V(R)^2 ∑_k=0^[N_R]∑_l=0^τ_R∑_m=0^τ_R∑_j=1^m 1/(1+(m-j))^d/2-11/(1+j)^d/2-1.
Interchanging the sums over m and j gives
∑_j=1^τ_R1/(1+j)^d/2-1∑_m=j^τ_R1/(1+(m-j))^d/2-1
≤∑_j=1^τ_R1/(1+j)^d/2-1 I(τ_R)≤ I(τ_R)^2≤ I(N_R)^2.
Finally, we get
I_2^(2,R) ≤Ce^4θ/V(R)^2∑_k=0^[N_R]∑_l=0^τ_R I(N_R)^2 ≤ CN_Rτ_R/V(R)^2 I(N_R)^2.
The right-hand side above is identical to that in (<ref>) and so I_2^(2,R)→ 0 as R→∞.
BDS89
M. Bramson, R. Durrett, and G. Swindle.
Statistical Mechanics of Crabgrass.
Ann. Probab., 17, no. 2, 444-481, (1989).
DP99
R. Durrett and E. Perkins.
Rescaled contact processes converge to super-Brownian motion in two or more dimensions.
Prob. Th. Rel. Fields, 114, 309–399, (1999).
FP16
S. Frei and E. Perkins.
A lower bound for p_c in range-R bond percolation in two and three dimensions.
Electron. J. Probab., 21: no. 56, 1–22, (2016).
HS90
T. Hara and G. Slade.
Mean-Field Critical Behaviour for Percolation in High Dimensions.
Commun. Math. Phys., 128, (1990), 333–391.
HS05
R. van der Hofstad and A. Sakai.
Critical points for spread-out self-avoiding walk, percolation, and the contact process above the upper critical dimensions.
Prob. Th. Rel. Fields 132: 438-470, (2005).
Hong21
J. Hong.
An upper bound for p_c in range-R bond percolation in two and three dimensions.
Math ArXiv, 2107.14173, (2021). To appear in Annales de l'Institut Henri Poincaré, Probabilités et Statistiques.
Hong24
J. Hong.
Rescaled SIR epidemic processes converge to super-Brownian motion in four or more dimensions.
In preparation.
Kes69
H. Kesten.
A sharper form of the Doeblin-Lévy-Kolmogorov-Rogozin inequality for concentration functions.
Math. Scand., 25, 133–144, (1969).
Kes90
H. Kesten.
Asymptotics in high dimensions for percolation.
Disorder in physical systems: A volume
in honor of John Hammersley (ed. G. Grimmett and D. J. A. Welsh). Clarendon Press, Oxford, (1990).
LZ10
S. Lalley and X. Zheng.
Spatial epidemics and local times for critical branching random walks
in dimensions 2 and 3.
Prob. Th. Rel. Fields, 148 (3–4), (2010), 527–566.
Pen93
M. D. Penrose.
On the spread-out limit for bond and continuum percolation.
Ann. Appl. Probab., 3 (1): 253–276, 1993.
entry_id: http://arxiv.org/abs/2307.01488v1
published: 20230704054131
title: SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification
authors: Junjie Wu, Dit-Yan Yeung
primary_category: cs.CL
categories: cs.CL
SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification
===================================================================================================
Despite their promising performance across various natural language processing (NLP) tasks, current NLP systems are vulnerable to textual adversarial attacks. To defend against these attacks, most existing methods apply adversarial training by incorporating adversarial examples. However, these methods have to rely on ground-truth labels to generate adversarial examples, rendering it impractical for large-scale model pre-training which is commonly used nowadays for NLP and many other tasks. In this paper, we propose a novel learning framework called SCAT (Self-supervised Contrastive Learning via Adversarial Training), which can learn robust representations without requiring labeled data.
Specifically, SCAT modifies random augmentations of the data in a fully label-free manner to generate adversarial examples.
Adversarial training is achieved by minimizing the contrastive loss between the augmentations and their adversarial counterparts. We evaluate SCAT on two text classification datasets using two state-of-the-art attack schemes proposed recently. Our results show that SCAT can not only train robust language models from scratch, but it can also significantly improve the robustness of existing pre-trained language models. Moreover, to demonstrate its flexibility, we show that SCAT can also be combined with supervised adversarial training to further enhance model robustness.
§ INTRODUCTION
Deep learning models, especially Pre-trained Language Models <cit.>, have achieved great success in the NLP field. However, a critical problem revealed by recent work is that these models are vulnerable to small adversarial perturbations <cit.>. Attackers can simply fool elaborately fine-tuned models by substituting a few tokens in the input text or adding perturbations in the embedding space. Hence, there is an urgent need to enhance the robustness of current language models.
Existing defense schemes in the text domain can be categorized into three main groups. The first one is adversarial training <cit.>, which has been widely used in computer vision (CV) tasks <cit.>. However, these methods typically fail to guard against textual attacks due to the discrete nature of text data. To solve this problem, the second group of methods applies discrete adversarial examples instead to perform adversarial training <cit.>. Lastly, some carefully designed models have proven to be effective when defending against specific attacks <cit.>.
However, most of these methods need ground-truth labels when generating adversarial examples, making them unsuitable for large-scale model pre-training which is a popular scheme these days.
Also, specially designed models like those proposed by <cit.> could not generalize well to state-of-the-art attacks <cit.>.
Recently,
contrastive learning has become a popular paradigm to obtain high-quality representations for different downstream tasks <cit.>. Motivated
by its great success, some CV
researchers proposed “label-free” adversarial pre-training methods to enhance model robustness <cit.>. However, these
frameworks rely on gradient-based attacks on continuous pixels and thus cannot be transferred in a straightforward way to NLP tasks.
To tackle the above problems, in this paper, we propose a novel learning framework named SCAT (Self-supervised Contrastive Learning via Adversarial Training). More concretely, we first apply a naive yet useful data augmentation method <cit.> to obtain different views of the original input. We then create adversarial examples [We term adversarial examples used in SCAT pre-training as “label-free adversarial examples” in the rest of the paper, in order to distinguish them from those generated by the two attackers we use in experiments.]
by replacing tokens in the augmentations via a masked language model. Adversarial training
is performed by minimizing the contrastive loss between the augmentations and their adversarial counterparts. It is noteworthy that no ground-truth labels are needed in the above process.
We evaluate the effectiveness of SCAT on two different text classification datasets using two state-of-the-art textual attackers. Our results show that SCAT can not only pre-train robust models from scratch, but it can also significantly enhance the robustness of existing pre-trained language models such as BERT <cit.>. Further performance gain in terms of robustness obtained when combining SCAT with supervised adversarial training also demonstrates the flexibility of SCAT.
The contributions of this paper are as follows:
* We propose an efficient token substitution-based strategy to create adversarial examples for textual inputs in a fully label-free manner.
* To the best of our knowledge, our proposed SCAT framework is the first attempt to perform textual adversarial training via contrastive learning without using labeled data.
* Extensive experimental results demonstrate that SCAT can endow different models a significant boost of robustness against strong attacks in flexible ways.
§ RELATED WORK
Textual Adversarial Attacks. Compared to the success of adversarial attacks in CV, attacking textual data is more challenging due to its discrete nature. Some early works <cit.> adapt gradient-based attacks to text data by introducing perturbations in the high-dimensional embedding spaces directly. However, these attacks lack semantic consistency since there do not exist obvious relationships between words and their embedding values. To make the generated adversarial examples fluent and similar to their original counterparts, recent NLP attacks focus on designing heuristic rules to edit characters <cit.>, words <cit.> or sentence segments <cit.> in the input sequence and achieve great performance on different benchmarks.
Defense Schemes in NLP. Various defense methods have been designed for NLP systems.
One of them is adversarial training, which has been a prevailing way to improve model robustness in CV <cit.>. To fill the gap between discrete text and continuous pixels when transferring this approach to NLP, researchers either attempt to add continuous perturbations to high-dimensional spaces like word embedding <cit.>, or generate discrete adversarial examples first and then perform adversarial training <cit.>. Several elaborately designed models
have also been proposed to defend against specific attacks such as character-level typos <cit.> and token-level substitutions <cit.>. More recently, certified robustness <cit.> has attracted much attention since it ensures the failure of adversarial attacks to some extent via the interval bound propagation method. Nevertheless, these methods still have much room for improvement. As pointed out by <cit.>, gradient-based adversarial training is not suitable for textual data. Also, most of these defense methods require access to ground-truth labels, but label scarcity is a common problem in many machine learning tasks due to the costly human annotation procedure. Hence, it is crucial to design label-free defenses for NLP systems.
Contrastive Learning and Adversarial Robustness. Self-supervised learning, especially contrastive learning, is becoming popular since it can generate high-quality representations over different modalities without using class labels <cit.>. Some recent studies demonstrated that unlabeled data could be used to generate robust representations for images <cit.>. Later on, following the SimCLR framework <cit.>, <cit.> and <cit.> proposed label-free approaches to craft gradient-based adversarial examples for adversarial contrastive learning. However, to the best of our knowledge, no efforts in the NLP field have been made in this direction, mainly due to the subtle differences between vision and language. In contrast, our proposed SCAT framework solves the above issues in a general and flexible paradigm
and can defend various textual attacks effectively.
§ METHODOLOGY
§.§ Preliminaries
Self-supervised Contrastive Learning. Introduced by <cit.>, SimCLR achieves performance comparable to models trained for image classification tasks in a supervised manner. The workflow of SimCLR can be briefly explained as follows: consider a randomly sampled minibatch {x_1, x_2, …, x_N}. For each example x_i, a data augmentation module is applied to yield two correlated views denoted by x_i^(1) and x_i^(2). Let B denote the set of 2N augmented examples thus formed from the original minibatch. Each augmented example x_i^(1)∈ B forms a positive pair with x_i^(2) and 2(N-1) negative pairs with the other augmented examples in the set B\{x_i^(1), x_i^(2)}. Let the encoder and projection head be f(·) and g(·), respectively. They map x_i^(1) and x_i^(2) to hidden representations z_i^(1) and z_i^(2) as z=g(f(x)). We can thus define the contrastive loss function for a positive pair (x_i^(1), x_i^(2)) as:
l_(x_i^(1), x_i^(2)) = -log exp( sim(z_i^(1), z_i^(2))/τ)/∑_x_j∈ B\{x_i^(1)} exp( sim(z_i^(1), z_j)/τ)
L_CL_(x_i^(1), x_i^(2)) = l_(x_i^(1), x_i^(2)) + l_(x_i^(2), x_i^(1))
where
z_j = g(f(x_j)). sim(x,y) denotes the cosine similarity between two vectors, and τ is a temperature parameter. We define { x_i_pos} as the positive set including the augmentations of x_i and { x_i_neg} the negative set containing the augmentations of other instances.
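For readers unfamiliar with this loss, a minimal PyTorch sketch of the batched computation is given below. It is not the authors' implementation; it averages over the 2N anchors rather than summing the two directions per pair (a constant factor), and the embedding dimensions are arbitrary.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, tau=0.5):
    """Contrastive loss over a batch: z1[i], z2[i] are the projected
    representations g(f(.)) of the two augmentations of example i.  Each
    anchor's positive is its partner view; the 2(N-1) other augmented
    examples in the batch act as negatives."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, dim)
    sim = z @ z.t() / tau                                    # cosine similarity / tau
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    # the positive partner of row i is row i+N (and vice versa)
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, pos)                         # mean over the 2N anchors

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)            # toy projections
print(nt_xent_loss(z1, z2).item())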
Adversarial Training. Adversarial training is a prevalent method for enhancing model robustness. The key idea behind it is solving a min-max optimization problem. Details about adversarial training's key idea can be found in Appendix <ref>. Overall, our model is built upon
these preliminaries. More details will be described below.
§.§ Problem Definition and Model Overview
Formally, given a set of data {(x_i,y_i)}_i=1^N and a text classification model M:X→ Y,
adversarial attackers aim to make M give wrong predictions with slight modifications on x_i. In this paper, we want to build a robust model M_R that can still generate correct labels against such perturbations in a fully label-free manner.
fig:model overview illustrates the workflow of our proposed method.
Given an input sentence, SCAT first randomly generates two augmented examples via a data augmentation module (data augmentation). Then, the Adv-Generator will craft a label-free adversarial example by substituting a few tokens in one of the augmentations. This step utilizes the gradient information provided by the contrastive loss between the two augmentations. Next, we add this label-free adversarial example to the original positive set as a regularizer to help our model defend malevolent attacks. Finally, we perform adversarial training
by minimizing the contrastive loss between the two augmentations and the adversarial counterpart.
Label-free Adversarial Example Generation
§.§ Data Augmentation
As pointed out by <cit.>, the selection of a data augmentation strategy is crucial to the performance of contrastive learning.
After considering several augmentation methods such as synonym replacement, inspired by <cit.>, we apply a simple yet useful random token substitution-based method to augment the original examples (see Appendix <ref> for details of our selection).
More concretely, we first build a vocabulary[Details are described in Appendix <ref>.] V for the given training corpus, then apply an operation T to transform each token w_i in the input sentence x_i = (w_1, w_2, …, w_k) to T(w_i):
T(w_i)=
w_i, with probability 1-p,
w^'_i, with probability p,
where w^'_i is a random token extracted from V and p ∈ [0,1] controls the diversity of the augmentations. Although this method cannot ensure the fluency of the generated sequence T(x_i), it enables our model to learn adequate instance-wise features from the raw text and hence is useful for generating high-quality representations.
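A minimal sketch of this augmentation is given below; the tiny vocabulary and the value of p are illustrative, and the actual vocabulary construction is the one described in the appendix referenced above.

import random

def random_token_substitution(tokens, vocab, p=0.3, rng=None):
    """The transformation T above: each token is kept with probability 1-p and
    replaced by a token drawn uniformly from the corpus vocabulary with
    probability p."""
    rng = rng or random.Random()
    return [w if rng.random() > p else rng.choice(vocab) for w in tokens]

vocab = ["market", "team", "government", "game", "shares", "league", "oil", "coach"]
sentence = "the stock market fell sharply after the report".split()
print(random_token_substitution(sentence, vocab, p=0.3, rng=random.Random(0)))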
§.§ Label-Free Adversarial Example Generator (Adv-Generator)
Since SCAT needs to generate adversarial examples
without using ground-truth labels, conventional state-of-the-art attackers cannot be applied here directly. Instead, we
design a method to generate label-free adversarial examples for pre-training efficiently.
The whole generation process is summarized in Algorithm <ref>, which can be divided into two main steps to be elaborated below.
Determine Attack Positions (line 1-8). Given two augmentations of an input sentence x_i, similar to <cit.>, we build x_i's label-free adversarial counterpart x_i^adv upon the copy of one of the augmented examples T(x_i)_1, named T(x_i)^c_1.
However, without gold labels, we could neither decide the attack positions by calculating the token importance scores like <cit.>, nor end the attack until the model's prediction has been changed. To determine the attack positions, we adopt the following steps: for each token w_j in T(x_i)^c_1, let ∇_w_jL be the gradient of the contrastive loss with respect to the embedding vector of w_j, where L is the contrastive loss between two augmentations T(x_i)_1 and T(x_i)_2, calculated by Eq.<ref> and Eq.<ref>. We then define the importance score I_w_j for w_j as the 1-norm of the gradient vector ∇_w_jL, i.e., I_w_j = ‖∇_w_jL ‖_1. A larger importance score I_w_j for w_j indicates a more significant contribution of the corresponding token to the change in the contrastive loss. ablation demonstrates the effectiveness of this importance score ranking method. The tokens with the largest importance scores will be identified as the attack positions.
Next, we rank the words in T(x_i)^c_1 based on their importance scores in descending order and build an attack set 𝐴𝑇𝑇. Note that for each sequence, we steadily pick the ϵ most important words to form 𝐴𝑇𝑇 since we lack ground-truth labels to determine when to stop the attack.
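The importance-score computation can be sketched as follows. The encoder, projection head and loss below are toy stand-ins: the loss is reduced to a single positive pair (in practice the full batched contrastive loss of Eq.<ref> and Eq.<ref> would be used), so the sketch only illustrates how the per-token gradient 1-norms are obtained.

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 1000, 64                              # toy sizes
embedding = nn.Embedding(vocab_size, dim)
encoder = nn.GRU(dim, dim, batch_first=True)            # stand-in for the real encoder f
projector = nn.Linear(dim, 32)                          # stand-in for the projection head g

def project(token_ids, emb_override=None):
    e = embedding(token_ids) if emb_override is None else emb_override
    _, h = encoder(e)
    return F.normalize(projector(h[-1]), dim=-1)

def token_importance(aug1_ids, aug2_ids, tau=0.5):
    """I_{w_j} = || dL / d emb(w_j) ||_1 for every position j of the first
    augmentation; L is simplified here to the (negative) scaled similarity of
    the single positive pair."""
    e1 = embedding(aug1_ids).detach().requires_grad_(True)
    z1 = project(aug1_ids, emb_override=e1)
    z2 = project(aug2_ids)
    loss = -(z1 * z2).sum() / tau
    loss.backward()
    return e1.grad.abs().sum(dim=-1)                     # one score per token position

aug1 = torch.randint(0, vocab_size, (1, 10))
aug2 = torch.randint(0, vocab_size, (1, 10))
print(token_importance(aug1, aug2).squeeze(0))           # rank positions by these scores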
Generate Label-Free Adversarial Example (line 9-20). Recent works have highly praised pre-trained masked language models in textual attacks, due to their ability to generate fluent and semantically consistent substitutions for input sentences <cit.>. Following <cit.>, we first use Byte-Pair Encoding (BPE) to tokenize T(x_i)^c_1 into sub-word tokens T(x_i)^sub_1 = (s_1, s_2, ..., s_m) and thus obtain 𝐴𝑇𝑇_sub, the sub-word tokenized counterpart of 𝐴𝑇𝑇. Since the tokens in 𝐴𝑇𝑇_sub are selected from T(x_i)^sub_1, we can obtain the K most probable substitutions for each token in 𝐴𝑇𝑇_sub by feeding T(x_i)^sub_1 into a pre-trained BERT model.
Afterwards, we start to replace tokens. Given a target word w_p in 𝐴𝑇𝑇, if it is tokenized into sub-words in 𝐴𝑇𝑇_sub, we first calculate the perplexity scores of possible sub-word combinations, then rank these perplexity scores to extract the top-K combinations, which form a substitution candidate set C. Otherwise, C consists of the substitutions generated in line 9. A synonym dictionary <cit.> is then applied to exclude antonyms from C, since masked language models cannot distinguish synonyms from antonyms. We do not filter out stop words, to remain consistent with our data augmentation method (data augmentation), where stop words also act as potential substitutions.
Lastly, instead of iterating over all the items, we randomly pick one item from the candidate set C to replace the target word w_p. This strategy is a trade-off between attack efficiency and effectiveness, since searching for optimal adversarial examples is time-consuming and thus impractical for pre-training. Nevertheless, as demonstrated by the experimental results in main, our attack strategy can still generate representative label-free adversarial examples for SCAT to learn robust representations. We also tested the effectiveness of our label-free attack; the results are given in Appendix <ref>. After going through all the words in 𝐴𝑇𝑇, we obtain the final label-free adversarial example x_i^adv. In addition, we take the contrastive loss between x_i^adv and the other clean augmentation as an extra regularizer in the final training objective (Eq.<ref>).
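A minimal sketch of the masked-language-model substitution step, using the HuggingFace transformers API, is given below. It deviates from the procedure above in that it masks the target word instead of feeding the unmasked sub-word sequence, and it omits the BPE handling, the perplexity ranking of sub-word combinations, the antonym filtering and the random selection; it only shows where the top-K candidates come from.

import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def topk_substitutions(sentence, target_word, k=8):
    """Top-k replacements proposed by the masked language model for
    `target_word` inside `sentence`."""
    tokens = sentence.split()
    masked = [tokenizer.mask_token if w == target_word else w for w in tokens]
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    return tokenizer.convert_ids_to_tokens(torch.topk(logits, k).indices.tolist())

print(topk_substitutions("the team won the championship game", "game"))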
§.§ Learning Objective
We can now formulate our learning objective. Specifically, we add x_i^adv to the original positive set { x_i_pos} so the final training set B will include 3N examples due to the expansion of { x_i_pos}. Then we learn robust representations by minimizing the contrastive loss between two augmentations and x_i^adv. Using the same settings in pre, we first define the contrastive objective with three samples in { x_i_pos} following Eq.<ref> and Eq.<ref>:
L_CL_3( T(x_i)_1, T(x_i)_2, x_i^adv) = L_ĈL̂(T(x_i)_1, T(x_i)_2)
+ L_ĈL̂(T(x_i)_1, x_i^adv)
+ L_ĈL̂(T(x_i)_2, x_i^adv)
Note that L_ĈL̂ is a modification of L_CL in which the size of the set B is increased to 3N in Eq.<ref>. The final objective for the pre-training part can thus be calculated as:
L_CL_adv = L_CL_3(T(x_i)_1, T(x_i)_2, x_i^adv)
L_Reg = L_CL(x_i^adv, T(x_i)_2)
L_Final = L_CL_adv + λ L_Reg,
where { T(x_i)_1, T(x_i)_2, x_i^adv} forms the new positive set, augmentations of the other instances in the same mini-batch correspond to the negative set, and λ is used to control the regularization strength.
§.§ Linear Evaluation
Since pre-trained representations cannot be used for text classification directly, we follow the existing self-supervised learning method (<cit.>) to leverage an MLP layer at the top of the encoder E's outputs to predict the labels for different input examples. We train this MLP layer on the original training set in a supervised manner while fixing all the parameters of E. The cross-entropy loss is adopted as the optimization objective here. While this simple, efficient linear evaluation method helps SCAT achieve promising robustness (as shown in Table <ref>), we also analyzed another conventional evaluation strategy: fine-tuning the whole model. Details can be found in Appendix <ref>.
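For concreteness, linear evaluation can be sketched as follows: the encoder is frozen and only a small classification head is trained with cross-entropy, using the AdamW settings reported in the appendix. The stand-in encoder and toy data below are illustrative, not the actual SCAT encoder.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))  # stand-in for E
for p in encoder.parameters():
    p.requires_grad = False                     # linear evaluation: E stays fixed

head = nn.Linear(64, 4)                         # layer on top of E's outputs predicting the labels
opt = torch.optim.AdamW(head.parameters(), lr=5e-4, weight_decay=5e-4)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(16, 32), torch.randint(0, 4, (16,))   # toy labeled batch
loss = loss_fn(head(encoder(x)), y)
loss.backward()
opt.step()
```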
§ EXPERIMENTS
§.§ Setup
Datasets. We evaluate our method on two representative text classification datasets: AG's News (AG) and DBPedia <cit.>. Detailed description and statistics of these two datasets are included in Appendix <ref>. For AG, we perform experiments on the 1k test examples collected by <cit.> since their data splits have been widely used. For DBPedia, we randomly selected 1k samples from its original test set for experiments. Since both AG and DBPedia do not provide a validation set, we randomly picked 2k instances from their original test sets for validation, while not overlapping with the 1k test examples.
Configuration. During pre-training, we adopt two backbone encoders: 1) base sized BERT (BERT_base, <cit.>); 2) randomly initialized Transformer <cit.> with the same architecture as BERT (12 layers, 12 heads, and hidden layer size of 768).
More implementation details regarding the Adv-Generator, projector, optimization, linear evaluation, and data pre-processing are described in Appendix <ref>.
Attackers. In our experiments, we use two state-of-the-art attackers to evaluate model robustness: TextFooler and BERT-Attack. See Appendix <ref> for detailed descriptions of these two attackers. We implement the two attackers following their publicly released versions. For TextFooler, the thresholds of both the synonym and sentence similarity scores are set to 0.5, according to its authors' instructions. As for BERT-Attack, <cit.> report results on AG without filtering out antonyms from the potential substitutions. We instead perform extra evaluations on both datasets with the antonym filtering process for a fair comparison, since masked language models cannot distinguish synonyms from antonyms.
Evaluation Metrics. To measure the robustness of SCAT from different perspectives, we introduce several evaluation metrics: (1) Clean Accuracy (Acc): model's accuracy score on clean examples; (2) After-Attack Accuracy (Atk Acc): model's accuracy score after being attacked; (3) Attack Failure Rate (AFR): percentage of adversarial examples that fail to change the model's prediction.
For all the metrics especially the last two core measures, higher values indicate better performance.
Baseline Models. For each backbone encoder mentioned in configuration, we perform experiments with two different baselines. Taking BERT as an example, these baselines are (1) BERT: a base model trained with clean examples in a supervised manner; (2) CL-BERT: a self-supervised contrastive learning-based model pre-trained without using extra adversarial examples. To reduce the effect of randomness brought by data augmentation and label-free adversarial example generation during pre-training, models including these steps were run with three different random seeds and we report the average metric scores in all the experiments.
§.§ Main Results
SCAT Improves Robustness. tab:main results summarizes main results on the two datasets. As we can see, supervised models encounter serious performance drop against the two attackers. By contrast, SCAT improves the robustness of these standard models, demonstrated by the significantly higher results across the board. For both datasets, SCAT-BERT and SCAT-Transformer outperform BERT and Transformer by an average attack failure rate of 24.9% against TextFooler and 24.1% against BERT-Attack.[The calculation process is detailed in Appendix <ref>.]
Since the key idea behind the two attackers differs, these impressive results illustrate the possibility to train a robust model that can defend against different types of attacks. It is worth noting that SCAT does not sacrifice the clean accuracy much compared to the supervised models, and the average drop is only 2.9%. This result is surprising since SCAT’s pre-training stage is fully label-free and it further
suggests that SCAT can still generate high-quality representations for downstream tasks while largely improving model robustness.
Comparison between SCAT and CL. Interestingly, we observe that using contrastive learning alone usually enhances model robustness,
which indicates that our data augmentation method helps models adapt to potential perturbations to a certain extent. Nevertheless, compared to CL, SCAT has more benefits. First, as shown in tab:main results, SCAT has a significantly stronger robustness performance. For example, for the two robustness-related metrics, SCAT models outperform CL models by 10.9%, 11.6% on AG and 5.0%, 15.8% on DBPedia respectively against TextFooler.
Moreover, SCAT can defend against high-quality adversarial examples better, demonstrated by its higher metric scores against the attackers with strict semantic constraints like TextFooler and BERT-Attack with antonym filtering. As shown in Table <ref>, SCAT enhances Transformer and BERT's performance with respect to the after-attack accuracy by an average of 31.7% against BERT-Attack with antonym filtering.[The calculation process is detailed in Appendix <ref>.]
Although CL-BERT outperforms SCAT-BERT when skipping the antonym filtering process under BERT-Attack on AG, adversarial examples obtained here may not be fluent and could be easily identified by heuristic rules due to the potential misuse of antonyms during token substitution. In contrast, for high-quality adversarial examples that are hard to be detected in advance, SCAT-BERT can defend them better than CL-BERT due to the robust pre-training stage.
Results on Pre-trained Encoder. Another advantage of SCAT is that except for training robust models from scratch, it can also perform robust fine-tuning for pre-trained language models without modifying their structures. For example, SCAT-BERT enhances the after-attack accuracy of BERT by 28.5% and 28.2% on both datasets against TextFooler, while only decreasing 3.1% and 0.9% of the clean accuracy. These results further indicate that SCAT fits well for the current trend of fine-tuning huge pre-trained language models.
§.§ Ablation Study
To assess the effectiveness of different modules in SCAT, we conduct an ablation study on AG with two additional baseline models (for each encoder). The first model does not utilize the Adv-Generator. Instead, it creates an extra augmentation for a given example and adds this extra sequence to the positive set as the label-free adversarial example. The other model removes the gradient-based ranking step in Algorithm <ref> and determines the attack positions randomly. Table <ref>, <ref> show the results. Since the comparison between SCAT, CL and supervised models has been discussed
in main, we focus on comparing SCAT with the two new baselines here.
Effect of Positive Set Size. We first compare SCAT against the baseline (CL+Extra Aug) that contrasts three random augmentations. As can be seen in tab:set size, SCAT outperforms this baseline almost across the board, indicating that label-free adversarial examples generated by the Adv-Generator (adv-gen) do highly contribute to the improvement of model robustness. The only exception appears when defending BERT-Attack without antonym filtering, which is not surprising since this baseline is an augmented version of CL. As mentioned in main, CL-based models are more robust to lower-quality adversarial examples that can be easily identified, since their representations are merely learned from random augmentations and should be more familiar with unnatural token substitutions.
Effect of Gradient-Based Ranking. We now switch to the ranking method in the Adv-Generator. As shown in tab:gradient ranking, replacing the ranking process with random selection (SCAT(Random)) leads to a drop in robustness against TextFooler. As for BERT-Attack, SCAT outperforms SCAT(Random) with Transformer, while SCAT(Random) has better robustness results for the BERT model. A likely reason is that attack positions determined by the gradient of the contrastive loss are more consistent with those used by TextFooler, which, unlike BERT-Attack, additionally considers whether deleting one token from the original input changes the target model's prediction when assigning attack positions. Overall, these results justify our choice of applying the gradient-based ranking strategy in the Adv-Generator.
§.§ Combining SCAT with Supervised Adversarial Training
Existing works <cit.> show that adversarial examples generated by attackers can be used to enhance model robustness via supervised adversarial training.
In this part, we further evaluate the flexibility of SCAT by training it with labeled adversarial examples. Specifically, we follow <cit.> to expand AG's training set using labeled adversarial examples crafted by TextFooler and BERT-Attack. Since it will be time-consuming to attack a specific victim model on the whole training set, we only pick BERT and one run of SCAT-BERT for comparison. The moderate standard deviation scores for different runs of SCAT-BERT in tab:mean justify this setting.
When testing BERT, we fine-tune a new BERT from scratch on the expanded dataset. For SCAT-BERT, we perform linear evaluation on the expanded dataset while fixing the pre-trained encoder.
tab:flexibility lists the results on AG's test set used earlier. As can be seen, performing adversarial training directly using the generated adversarial examples does help BERT defend against attacks better. However, the performance gap between BERT+Adv and SCAT-BERT is still significant, confirming the effectiveness of our method. Notably, adding labeled adversarial examples to the linear evaluation part further boosts both the robustness scores and the clean accuracy of SCAT-BERT, with an average improvement of 6.3% on the after-attack accuracy and 6.7% on the attack failure rate among all attack types. Since simply training a linear classifier might not make full use of the extra information, other potential methods such as pre-training on the expanded dataset using SCAT should further improve the results. This again illustrates the flexibility of SCAT and largely widens its application scope.
§ CONCLUSION
In this work, we propose a novel adversarial training framework named SCAT, which can learn robust textual representations in a fully label-free manner. We first come up with a token substitution-based method to craft adversarial examples from augmentations of the data without requiring ground-truth labels. Next, we implement adversarial training via minimizing the contrastive loss between the augmentations and their adversarial counterparts. In the experiments, we adopt two state-of-the-art attack algorithms on two text classification datasets to evaluate the robustness of different models. Our experimental results demonstrate that SCAT can both train robust language models from scratch and improve the robustness of pre-trained language models significantly. Moreover, we show the effectiveness of different modules of SCAT through an ablation study. Finally, we illustrate the flexibility of SCAT by combining it with supervised adversarial training, motivating further research in this area.
§ ETHICAL CONSIDERATIONS
Our proposed SCAT framework can be used to improve the robustness of existing text classification systems. Since SCAT is very effective and flexible, it can be widely used in the NLP community. Nevertheless, like other defense methods, SCAT can lead to a slight decrease in the clean accuracy for classification tasks. This trade-off between accuracy and robustness has to be taken into consideration in the context of the specific application when trying to decide whether to incorporate a defense scheme such as SCAT.
§ ADVERSARIAL TRAINING
We first introduce the key idea of adversarial training here. For continuous-valued image data, <cit.> proposed to perform adversarial training via solving a min-max optimization problem:
min_θ 𝔼_(x,y)∼D [ max_{‖δ‖_∞ ≤ ϵ} l(θ, x+δ, y) ]
where θ, δ, and l denote the model parameters, projected perturbations and loss function, respectively, and (x, y) is a labeled instance from the training set D. When turning to the text domain, existing works usually follow this optimization scheme with discrete adversarial examples.
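A minimal sketch of the inner maximization for continuous-valued inputs (e.g., images) is given below, using projected gradient ascent on the loss within an l_inf ball; the radius, step size, and toy classifier are illustrative assumptions rather than a specific method from the cited works.

```python
import torch
import torch.nn as nn

def pgd_perturbation(model, loss_fn, x, y, eps=0.03, alpha=0.01, steps=10):
    # Inner max of the min-max objective above: gradient ascent on the loss,
    # projected back onto the l_inf ball of radius eps after every step.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

# toy outer minimization step on a linear classifier
model, loss_fn = nn.Linear(8, 3), nn.CrossEntropyLoss()
x, y = torch.randn(16, 8), torch.randint(0, 3, (16,))
delta = pgd_perturbation(model, loss_fn, x, y)
loss_fn(model(x + delta), y).backward()   # adversarial loss used to update theta
```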
§ COMPARISON BETWEEN DIFFERENT DATA AUGMENTATION METHODS
In our preliminary experiments on AG, we considered two types of augmentation methods. The first type is the strategy in data augmentation. The other type is synonym substitution, where a synonym set for each token in vocabulary V is extracted from a synonym dictionary following steps and configurations in <cit.>. For each token in the input sentence, we also transform it using Eq.<ref> while w^'_i is picked from the token's synonym set. If the token does not appear in the synonym dictionary, we will not replace this token. For both augmentation methods, we perform contrastive pre-training and linear evaluation and use two attackers to attack them like main.
Table <ref> visualizes the results of our preliminary experiments. We observe that synonym substitution-based augmentations do not lead to good representations compared to random token substitution, especially when pre-training a Transformer encoder from scratch. As can be seen, Synonym-CL-Transformer only achieves a 34.0% clean accuracy score. This may be because, although we can generate two semantically coherent augmentations for contrastive learning using synonym substitution, the high-dimensional representations of these two augmentations are too similar for our model to learn enough linguistic features from the given input sentence. Moreover, Synonym-CL based models are more likely to overfit the synonym set they used in pre-training, which seriously hurts their generalization abilities. As shown in Table <ref>, although Synonym-CL-Transformer and Synonym-CL-BERT have strong robustness performances against TextFooler, their ability to defend against BERT-Attack is quite weak. On the contrary, CL-Transformer and CL-BERT have moderate scores against different types of attack. For the above reasons, we choose random token substitution as our data augmentation method in this work. But we do agree that random token substitution might not be the most suitable strategy and regard finding a better data augmentation method as a future research direction.
§ FURTHER DETAILS ON THE CONFIGURATION
§.§ Use of Different Tokenizers
In this work, we adopt two tokenizers for different purposes. The first one is the NLTK[<https://www.nltk.org/>] word tokenizer, which is used to tokenize sentences in a given dataset to build the vocabulary V in data augmentation. It is also adopted to tokenize a given sentence before the token substitution step in Adv-Generator starts. The BERT tokenizer provided by Hugging Face[<https://huggingface.co/transformers>] (uncased version) is used in the remaining tokenization scenes. The maximum sequence length is set to 128 for both datasets.
§.§ Pre-Training
Recall that during pre-training, we adopt two different backbone encoders: 1) base sized BERT (BERT_base); 2) randomly initialized Transformer with the same architecture as BERT (12 layers, 12 heads, hidden layer size of 768). During data augmentation, the replacing probability p is set to 0.3. Regarding our Adv-Generator, we set ϵ to 30%
and the number of candidates K to 48. For the projector, we take a 2-layer MLP module to project the encoder's output to a 128-dimensional hidden space. As for optimization, we use AdamW with a learning rate of 5e-5 and weight decay of 1e-6. In the final loss, we adopt 0.5 for the temperature parameter τ and 1/256 for λ. On AG, each model is pre-trained for 50 epochs with a batch size of 32, while the first 3 epochs act as the warm-up session. On DBPedia, since its training set is about 5 times larger than AG, to be consistent on the overall training steps, each model is pre-trained for 10 epochs while the first epoch acts as the warm-up session. For supervised baselines trained on clean examples, the number of warm-up epochs is set to 0. The batch size is selected from { 16, 32, 64, 128 } based on the robustness performance. The three random seed numbers are 2020, 2010, and 2000 respectively. SCAT takes about 6 days to train 50 epochs on AG and 10 epochs on DBPedia using one RTX 3090 GPU. All the codes are implemented with PyTorch. [<https://pytorch.org/>]
§.§ Linear Evaluation
At the linear evaluation stage, we train the 1-layer linear classifier on top of the encoder E's outputs for 100 epochs with a batch size of 128, then pick the epoch that performs best on the validation set for the final usage. Optimization is done using AdamW with a value of 5e-4 for both the learning rate and weight decay parameters.
§ PERFORMANCE OF OUR LABEL-FREE ATTACK
In adv-gen, we design a label-free method to dynamically craft adversarial examples during pre-training using the gradient of contrastive loss. In this section, we provide a more comprehensive view of our proposed attack by evaluating it with supervised Transformer and BERT. Results are listed in Table <ref>.
As can be seen, our attack successfully hurt supervised models' robustness to a certain extent. Although the power of our label-free attack is not as strong as state-of-the-art attacking methods, as mentioned in adv-gen, this is a trade-off between attack efficiency and effectiveness due to our limited computational resources. Nevertheless, these label-free adversarial examples still highly improve model robustness as illustrated in exp.
§ ANALYSIS OF EVALUATION METHODS FOR SCAT
In our experiments, we train a linear layer on top of the fixed SCAT encoder to perform linear evaluation for downstream tasks, following the standard setting of existing self-supervised learning models. While this simple method achieves great performance as illustrated in exp, there also exists another popular evaluation protocol in the NLP community, which is to fine-tune the whole model for the downstream task. In this section, we test the robustness of these two evaluation schemes against two attackers. Similar to sup-adv, we only pick one run of SCAT-Transformer and SCAT-BERT for comparison. For the fine-tuning method, we fine-tune the pre-trained SCAT encoder for 50 epochs with a learning rate of 2e-5, then pick the epoch that performs the best on the validation set for the final usage. Results are shown in Table <ref>.
As shown in Table <ref>, it is not surprising that fine-tuning leads to the highest clean accuracy score across the board, since it optimizes all the model parameters simultaneously after SCAT pre-training. However, the clean performance gap between fine-tuning and linear evaluation is only 3.7% on average. In contrast, linear evaluation outperforms fine-tuning under all the robustness-related metrics with large gaps. For example, linear evaluation beats fine-tuning by an average after-attack accuracy of 16.7%, probably because during fine-tuning, robust parameters in the pre-trained SCAT encoder will be updated to fit the clean examples, while linear evaluation will not modify these robust parameters. Moreover, linear evaluation's computational demand is much smaller than fine-tuning, especially for huge pre-trained language models with billions of parameters. For the above reasons, we choose to adopt linear evaluation for our experiments and we regard how to better combine fine-tuning with our SCAT framework as a future research direction to pursue.
§ DESCRIPTIONS AND STATISTICS OF DATASETS
In this section, we provide the detailed descriptions and statistics of the two datasets we used in experiments:
* AG's News (AG): Sentence-level classification task with 4 news-type categories: World, Sport, Business, and Science/Technology <cit.>. For each news article, we concatenate its title and description field as the input for our encoders following the setting of <cit.>.
* DBPedia[<https://www.dbpedia.org/>]: Extracted from Wikipedia, DBPedia is a sentence-level classification dataset containing 14 non-overlapping categories <cit.>. We also concatenate the title and abstract as the input for each Wikipedia article.
tab:dataset summarizes the statistics of the two datasets.
§ DETAILED DESCRIPTIONS OF TWO ATTACKERS
* TextFooler[< https://github.com/jind11/TextFooler>]: Proposed by <cit.>, given a sentence, TextFooler replaces the tokens with their synonyms extracted from the counter-fitting word embedding <cit.>.
It also uses similarity scores to filter out the low-quality examples to control the consistency between the generated adversarial examples and their original counterparts.
* BERT-Attack[<https://github.com/LinyangLee/BERT-Attack>]: Another powerful token replacement-based attack designed by <cit.>. For each target token, BERT-Attack first applies a pre-trained BERT to generate all potential replacing candidates, then substitutes the tokens greedily based on the change of model predictions.
§ DETAILS ABOUT THE CALCULATION OF AVERAGE RESULTS IN MAIN
In main, we mention that “For both datasets, SCAT-BERT and SCAT-Transformer outperform BERT and Transformer by an average attack failure rate of 24.9% against TextFooler and 24.1% against BERT-Attack”. For clarity, we provide details of the calculation here. For TextFooler, looking at rows 1, 3, 4, and 6 of Table <ref>, we have 24.9%≈ [(28.1-0.3) + (49.0-17.4) + (19.3-8.1) + (49.0-20.2)] / 4. For BERT-Attack, looking at rows 7, 9, 10, and 12, we have 24.1%≈ [(19.8-0.5) + (48.6-3.7) + (34.5-20.8) + (63.7-43.8) + (11.2-0.6) + (49.1-4.7) + (27.7-16.9) + (75.5-46.4)] / 8.
In main, we also mention that “As shown in Table <ref>, SCAT enhances Transformer and BERT's performance on after-attack accuracy by an average of 31.7% against BERT-Attack with antonym filtering”. Looking at rows 7, 9, 10 and 12 of Table <ref>, we have 31.7%≈ [(44.0-3.4) + (45.7-4.6) + (58.7-41.7) + (74.3-46.1)] / 4.
§ COMPLETE RESULTS OF THE MAIN EXPERIMENT
While we only report the average results for the main experiment in Table <ref>, we further report the complete results here for a more complete view of the stability of different models. tab:complete main results summarizes all the results.
§ COMPLETE RESULTS OF THE ABLATION STUDY
Similar to what we have done for the main experiment, we also report the complete results of the ablation study here for reference. tab:complete ablation summarizes all the results.
§ MEAN AND STANDARD DEVIATION (SD) OF SCAT-BERT'S RESULTS
In this section, we list the mean and standard deviation of SCAT-BERT's results among its three runs on AG in Table <ref> as a supplement to sup-adv.
http://arxiv.org/abs/2307.03167v1 | 20230706174827 | Risk-Averse Trajectory Optimization via Sample Average Approximation | ["Thomas Lew", "Riccardo Bonalli", "Marco Pavone"] | cs.RO | ["cs.RO", "cs.SY", "eess.SY", "math.OC"] |
Risk-Averse Trajectory Optimization
via Sample Average Approximation
Thomas Lew^1, Riccardo Bonalli^2, Marco Pavone^1
0.9^1Department of Aeronautics and Astronautics, Stanford University
0.9^2Laboratory of Signals and Systems, University of Paris-Saclay, CNRS, CentraleSupélec
August 1, 2023
============================================================================================================================================================================================================================
Trajectory optimization under uncertainty underpins a wide range of applications in robotics.
However, existing methods are limited in terms of reasoning about sources of epistemic and aleatoric uncertainty, space and time correlations, nonlinear dynamics, and non-convex constraints.
In this work,
we first introduce a continuous-time planning formulation with an average-value-at-risk constraint over the entire planning horizon.
Then, we propose a sample-based approximation that unlocks an efficient, general-purpose, and time-consistent algorithm for risk-averse trajectory optimization.
We prove that the method is asymptotically optimal and derive finite-sample error bounds.
Simulations demonstrate the high speed and reliability of the approach on problems with stochasticity in nonlinear
dynamics, obstacle fields, interactions, and terrain parameters.
§ INTRODUCTION
Accounting for uncertainty in the design of decision-making systems
is key to achieving reliable robotics autonomy.
Indeed, modern autonomy stacks account for uncertainty <cit.>, whether it comes from
noisy sensor measurements (e.g., due to perceptually-degraded conditions or a lack of features <cit.>),
dynamics (e.g., due to disturbances and difficult-to-characterize nonlinearities <cit.>),
properties of the environment (e.g., due to unknown terrain properties for legged robots <cit.> and Mars rovers <cit.>), or
interactions with other agents (e.g., in autonomous driving <cit.>).
Although trajectory optimization under uncertainty underpins a wide range of applications,
existing approaches
often make simplifying assumptions and approximations that reduce the range of problems they can reliably deal with. Specifically, there is a lack of methods capable of simultaneously handling
* sources of aleatoric uncertainty (e.g., external disturbances) and epistemic uncertainty (e.g., a drone transporting a payload of uncertain mass that introduces time correlations over the state trajectory)
that depend on state and control variables (e.g., interactions between different agents),
* uncertainty of arbitrary (non-Gaussian) distribution correlated over time and space (e.g., uncertain terrain properties),
* uncertain nonlinear dynamics and non-convex constraints,
* constraints that
bound the risk of constraints violations over the entire duration of the problem while satisfying time-consistency, i.e., the computed solution should not be sensitive to the choice of time discretization <cit.>.
Table <ref> summarizes the capabilities of existing methods. (This work was supported by the NASA University Leadership Initiative (grant #80NSSC20M0163) and the Air Force under an STTR award with Altius Space Machines, but solely reflects the opinions and conclusions of its authors.)
Contributions:
We introduce an efficient risk-averse trajectory optimization algorithm satisfying the previous desiderata:
* First, we propose a risk-averse planning formulation with average-value-at-risk (AVaR) constraints enforced over the entire planning horizon.
This formulation is applicable to a wide range of robotics problems with sources of aleatoric and epistemic uncertainty.
Its continuous-time nature ensures time-consistency, guiding the design of discretization-independent algorithms (see Remark <ref>).
Enforcing AVaR constraints enables accounting for tail events and facilitates numerical resolution due to their convexity properties (see Remark <ref>). In addition, solving this formulation yields feasible solutions to notoriously challenging planning problems with joint chance constraints (see Remark <ref>).
* Second, we propose a sample-based approximation rooted in the sample average approximation approach <cit.>.
We derive asymptotic optimality guarantees (Theorem <ref>) and finite-sample error bounds (Lemma <ref>) for this reformulation.
The analysis relies on mild assumptions (1-4), which enables considering a wide range of uncertainty sources that introduce spatial and temporal correlations and depend on state and control variables. The resulting approximated problem is smooth and sparse, facilitating efficient numerical resolution using off-the-shelf optimization tools.
* Finally, we show the reliability and speed of the proposed approach on problems with uncertain nonlinear
dynamics, obstacle fields, interactions, and terrain parameters.
This work indicates that risk-averse planning problems can be efficiently tackled via trajectory optimization. These findings challenge the popular belief that Monte-Carlo-based planning methods are computationally expensive <cit.>.
The key is a particular AVaR-constrained formulation which, when coupled with a sample-based approximation and a smooth, sparse reformulation,
unlocks the use of readily available optimization tools that enable efficient numerical resolution.
§ RELATED WORK
Risk-averse control, planning, and trajectory optimization methods typically enforce chance constraints or average-value-at-risk (AVaR) constraints.
AVaR constraints are more conservative than chance constraints, as they account for tail events,
and have desirable properties for robotics applications <cit.>.
Two types of risk constraints are popular in the literature.
First, pointwise risk constraints enforce individual constraints
at each time t∈[0,T],
and thus do not provide sufficient constraints satisfaction guarantees over the entire time horizon [0,T], see Remark <ref>.
Instead, it is preferable to enforce joint risk constraints that should hold jointly at all times t∈[0,T].
To enforce such joint risk constraints, many approaches bound the overall risk of constraints violations using Boole's inequality <cit.>,
which neglects time correlations of uncertainty and yields solutions that are not time-consistent (i.e., they are sensitive to the chosen time discretization <cit.>).
The main challenge in risk-averse planning and control lies in the ability to efficiently evaluate the risk of constraints violations
(e.g., the probability or the AVaR of constraints violations over the planning horizon).
For instance, estimating the joint probability of collision of a robotic system until it completes a task requires accounting for the effects of disturbances across multiple timesteps, and
generally requires evaluating an expectation integral
over time and space, which is challenging for general uncertainty distributions and constraints.
For this reason, many methods in the literature assume independent Gaussian-distributed random variables defining the problem <cit.>,
polytopic constraints sets <cit.>,
or use Boole's inequality to distribute risk over different constraints (e.g., obstacles) <cit.> which neglects correlations of uncertainty over the statespace.
These are reasonable assumptions and approximations in some applications. However, these approaches may lead to infeasibility, under-estimate the true risk of constraints violation, or hinder performance.
There is a lack of formulations and solution algorithms capable of truly capturing different sources of uncertainty
, see Table <ref>.
Monte-Carlo methods fulfill the desiderata introduced in Section <ref>, but are often considered to be computationally expensive <cit.>.
Existing Monte-Carlo risk-averse trajectory optimization and planning approaches are limited to problems with randomly-moving obstacles of fixed polytopic shapes for deterministic linear systems <cit.>, or require solving multiple problems for different paddings of the obstacles as a function of the constraints violation probability estimated from samples <cit.>. Both approaches are computationally expensive: the first requires using mixed-integer programming,
and the latter requires solving multiple instances of the problem <cit.>. Methods based on the scenario approach also use samples to enforce chance constraints <cit.>, but they are limited to convex problems (e.g., to systems with linear dynamics).
Our proposed risk-constrained problem formulation is more general than in prior work (see Table <ref>), capturing a broad range of trajectory planning problems.
In particular, in Section <ref>, we study a problem with uncertain nonlinear dynamics, sources of epistemic and aleatoric uncertainty, and moving obstacles, and a legged robot navigating over uncertain terrain whose friction coefficient varies over space.
Importantly, our approach is amenable to real-time implementation,
thanks to our AVaR-constrained formulation
approximated using samples,
unlocking the use of efficient optimization tools.
§ RISK FUNCTIONS
Let (Ω,ℱ,ℙ) be a probability space <cit.> and Z:Ω→ℝ be a random variable encoding constraints of the form Z≤ 0. For instance, Z may denote the minimum negative distance to obstacles and Z≤ 0 may denote obstacle avoidance constraints, see Figure <ref>. In practice, enforcing constraints with ℙ-probability one (i.e., Z(ω)≤ 0 for ℙ-almost all ω∈Ω) is infeasible, e.g., if disturbances are Gaussian-distributed and have unbounded support. Thus, we may enforce risk constraints instead.
We define the Value-at-Risk (VaR) and Average Value-at-Risk (AVaR)[The AVaR is also often referred to as the Conditional Value-at-Risk (CVaR) in the literature <cit.>, since AVaR_α(Z)=𝔼[Z | Z≥VaR_α(Z)] under certain regularity assumptions <cit.>.] at risk level α∈(0,1) as
VaR_α(Z) = inf_{t∈ℝ} { t : ℙ(Z>t) ≤ α },
AVaR_α(Z) = inf_{t∈ℝ} ( t + (1/α) 𝔼[max(Z-t,0)] ).
VaR_α(Z) is the (1-α)-quantile of Z (Z>VaR_α(Z) with probability at most α) and AVaR_α(Z) is the expected value of the values of Z larger than VaR_α(Z) <cit.>.
These risk functions yield two types of inequality constraints. First, the VaR gives chance constraints at probability level α:
VaR_α(Z) ≤ 0  ⟺  ℙ(Z>0) ≤ α.
The AVaR gives conservative formulations of chance constraints since VaR_α(Z) ≤ AVaR_α(Z) <cit.>:
AVaR_α(Z) ≤ 0  ⟹  ℙ(Z>0) ≤ α.
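For intuition, sample estimates of both quantities follow directly from these definitions; the sketch below plugs the empirical (1-α)-quantile into the Rockafellar–Uryasev expression for the AVaR. Function names and the toy data are illustrative assumptions.

```python
import numpy as np

def empirical_var_avar(z, alpha):
    # Sample estimates of VaR_alpha and AVaR_alpha, using
    # AVaR_a(Z) = min_t  t + E[max(Z - t, 0)] / a  with t set to the empirical VaR.
    z = np.asarray(z)
    var = np.quantile(z, 1.0 - alpha)                        # empirical (1-alpha)-quantile
    avar = var + np.mean(np.maximum(z - var, 0.0)) / alpha
    return var, avar

samples = np.random.default_rng(0).normal(size=100_000)
print(empirical_var_avar(samples, alpha=0.05))
```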
§ PROBLEM FORMULATION
We model the uncertain system in continuous time using a stochastic differential equation (SDE).
Let (Ω,ℱ,𝔽,ℙ) be a filtered probability space <cit.>, W be an n-dimensional Brownian motion on Ω <cit.> (e.g., the standard Wiener process), ξ:Ω→ℝ^q be a random variable representing q∈ℕ uncertain parameters, n,m∈ℕ be the state and control dimensions, x_0:Ω→ℝ^n be uncertain initial conditions, U⊂ℝ^m be a compact control constraint set, T>0 be the planning horizon, and b:ℝ^n× U×ℝ^q→ℝ^n and σ:ℝ^n× U×ℝ^q→ℝ^{n× n} be uncertain drift and diffusion coefficients. Given a control trajectory u:[0,T]→ U, we define the SDE
dx(t) = b(x(t),u(t),ξ) dt + σ(x(t),u(t),ξ) dW_t,  t∈[0,T],
x(0) = x_0,
(SDE)
with solution x_u.
Given running and final costs ℓ:ℝ^n× U→ℝ and φ:ℝ^n→ℝ, N∈ℕ inequality and n_h∈ℕ equality constraints functions G_j:ℝ^n×ℝ^q→ℝ and H:ℝ^n→ℝ^{n_h}, a risk parameter α∈(0,1), and a control space 𝒰 (see Sections <ref>-<ref>), we define the optimal control problem (OCP):
OCP:  inf_{u∈𝒰}  𝔼[ ∫_0^T ℓ(x_u(t),u(t)) dt + φ(x_u(T)) ]
s.t.  AVaR_α( sup_{t∈[0,T]} G(x_u(t),ξ) ) ≤ 0,
      𝔼[H(x_u(T))] = 0,
      x_u satisfies (SDE),
where G(x_u(t),ξ)=max_{j=1,…,N} G_j(x_u(t),ξ) is the maximum constraints violation at time t and we implicitly omit dependencies on the uncertainties ω.
This formulation of OCP captures a broad range of robotics applications, see Section <ref> for examples.
OCP is challenging to solve due to the non-convexity of G, the risk constraint (<ref>), the non-Gaussianity of the state trajectory x_u, and the dependency of all quantities on both epistemic uncertainty (modeled by the uncertain parameters ξ) and aleatoric uncertainty (modeled with the SDE in (<ref>)).
By (<ref>), the constraint (<ref>) in OCP is equivalent to
t + (1/α) 𝔼[ max( sup_{s∈[0,T]} G(x_u(s),ξ) - t, 0 ) ] ≤ 0   for some t∈ℝ,
so all constraints in OCP are expected value constraints. Thus, solving OCP amounts to evaluating expectations, which is generally challenging as it involves a nonlinear SDE and general nonlinear constraints functions. We propose a tractable approximation in the next section. Below, we discuss important considerations motivating this formulation.
Solving OCP yields feasible solutions to problems with joint chance constraints (CCs).
Indeed, given a constraint G(x_u(t),ξ) ≤ 0 that should hold at all times jointly with high probability 1-α, we can define the joint CC
ℙ( sup_{t∈[0,T]} G(x_u(t),ξ) > 0 ) ≤ α.
Thanks to (<ref>) and (<ref>), a conservative formulation of the joint CC (<ref>) is the corresponding constraint (<ref>) in OCP.
Thus, by appropriately defining G and solving OCP, constraints that often appear in robotics can be enforced with high probability over the entire state trajectory, see Section <ref>.
Enforcing
AVaR_α( G(x_u(t),ξ) ) ≤ 0  ∀ t∈[0,T]
instead (e.g., as in <cit.>) does not bound the risk of constraints violation over the entire trajectory.
Similarly, pointwise-in-time chance constraints (e.g., as in <cit.>) do not bound the probability of constraints violations over the entire planning horizon. Further, transposing discrete-time risk-averse control strategies to full-horizon settings (e.g., via Boole's inequality <cit.>) does not lead to time-consistency, i.e., the problem may become infeasible as the resolution of the time discretization increases, see <cit.> for further discussion.
Placing the time-wise supremum inside the constraint (<ref>) and accounting for time correlations is key to obtaining time-consistency and full-horizon constraints satisfaction.
[obstacle avoidance constraints]
N∈ℕ obstacles 𝒪_j of uncertain positions and shapes to be avoided at all times can be captured via signed distance functions (SDFs)
d_{𝒪_j}:ℝ^n×ℝ^q→ℝ, (x,ξ)↦ d_{𝒪_j(ξ)}(x)
such that d_{𝒪_j}(x_u(t))≥ 0 if and only if the system is collision-free,
i.e., x_u(t)∉𝒪_j.
For example, spherical obstacles centered at o_j of radii r_j can be described by d_{𝒪_j}(x)=r_j-‖x-o_j‖.
The fact that d_{𝒪_j} depends on randomness ω∈Ω via the uncertain parameters ξ(ω) allows capturing obstacles of uncertain position and shape.
For example, ellipsoidal obstacles centered at o_j with shape matrices Q_j can be encoded by the SDFs
d_{𝒪_j(ξ(ω))}(x) = 1 - (x-o_j(ω))^⊤ Q_j(ω) (x-o_j(ω)).
Collision avoidance joint CCs can be written as
ℙ( x_u(t)∉𝒪_j(ξ)  ∀ t∈[0,T], ∀ j=1,…,N ) ≥ 1-α.
By defining the N constraints functions G_j=-d_{𝒪_j}, so that
G(x_u(t),ξ) = max_{j=1,…,N} ( -d_{𝒪_j(ξ)}(x_u(t)) ),
the obstacle avoidance joint CC (<ref>) is equivalent to (<ref>).
Thus, by Remark <ref>, a conservative reformulation of (<ref>) is (<ref>):
AVaR_α( sup_{t∈[0,T]} max_{j=1,…,N} ( -d_{𝒪_j(ξ)}(x_u(t)) ) ) ≤ 0.
§ SAMPLE AVERAGE APPROXIMATION (SAA)
Given M independent and identically distributed (iid) samples ω^i∈Ω, captured by the multi-sample ω̅=(ω^1,ω^2,…) with joint distribution ℙ^∞, we approximate OCP by the following sampled optimal control problem, which we denote OCP_M(ω̅):
OCP_M(ω̅):  inf_{u∈𝒰, t∈ℝ}  (1/M) ∑_{i=1}^M [ ∫_0^T ℓ(x_u^i(t),u(t)) dt + φ(x_u^i(T)) ]
s.t.  t + 1/(α M) ∑_{i=1}^M max( sup_{s∈[0,T]} G(x_u^i(s),ξ^i) - t, 0 ) ≤ 0,
      -δ_M ≤ (1/M) ∑_{i=1}^M H(x_u^i(T)) ≤ δ_M,
      x_u satisfies (SDE),
where (x_u^i,ξ^i)=(x_u(ω^i),ξ(ω^i)) for i=1,…,M,
(<ref>) corresponds to n_h (pointwise) inequality constraints, and
δ_M>0 is a padding constant that decreases as the number of samples M increases (in practice, we set δ_M to a small value, see Theorem <ref>). This approximation is inspired by the sample average approximation (SAA) method <cit.>, which was recently extended to problems with equality constraints that often appear in robotics applications <cit.>. To the best of our knowledge, this approximation has not been applied to continuous-time problems taking the form of OCP.
Given M samples ω^i∈Ω of the uncertainty, which define M sample paths of the Brownian motion W(ω^i) and M samples of the uncertain parameters ξ(ω^i) and of the initial conditions x_0(ω^i), the approximation OCP_M(ω̅) is a tractable deterministic trajectory optimization problem,
see Section <ref>. In the next section, we study the theoretical properties of this approach.
§ THEORETICAL ANALYSIS
Depending on the samples ω^i, the computed control trajectories u_M(ω̅) that solve OCP_M(ω̅) will be different. What can we say about the quality of these solutions? We provide an analysis that relies on the following mild assumptions.
(A1) The drift and diffusion coefficients b and σ are continuous. Further, there is a bounded constant K≥ 0 such that for all x,y∈ℝ^n, all u,v∈ U, and all values ξ∈ℝ^q,
‖b(x,u,ξ)-b(y,v,ξ)‖ + ‖σ(x,u,ξ)-σ(y,v,ξ)‖ ≤ K(‖x-y‖+‖u-v‖).
(A2) The cost and constraints functions (ℓ,φ,G,H) are continuous. Further, there is a bounded constant L≥ 0 such that for all x,y∈ℝ^n, all u,v∈ U, and all values ξ∈ℝ^q,
|G(x,ξ)-G(y,ξ)| + ‖H(x)-H(y)‖ ≤ L‖x-y‖,
|G(x,ξ)| + ‖H(x)‖ ≤ L,
|ℓ(x,u)-ℓ(y,v)| + |φ(x)-φ(y)| ≤ L(‖x-y‖+‖u-v‖).
(A3) The control space 𝒰⊂ L^2([0,T],U) can be identified with a compact subset of ℝ^z for some z∈ℕ.
(A4) The uncertain initial state x_0 and parameters ξ are ℱ_0-measurable and square-integrable.
(A1) is standard and guarantees the existence and uniqueness of solutions to (SDE).
(A2) corresponds to standard smoothness assumptions on the cost and constraints functions. The constraints functions G and H can always be composed with a smooth cut-off function whose support contains the statespace of interest to ensure the satisfaction of the boundedness condition in (A2). (A3) states that the control space is finite-dimensional, which is an assumption that is satisfied in practical applications once a numerical resolution scheme is selected. In particular, (A3) holds for the space of stepwise-constant control inputs 𝒰 in (<ref>) that we use in this work.
(A4) makes rigorous the interpretation of x_0 and ξ as sources of epistemic uncertainty: (A4) states that the uncertain initial state x_0 and parameters ξ are randomized at the beginning of the episode and are independent of the Brownian motion W.
To describe the distance between solutions to OCP_M(ω̅) and to OCP, given non-empty compact sets A,B⊆𝒰, we define the deviation
𝔻(A,B) = sup_{u∈ A} inf_{v∈ B} ‖u-v‖.
As defined above, 𝔻 satisfies the property that A⊆ B if 𝔻(A,B)=0. Thus, if we show that the solution sets S_M(ω̅) and S of OCP_M(ω̅) and OCP satisfy 𝔻(S_M(ω̅),S)=0, then we can conclude that S_M(ω̅)⊆ S, i.e., any optimal solution of OCP_M(ω̅) is an optimal solution of the original problem OCP.
Theorem <ref> below states that this result holds with probability one
in the limit as the sample size M increases.
Given M∈ℕ samples and any constants C>0 and ϵ∈(0,1/2), define δ_M=CM^(ϵ-1/2), and denote the sets of optimal solutions to OCP and OCP_M(ω̅) by S and S_M(ω̅), respectively.
Then, under assumptions (A1)-(A4), ℙ^∞-almost-surely,
lim_{M→∞} 𝔻(S_M(ω̅),S) = 0.
The proof of Theorem <ref> relies on recent results in <cit.>, see the appendix.
Theorem <ref> gives convergence guarantees to optimal solutions to OCP (in particular, the constraint (<ref>) is satisfied) as the sample size increases
, justifying the proposed approach.
This asymptotic optimality result is time-consistent, i.e., independent of the chosen discretization scheme for numerical resolution.
This contrasts with approaches that start with a discrete-time formulation and could yield different results for different discretizations <cit.>.
The following result gives high-probability finite-sample error bounds for the average-value-at-risk constraint given a solution
of the sample-based approximation.
The proof (see the appendix) relies on concentration inequalities <cit.>.
Let ϵ>0, β∈(0,1), and M∈ℕ be such that M≥ϵ^-2(C̃+h̅(2log(1/β))^1/2)^2 for some finite, large-enough constants (C̃,h̅).
Denote any solution to OCP_M(ω̅) by u_M(ω̅).
Then, under assumptions (A1)-(A4),
AVaR_α( sup_{s∈[0,T]} G(x_{u_M(ω̅)}(s),ξ) ) ≤ ϵ
with probability at least 1-β over the M iid samples ω^i.
By Lemma <ref>, replacing constraint (<ref>) in OCP_M(ω̅) with
t + 1/(α M) ∑_{i=1}^M max( sup_{s∈[0,T]} G(x_u^i(s),ξ^i) - t, 0 ) ≤ -ϵ
would guarantee the satisfaction of the average-value-at-risk constraint in OCP if the sample size M is large enough. Note that the error ϵ can be made arbitrarily small (with increasingly high probability 1-β) by increasing the sample size M.
In practice, the bound in Lemma <ref> is conservative, and numerical results demonstrate that a few samples are sufficient to obtain high-quality solutions to challenging planning problems, see Section <ref>.
§ NUMERICAL RESOLUTION
Theorem <ref> justifies approximating OCP using samples ω^i and searching for solutions to the deterministic relaxation OCP_M(ω̅) instead. In this section, we describe a numerical method for efficiently computing solutions to OCP_M(ω̅).
Control space parameterization:
We optimize over open-loop controls u parameterized by S∈ℕ stepwise-constant inputs ū_s of duration Δ t=T/S, described by the set
𝒰 = { u :  u(t)=∑_{s=0}^{S-1} ū_s 1_{[sΔ t,(s+1)Δ t)}(t),  (ū_0,…,ū_{S-1})∈ U×…× U }.
This set clearly satisfies (A3), since it can be identified with a compact subset of ℝ^{Sm}. Note that any square-integrable open-loop control u can be approximated arbitrarily well by some u∈𝒰 by increasing S, so this class of functions is expressive.
Alternatively, one could also optimize over certain classes of closed-loop controllers (e.g., controls of the form u=u̅+Kx);
an approach that is common in the literature
<cit.>.
Finite-dimensional approximation:
Numerically solving general instances of OCP_M(ω̅) requires discretizing the problem. In this work, we discretize OCP_M(ω̅) as follows:
inf_{u∈𝒰, t∈ℝ}  (1/M) ∑_{i=1}^M ( ∑_{k=0}^{S-1} ℓ(x_u^i(kΔ t),u(kΔ t)) Δ t + φ(x_u^i(SΔ t)) )
s.t.  t + 1/(α M) ∑_{i=1}^M max( max_{k=0,…,S} G(x_u^i(kΔ t),ξ^i) - t, 0 ) ≤ 0,
      -δ_M ≤ (1/M) ∑_{i=1}^M H(x_u^i(SΔ t)) ≤ δ_M,
      x_u^i((k+1)Δ t) = x_u^i(kΔ t) + b(x_u^i(kΔ t),u(kΔ t),ξ^i) Δ t + σ(x_u^i(kΔ t),u(kΔ t),ξ^i)(W^i_{(k+1)Δ t}-W^i_{kΔ t}),  ∀ k=0,…,S-1,  i=1,…,M,
      x_u^i(0) = x_0^i,  ∀ i=1,…,M,
where (x_0^i,ξ^i,W^i)=(x_0(ω^i),ξ(ω^i),W(ω^i)) denote the M samples of the initial conditions, parameters, and sample paths of the Brownian motion, and x_u^i(kΔ t)=x_u(kΔ t, ω^i) denotes the associated sample paths of the state trajectory, for conciseness.
The constraint (<ref>) corresponds to an Euler-Maruyama discretization of (<ref>).
This discretization of OCP_M(ω̅) allows for a simple and efficient implementation, although more accurate integration schemes could be used.
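As a concrete illustration of how the M sample paths entering this discretized problem can be generated, the sketch below performs an Euler–Maruyama rollout under a stepwise-constant control. The drift, diffusion, and dimensions in the toy usage are illustrative assumptions, not the systems studied later.

```python
import numpy as np

def rollout_paths(b, sigma, x0, xi, u_bar, T, S, rng):
    # Euler-Maruyama rollout of M sample paths, mirroring the discretized dynamics above.
    # b, sigma: callables (x, u, xi) -> drift vector / diffusion matrix
    # x0: (M, n) sampled initial states, xi: (M, q) sampled parameters, u_bar: (S, m) controls
    M, n = x0.shape
    dt = T / S
    x = np.empty((M, S + 1, n))
    x[:, 0] = x0
    for k in range(S):
        dW = rng.normal(scale=np.sqrt(dt), size=(M, n))      # Brownian increments W_{(k+1)dt} - W_{k dt}
        for i in range(M):
            x[i, k + 1] = (x[i, k]
                           + b(x[i, k], u_bar[k], xi[i]) * dt
                           + sigma(x[i, k], u_bar[k], xi[i]) @ dW[i])
    return x

# toy usage: 1-D controlled diffusion dx = (u - xi*x) dt + 0.1 dW
rng = np.random.default_rng(0)
paths = rollout_paths(lambda x, u, xi: u - xi * x,
                      lambda x, u, xi: 0.1 * np.eye(1),
                      x0=rng.normal(size=(20, 1)),
                      xi=rng.uniform(0.5, 1.5, size=(20, 1)),
                      u_bar=np.zeros((10, 1)), T=1.0, S=10, rng=rng)
print(paths.shape)   # (20, 11, 1)
```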
The SDE constraint (<ref>) can be either explicitly enforced by optimizing over both the state variables x_u^i(kΔ t) and the control variables u(kΔ t) and enforcing (<ref>), or implicitly by only optimizing over the control variables u(kΔ t) that parameterize the state trajectory via (<ref>). In this work, we opt for the latter as it reduces the number of variables, albeit at a potential reduction in numerical stability.
As shown in <cit.>,
the alternative option of parameterizing both state particles and control variables could also be computationally efficient.
By introducing M additional variables y_i∈ℝ, the inequality constraints (<ref>) are equivalent to the set of constraints
(Mα) t + ∑_{i=1}^M y_i ≤ 0,
0 ≤ y_i,  ∀ i=1,…,M,
G_j(x_u^i(kΔ t),ξ^i) - t ≤ y_i,  ∀ i=1,…,M, ∀ j=1,…,N, ∀ k=0,…,S.
Note that these constraints are smooth if every G_j is smooth.
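To make the equivalence concrete, the sketch below evaluates the sampled AVaR term in this epigraph form by solving the corresponding linear program over (t, y_1, …, y_M) and compares it with the direct quantile-based estimate. The problem data are synthetic and the helper name is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def sampled_avar_epigraph(G, alpha):
    # Minimize t + (1/(alpha*M)) * sum_i y_i  s.t.  y_i >= 0  and
    # G[i, k] - t <= y_i for all i, k (collapsed below via max over k, which is equivalent).
    M = G.shape[0]
    c = np.concatenate([[1.0], np.full(M, 1.0 / (alpha * M))])   # variables [t, y_1, ..., y_M]
    g_worst = G.max(axis=1)                                      # max_k G[i, k]
    A = np.zeros((M, M + 1))
    A[:, 0] = -1.0                                               # -t ...
    A[np.arange(M), 1 + np.arange(M)] = -1.0                     # ... - y_i <= -g_worst[i]
    res = linprog(c, A_ub=A, b_ub=-g_worst,
                  bounds=[(None, None)] + [(0.0, None)] * M)
    return res.fun

rng = np.random.default_rng(1)
G = rng.normal(-0.5, 0.4, size=(40, 16))          # toy sampled constraint values G(x^i(k*dt), xi^i)
alpha = 0.1
g = G.max(axis=1)
t = np.quantile(g, 1.0 - alpha)
direct = t + np.mean(np.maximum(g - t, 0.0)) / alpha
print(sampled_avar_epigraph(G, alpha), direct)    # the two values agree (up to solver tolerance)
```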
Numerical resolution: With this reformulation, the discretized problem can be solved using reliable off-the-shelf optimization tools such as the solver of <cit.>.
In this work, we leverage sequential convex programming (SCP). SCP consists of solving a sequence of convex approximations of the problem until convergence.
The main appeal of SCP is that it typically only requires a few convex approximations of the original non-convex program to reach accurate solutions.
Since the set of constraints (<ref>) is sparse, the convex approximations can be solved efficiently.
In contrast, interior-point methods for non-convex programming may take a larger number of small steps that each require evaluating gradients and Hessians of the original program.
Since this gradient-Hessian evaluation is potentially the computational bottleneck in solving the discretized problem, SCP is a promising solution scheme for this class of problems.
See
<cit.> for further details on SCP for trajectory optimization.
Beyond the ability to account for tail events, the smoothness of the reformulation in (<ref>) motivates enforcing AVaR constraints instead of the chance constraints that are often used in the literature.
Indeed, a typical chance constraint ℙ(sup_t G(x_u(t),ξ)>0)≤α is equivalently written as
𝔼[1_{(0,∞)}(sup_t G(x_u(t),ξ))]≤α, where 1_{(0,∞)}(z)=1 if z>0 and 0 otherwise.
This constraint only takes a simple form in particular cases <cit.>.
Approximating this chance constraint from samples would give
(1/M) ∑_{i=1}^M 1_{(0,∞)}( sup_t G(x_u^i(t),ξ^i) ) ≤ α,
which is not smooth since 1_{(0,∞)}(·) is a step function. To solve the resulting problem with gradient-based methods, one would need to formulate and solve smooth approximations instead <cit.>, or solve the problem multiple times with different constraints paddings <cit.>, which is computationally expensive.
In contrast, the reformulation in (<ref>) is smooth and exact (up to errors from the sample-based and discrete-time approximations), which allows efficient numerical resolution.
Example <ref> (obstacle avoidance constraints).
With M samples of the state trajectory x_u^i and of the obstacles 𝒪_j(ξ^i), the constraint in (<ref>) can be approximated with the set of constraints (<ref>) with G_j(x_u^i(kΔ t),ξ^i) = -d_{𝒪_j(ξ^i)}(x_u^i(kΔ t)).
This formulation applies to general obstacle representations and can be specialized to particular obstacle shapes.
For example, if the N uncertain obstacles 𝒪_j are spheres of uncertain radii r_j and centers o_j, then the last term in (<ref>) is
r_j^i - ‖x_u^i(kΔ t) - o_j^i‖ - t ≤ y_i
for all i,j,k, where (r_j^i,o_j^i)=(r_j,o_j)(ω^i) are M iid samples.
If the obstacles are ellipsoidal as in (<ref>), then (<ref>) becomes
(x_u^i(kΔ t) - o_j^i)^⊤ Q_j^i (x_u^i(kΔ t) - o_j^i) - 1 - t ≤ y_i
for all i,j,k. This gives a set of differentiable constraints that can be passed to a non-convex optimization algorithm.
We note that potential corner-cutting due to the time discretization of the constraint is easily addressed via different methods, e.g.,
with the approach in <cit.>,
by enforcing (<ref>) on a finer grid, or by padding obstacles.
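For sampled spherical obstacles, assembling the constraint terms that enter these epigraph constraints can be vectorized as in the sketch below; the array shapes and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sphere_constraint_terms(x_paths, centers, radii):
    # G_j(x_u^i(k*dt), xi^i) = r_j^i - ||x_i(k) - o_j^i|| for sampled spherical obstacles;
    # returns an (M, S+1, N) array whose entries feed the constraints G - t <= y_i above.
    diff = x_paths[:, :, None, :] - centers[:, None, :, :]      # (M, S+1, N, dim)
    return radii[:, None, :] - np.linalg.norm(diff, axis=-1)

M, S, N, dim = 20, 10, 3, 3
rng = np.random.default_rng(0)
G = sphere_constraint_terms(rng.normal(size=(M, S + 1, dim)),   # sampled state trajectories
                            rng.normal(size=(M, N, dim)),       # sampled obstacle centers
                            rng.uniform(0.2, 0.5, size=(M, N))) # sampled obstacle radii
print(G.shape)   # (20, 11, 3)
```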
§ APPLICATIONS AND RESULTS
We apply the proposed approach to three challenging planning problems with diverse sources of uncertainty.
Code is available at 0.965<https://github.com/StanfordASL/RiskAverseTrajOpt>.
1) Drone planning with uncertain obstacles.
The state is x=(p,ṗ)∈ℝ^6 and the input is u∈ℝ^3. Dynamics are given by
b(x,u,ω) = [ṗ; 0] + (1/m(ω)) ( β_drag [0; -|ṗ|ṗ] + [0_{3×3}; I_{3×3}] (u+Kx) ),
and σ(x,u,ω) = (1/m(ω)) (0_{3×3}, β_σ I_{3×3})^⊤.
The drone transports an uncertain payload, modeled by assuming that the total mass m of the system follows a uniform distribution.
(β_drag,β_σ)
are drag and diffusion coefficients and K
is a feedback gain.
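A minimal sketch of this drift term is given below; the gain K, the drag coefficient, and the mass range are illustrative values, not the paper's parameters.

```python
import numpy as np

def drone_drift(x, u, mass, K, beta_drag=0.1):
    # Drift b(x, u, omega) of the drone model above: x = (p, p_dot) in R^6, u in R^3,
    # with quadratic drag and a feedback term, scaled by the sampled total mass.
    p_dot = x[3:]
    acc = (beta_drag * (-np.abs(p_dot) * p_dot) + (u + K @ x)) / mass
    return np.concatenate([p_dot, acc])

rng = np.random.default_rng(0)
K = -0.5 * np.hstack([np.eye(3), np.eye(3)])   # simple stabilizing gain (illustrative)
x, u = rng.normal(size=6), np.zeros(3)
m = rng.uniform(1.0, 1.5)                      # uncertain total mass (payload uncertainty)
print(drone_drift(x, u, m, K))
```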
We consider three uncertain ellipsoidal obstacles whose shape matrices
have uncertain axes distributed according to a uniform distribution,
and enforce collision avoidance constraints as described in Example <ref>.
The objective is reaching a goal (H(x(T))=x(T)-x_g) while minimizing control effort ℓ(x,u)=u^⊤ Ru and
satisfying collision avoidance constraints at risk level α as in (<ref>).
This nonlinear system has sources of aleatoric (disturbances modeled as a Wiener process) and epistemic (mass and obstacles) uncertainty, which makes solving the problem with classical approaches challenging.
We formulate the sample-based approximation with S=20 nodes and solve it with SCP.
We present an example of results in Figure <ref> (M=50, α=10%) with the trajectory samples x_u^i obtained at convergence. We report the associated Monte-Carlo histogram of the negative minimum distance over t∈[0,T] in (<ref>) with the mean, VaR_α, and AVaR_α minimum values of (<ref>). The average-value-at-risk of collision is below 0, indicating that the solution of OCP_M(ω̅) is feasible for OCP.
We run the approach 30 times for different values of the risk parameter α each,
validate the satisfaction of the risk constraint using 10^5 Monte-Carlo samples, and report median results (over the 30 runs of the approach) in Table <ref>.
The _α constraint is empirically satisfied (i.e., _α≤0) for each value of α. As a result, associated joint chance constraints for the collision avoidance constraints are satisfied, since the percentage of constraints violations is always below α, validating the discussion in Remark <ref>.
Safer behavior is obtained for smaller values of α, effectively balancing the tradeoff between the efficiency and the risk of constraints violations. In contrast, a deterministic baseline that does not consider uncertainty often violates constraints.
[Figure: Drone planning total computation times.]
In Figure <ref>, we report total computation times for different sample sizes M∈{20,30,50} (we report the median over 30 runs with α=5%, evaluated on a laptop with an i7-10710U CPU (1.10 GHz) and 16 GB of RAM). We use a zero initial guess u̅_s=0 and stop after 10 SCP iterations, which is sufficient to obtain a final SCP iteration error u^k-u^k-1/u^k≤ 1%. Albeit our implementation is written in Python (with JAX <cit.> and <cit.>), computation times are reasonable and amenable to real-time applications.
Computation time scales roughly linearly in the sample size. Parallelization on a GPU could enable using a larger sample size M while retaining speed, albeit results show that using a small sample size suffices to obtain feasible solutions.
2) Autonomous driving with a pedestrian.
The state is x=(x_ego, x_ped), with x_ego=(p^x_ego,p^y_ego,v_ego,ϕ_ego) the ego-vehicle, x_ped=(p^x_ped,p^y_ped,v^x_ped,v^y_ped) a pedestrian, and u=(a,τ) the ego control input. The coupled system follows
ẋ_ego = (v_ego cos(ϕ_ego), v_ego sin(ϕ_ego), u),
ṗ_ped = v_ped,
dv_ped = f(x,ω) dt + σ dW_t,
from (x_ego,0, x_ped,0) with known x_ego,0 but Gaussian-distributed x_ped,0 due to uncertainty from perception. The term f(x,ω) represents interaction forces inspired by the Social Force Model <cit.>
f(x(t),ω) = ω_1 (v_ped^des - v_ped(t)) e_ped + ω_2 (p_ego(t)-p_ped(t)) / ‖p_ego(t)-p_ped(t)‖,
where (ω_1,ω_2) represents the tendency of a pedestrian to maintain a desired speed v_ped^des in the direction e_ped and to actively avoid the car. A pedestrian with large values of ω_1 and small values of ω_2 tends to keep the same speed while neglecting the car. (ω_1,ω_2) represents a source of epistemic uncertainty: the personality of the pedestrian is randomized and does not change during the planning episode. Such interactions could be learned from data <cit.> and incorporated into our method.
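A minimal sketch of this interaction term is given below, reading the first term as ω_1(v^des_ped e_ped - v_ped), one common way to interpret the desired-velocity term; the default desired speed, direction, and the toy personality values are illustrative assumptions.

```python
import numpy as np

def pedestrian_force(p_ego, p_ped, v_ped, omega,
                     v_des=1.4, e_ped=np.array([1.0, 0.0])):
    # f(x, omega): omega = (omega_1, omega_2) is the sampled pedestrian "personality".
    keep_speed = omega[0] * (v_des * e_ped - v_ped)                  # return to desired velocity
    car_term = omega[1] * (p_ego - p_ped) / np.linalg.norm(p_ego - p_ped)  # direction as in the expression above
    return keep_speed + car_term

print(pedestrian_force(np.array([2.0, 0.0]), np.array([0.0, 1.0]),
                       np.array([1.0, 0.0]), omega=(1.0, -0.5)))
```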
The objective is reaching a destination p_ego^des (H(x(T))=p_ego(T)-p_ego^des) while minimizing control effort ℓ(x,u)=u^⊤ Ru and maintaining a minimum separation distance d_sep:
AVaR_α( sup_{t∈[0,T]} ( d_sep - ‖p_ego(t)-p_ped(t)‖ ) ) ≤ 0.
We discretize the problem with S=20 nodes
and report results in Figure <ref> and Table <ref>.
Reducing the risk parameter α yields safer trajectories that maintain a larger distance with the pedestrian at the expense of greater control effort. For all values of α, the constraints and corresponding joint chance constraints are satisfied. In contrast, a baseline that does not consider uncertainty is unable to maintain a safe separation distance with the pedestrian at all times. Computation times, reported in the appendix for conciseness, again demonstrate the real-time capabilities of the approach.
3) Legged navigation over uncertain terrain. Finally, we consider a legged robot whose goal is jumping as far as possible over uncertain terrain while limiting its risk of slippage.
We consider the hopper robot in <cit.> with x=(q,q̇), where q=(p_x, p_z, θ, r)∈ℝ^4, u=(τ,f)∈ℝ^2, and dynamics
Mq̈(t)+C=B(q(t))u(t)+J_c(q(t))^⊤λ(t),
x(0)=x^0,
where λ=(λ_x,λ_z)∈ℝ^2 are contact forces that must satisfy
λ_x(t) = λ_z(t) = 0  ∀ t∈𝒯_flight,
J_{c,x}(q(t))^⊤ q̇(t) = 0,  λ_z(t) ≥ 0  ∀ t∈𝒯_contact,
and  AVaR_α( sup_{t∈𝒯_contact} λ_x(t) - μ(q(t),ω) λ_z(t) ) ≤ 0.
These constraints encode the absence of contact force in the flight phase (<ref>), the no-slip and positive normal force conditions (<ref>), and a constraint on the risk of slippage (<ref>), over a pre-defined contact schedule 𝒯_flight,𝒯_contact⊂[0,T].
Characterizing terrain adhesion properties (i.e., terramechanics) is challenging and
is often done via data-driven approaches. For instance, <cit.> fit Gaussian processes to observed data for slip prediction. Thus, we model the soil properties at the contact point p_foot,x(q) given by the system's kinematics with the Random Fourier Features <cit.>
μ(q,ω)=μ̅+ ∑_n=1^30ω_n,1cos(ω_n,2· p_foot,x(q)+ω_n,3),
[Figure: Samples from μ.]
for some randomized parameters ω following a uniform distribution, see Figure <ref>.
The coefficient μ encodes epistemic uncertainty varying over space, which is more realistic than assuming a constant friction coefficient as in <cit.>. As discussed in Section <ref>, this type of uncertainty is challenging to deal with.
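One draw of this random terrain model can be sketched as follows; the mean friction, amplitude, and frequency ranges are illustrative assumptions, not the paper's sampled distributions.

```python
import numpy as np

def sample_friction_field(rng, mu_bar=0.7, n_features=30, amp=0.02, freq=3.0):
    # One random draw of mu(., omega) built from random Fourier features, as in the
    # expression above: mu_bar + sum_n w_{n,1} * cos(w_{n,2} * p_foot_x + w_{n,3}).
    w1 = rng.uniform(-amp, amp, size=n_features)       # amplitudes
    w2 = rng.uniform(0.0, freq, size=n_features)       # frequencies
    w3 = rng.uniform(0.0, 2 * np.pi, size=n_features)  # phases
    return lambda p_foot_x: mu_bar + np.sum(w1 * np.cos(w2 * p_foot_x + w3))

rng = np.random.default_rng(0)
mu = sample_friction_field(rng)
print([round(mu(x), 3) for x in np.linspace(0.0, 1.0, 5)])   # friction varies over space
```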
The objective consists of jumping as far as possible (i.e., φ(x(T))=-p_x(T)) while minimizing the control effort ℓ(x,u)=u_2^2 and limiting the risk of slippage (<ref>).
The resulting problem clearly takes the form of with no dynamics uncertainty (i.e., σ=0). Indeed, an important source of uncertainty in legged locomotion comes from uncertain terrain properties due to imperfect perception or inherent uncertainty in the problem.
We are not aware of prior work in trajectory optimization tackling this formulation.
We solve the sampled problem (S=M=30) with <cit.>, comparing with the solution from a deterministic baseline that only considers the mean terrain parameter μ(x,ω)=μ̅. We report results in Figure <ref>.
By reducing the risk parameter α, the trajectory optimizer is more cautious to avoid slip and returns shorter jumps,
with slippage probability bounded by α.
In contrast, the baseline violates constraints 50% of the time.
§ CONCLUSION
We proposed an efficient method for risk-averse trajectory optimization.
This algorithm hinges on a continuous-time formulation with average-value-at-risk constraints.
By approximating the problem using samples,
we obtain a smooth, sparse program that allows for efficient numerical resolution.
We demonstrated the speed and reliability of the method on problems with sources of epistemic and aleatoric uncertainty that are challenging to tackle with existing approaches.
Due to its generality in handling uncertain nonlinear dynamics and constraints, this work
opens exciting future research directions.
In particular, the approach could be interfaced with learned models:
obstacles could be implicitly represented as deep SDFs <cit.> or neural radiance fields <cit.>,
deep trajectory forecasting models <cit.>
could be used for planning in autonomous driving,
learned terramechanics models <cit.> could allow more robust legged locomotion,
and dynamics could be learned online,
either to improve control performance <cit.> or for active data-gathering <cit.>
under risk constraints.
§.§ Proofs of Theorem <ref> and of Lemma <ref>
Thanks to 1-4, the solution to is well-defined <cit.> and all functions of interest are measurable (see <cit.>. In particular, the map ω∈Ω↦ t+α^-1max(sup_s∈[0,T]G(x_u(s,ω),ξ(ω))-t,0)∈ is measurable for all (u,t)∈× since z↦max(z, 0) is continuous).
Further, using Kolmogorov's Lemma <cit.>, one can show that up to a modification, the map from controls u∈ to state trajectories x_u is Hölder continuous (for the uniform norm on the space of continuous functions C([0,T],^n)) with probability one, see <cit.>.
Thanks to 2 and since the max is 1-Lipschitz continuous, the functions G and H are Lipschitz continuous. Thus, the maps (u,t)↦ t+α^-1max(sup_s∈[0,T]G(x_u(s),ξ)-t,0) and u↦ H(x_u(T)) are Hölder continuous with probability one.
Thanks to 2,
the optimal risk parameter t^⋆ is achieved in a bounded interval 𝒯⊂, see
<cit.>. Thus, thanks to 3, is a stochastic program over a compact subset of ^z+1.
The conclusion follows from <cit.>.
§ EXPECTATIONS CONCENTRATION INEQUALITIES
The proof of Lemma <ref> relies on a concentration inequality from <cit.>. Let (Ω,,) be a probability space,
d∈,
⊂^d be a compact set,
h:×Ω→ be Carathéodory (i.e., h(u,·) is -measurable for all u∈ and h(·,ω) is continuous -almost-surely), and consider the following two assumptions.
B1(B1) -almost-surely, the map u↦ h(u,ω) is α̃-Hölder continuous for some exponent α̃∈(0,1] and Hölder constant M(ω) satisfying [M(·)^2]<∞, such that
|h(u_1,ω)-h(u_2,ω)|≤ M(ω)u_1-u_2_2^α̃ ∀ u_1,u_2∈.
B2(B2) For some h̅<∞, -almost-surely, sup_u∈ |h(u,ω)|≤h̅.
Let {ω^i}_i=1^M be M∈ independent and identically distributed (iid) samples of ω.
We have the following result.
Assume that h satisfies 1 and 2. Let D=2sup_u∈u_2, C=256, u_0∈, Σ_0 denote the covariance matrix of h(u_0,·), and C̃=(CD^α̃+1/2d^1/2[M^2]^1/2α̃^-1/2 + Trace(Σ_0)^1/2). Let ϵ>0, β∈(0,1), and the sample size M∈ be such that M≥ϵ^-2(C̃+h̅(2log(1/β))^1/2)^2.
Then, with -probability at least 1-β over the M iid samples ω^i,
sup_u∈|1/M∑_i=1^Mh(u,ω^i)-[h(u,·)]| ≤ ϵ.
Lemma <ref> provides a high-probability finite-sample error bound for the sample average approximation of the expected value of h(u,·) that holds uniformly over all u∈.
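As a quick numerical illustration (not from the paper), the sample-size requirement M ≥ ε^-2(C̃ + h̅(2 log(1/β))^1/2)^2 can be evaluated directly; the constants C̃ and h̅ below are placeholders, since their values are problem dependent.

```python
import numpy as np

def min_sample_size(eps, beta, c_tilde, h_bar):
    """Smallest integer M satisfying M >= eps^-2 * (c_tilde + h_bar*sqrt(2*log(1/beta)))^2."""
    return int(np.ceil((c_tilde + h_bar * np.sqrt(2.0 * np.log(1.0 / beta))) ** 2 / eps ** 2))

print(min_sample_size(eps=0.05, beta=0.01, c_tilde=1.0, h_bar=1.0))  # placeholder constants
```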
To prove Lemma <ref>, we define the map Z:×Ω→ by
Z_u(ω)=sup_s∈[0,T] G(x_u(s,ω),ξ(ω)),
which is bounded and Hölder continuous under assumptions 1-4. Then, given a large-enough compact set 𝒯⊂, we define the measurable map g:(𝒯×)×Ω↦,
g((t,u),ω)↦ t+α^-1max(Z_u(ω)-t, 0).
Since g is a composition of Hölder continuous functions
and 𝒯 is bounded, g is also bounded and Hölder continuous, i.e., g satisfies assumptions 1 and 2 over (t,u)∈𝒯×𝒰.
Given M iid samples ω^i∈Ω, formulate the sample average approximation _M() of .
Define the map g as in (<ref>).
Let ϵ>0 and β∈(0,1).
Then, under assumptions 1-4, assuming M≥ϵ^-2(C̃+h̅(2log(1/β))^1/2)^2 for (C̃,h̅) large-enough,
sup_(t,u)∈𝒯×|1/M∑_i=1^Mg((t,u),ω^i)-[g((t,u),·)]| ≤ϵ
with -probability at least (1-β).
Under assumptions 1-4, g satisfies Assumptions 1-2. The conclusion follows from Lemma <ref>.
The desired result (Lemma <ref>) follows
from Corollary <ref>.
Proof of Lemma <ref>:
For any u∈𝒰, define the random variable Z_u(ω)=(<ref>), so that
_α(sup_s∈[0,T] G(x_u(ω̅)(s),ξ)) = _α(Z_u(ω̅)).
Then, given an interval 𝒯 large-enough containing the optimal risk parameter t^⋆ of
(see the proof of Theorem 1 and <cit.>), define the function g:(𝒯×)×Ω↦ as in (<ref>). Denote the solution to _M(ω̅) by (t(ω̅),u(ω̅)) and
g((t(ω̅),u(ω̅)),ω) = t(ω̅)+α^-1max(Z_u(ω̅)(ω)-t(ω̅),0).
Since (t(),u()) solves _M(ω̅),
1/M∑_i=1^Mg((t(ω̅),u(ω̅)),ω^i) = t(ω̅)+1/α M∑_i=1^Mmax(Z_u(ω̅)(ω^i)-t(ω̅),0) ≤ 0.
By Corollary (<ref>), with probability at least 1-β,
sup_(t,u)∈𝒯×|1/M∑_i=1^Mg((t,u),ω^i)-[g((t,u),·)]| ≤ϵ.
From the last two inequalities[Assuming t()∈𝒯, which can be enforced by restricting the search of solutions to _M() to the compact set ×𝒯 with 𝒯 arbitrarily large.], with probability at least 1-β,
[g((t(ω̅),u(ω̅)),·)] ≤ϵ.
We obtain that
_α(Z_u(ω̅)) = inf_t∈( t+α^-1[max(Z_u(ω̅)-t,0)] ) ≤ t(ω̅)+α^-1[max(Z_u(ω̅)-t(ω̅),0)] = [t(ω̅)+α^-1max(Z_u(ω̅)-t(ω̅),0)] = [g((t(ω̅),u(ω̅)),·)] ≤ ϵ
with probability at least 1-β, concluding the proof.
▪
§.§ Smooth reformulation (<ref>) of constraint in
is a deterministic non-convex program. Due to the max operation, its inequality constraint (<ref>), inherited from the average-value-at-risk constraint, is not smooth even if each term in G is smooth. Many optimization tools use gradient information and assume that constraints are twice differentiable, which is not the case with (<ref>).
This constraint can be smoothed by introducing M auxiliary variables y_i∈ and reformulating (<ref>) as follows:
t + 1/(α M) ∑_i=1^M max( max_k=0,…,S G(x_u^i(kΔ t),ξ^i) - t, 0 ) ≤ 0,
which is equivalent to
(Mα)t + ∑_i=1^M y_i ≤ 0,   y_i ≥ 0  ∀ i=1,…,M,   y_i ≥ max_k=0,…,S G(x_u^i(kΔ t),ξ^i) - t  ∀ i=1,…,M,
which, writing the maximum over the components G_j of G explicitly, is equivalent to
(Mα)t + ∑_i=1^M y_i ≤ 0,   y_i ≥ 0  ∀ i=1,…,M,   y_i ≥ max_k=0,…,S^j=1,…,N G_j(x_u^i(kΔ t),ξ^i) - t  ∀ i=1,…,M,
and finally to
(Mα)t + ∑_i=1^M y_i ≤ 0,   0 ≤ y_i  ∀ i=1,…,M,   G_j(x_u^i(kΔ t),ξ^i) - t ≤ y_i  ∀ i=1,…,M, ∀ j=1,…,N, ∀ k=0,…,S.
(<ref>)
In contrast to (<ref>), the set of constraints (<ref>) is smooth if every G_j is smooth. With (<ref>) and the additional variable y=(y_1,…,y_M), we reformulate as follows:
_M()   inf_u∈, t∈, y∈^M (<ref>)(u) such that (<ref>), (<ref>), (<ref>), (<ref>).
can be solved using off-the-shelf optimization tools.
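As a sanity check of this equivalence (a sketch, not part of the paper's implementation), the following Python snippet compares the nonsmooth sample-average expression with the linear epigraph form for a single scalar constraint; the sampled values Z_i stand in for max_{j,k} G_j(x_u^i(kΔt),ξ^i) and are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, alpha = 30, 0.1
Z = rng.normal(loc=-1.0, scale=0.5, size=M)   # synthetic sampled constraint values

# (a) nonsmooth form: min_t  t + 1/(alpha*M) * sum_i max(Z_i - t, 0)  (coarse grid search)
ts = np.linspace(Z.min(), Z.max(), 2001)
direct = np.min(ts + np.maximum(Z[None, :] - ts[:, None], 0.0).sum(axis=1) / (alpha * M))

# (b) epigraph form: minimize t + 1/(alpha*M) * sum_i y_i  s.t.  y_i >= 0,  Z_i - t <= y_i
c = np.concatenate(([1.0], np.full(M, 1.0 / (alpha * M))))
A_ub = np.hstack([-np.ones((M, 1)), -np.eye(M)])   # encodes Z_i - t - y_i <= 0
b_ub = -Z
bounds = [(None, None)] + [(0.0, None)] * M        # t free, y_i >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(direct, res.fun)   # the two values agree up to the grid resolution
```

The sampled risk constraint is then simply the requirement that this common value be non-positive.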
§.§ Additional implementation details and results
We open-source code to reproduce the experiments in Section <ref> at 0.965<https://github.com/StanfordASL/RiskAverseTrajOpt>.
Our implementation is in Python and uses JAX <cit.> to evaluate constraints gradients.
To solve , for the drone planning and autonomous driving problems, we use a standard sequential convex programming (SCP) scheme. We use <cit.> to solve the convexified problems at each SCP iteration. For the hopper system, we solve using <cit.> due to its improved numerical stability compared to SCP.
§.§.§ Drone planning problem
The computation time generally depends on the choice of solver, desired accuracy, and use case. For example, using our proposed approach within a risk-aware MPC controller would allow warm-starting for faster convergence, see also <cit.>.
In this work, we use u̅_s=0 as a zero initial guess, disable the obstacle avoidance risk constraint for the first two SCP iterations to obtain more robust convergence, set the error tolerance threshold for to 10^-3, and use the normalized L^2-error over control iterates u^k-u^k-1/u^k to detect the convergence of SCP.
We report computation times and accuracy statistics in Figure <ref>,
evaluated on a laptop with an i7-10710U CPU (1.10 GHz) and 16 GB of RAM. We observe that 10 SCP iterations are sufficient to obtain accurate results. Albeit our implementation is written in Python, computation times are reasonable and amenable to real-time applications. By warm-starting within an MPC loop, one could achieve a replanning frequency of 30 Hz using M=30 samples and by returning the solution after a single SCP iteration.
Finally, computation time scales roughly linearly in the number of samples. Parallelization (e.g., on a GPU) would enable using a larger number of samples M while retaining speed, although our results show that a small number of samples is sufficient to obtain feasible solutions.
§.§.§ Autonomous driving problem
Computation times and SCP statistics are reported in Figure <ref>.
Results are comparable to those for the drone planning problem, again demonstrating the real-time capabilities of the approach. Computation times do not appear to be sensitive to the risk parameter α.
§.§.§ Hopper robot problem
We consider the hopper robot in <cit.> with x=(q,q̇) where q=(p_x, p_z, θ, r)∈^4, u=(τ,f)∈^2. The system follows deterministic dynamics
Mq̈(t)+C=B(q(t))u(t)+J_c(q(t))^⊤λ(t),
x(0)=x^0,
(<ref>)
where M∈^4× 4 is the inertia matrix, C∈^4 comprises of Coriolis and conservative forces, B(q(t))∈^4× 2 is the control matrix, J_c(q(t))∈^4× 2 is the contact Jacobian, and
λ=(λ_x,λ_z)∈^2 are contact forces that must satisfy (<ref>).
Monte-Carlo validation of the satisfaction of the risk constraint for the trajectories represented in Figure <ref> (obtained with a sample size M=30) is provided in Table <ref>.
|
http://arxiv.org/abs/2307.02024v2
|
20230705052008
|
Ageing and Quenching through the ageing diagram II: physical characterization of galaxies
|
[
"Pablo Corcho-Caballero",
"Yago Ascasibar",
"Luca Cortese",
"Sebastián F. Sánchez",
"Ángel López-Sánchez",
"Amelia Fraser-McKelvie",
"Tayyaba Zafar"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Topological classes of thermodynamics of the four-dimensional static accelerating black holes
Di Wu
August 1, 2023
=============================================================================================
The connection between quenching mechanisms, which rapidly turn star-forming systems into quiescent ones, and the properties of the galaxy population remains difficult to discern. In this work we investigate the physical properties of MaNGA and SAMI galaxies at different stages of their star formation history. Specifically, we compare galaxies with signatures of recent quenching (Quenched) – Hα in absorption and low D_n(4000) – with the rest of the low star-forming and active population (Retired and Ageing, respectively). The analysis is performed in terms of characteristics such as the total stellar mass, half-light radius, velocity-to-dispersion ratio, metallicity, and environment. We find that the Ageing population comprises a heterogeneous mixture of galaxies, preferentially late-type systems, with diverse physical properties. Retired galaxies, formerly Ageing or Quenched systems, are dominated by early-type high-mass galaxies found both at low and dense environments. Most importantly, we find that recently quenched galaxies are consistent with a population of compact low-mass satellite systems, with higher metallicities than their Ageing analogues. We argue that this is compatible with being quenched after undergoing a star-burst phase induced by environmental processes (e.g. ram pressure). However, we also detect a non-negligible fraction of field central galaxies likely quenched by internal processes. This study highlights that, in order to constrain the mechanisms driving galaxy evolution, it is crucial to distinguish between old (Retired) and recently quenched galaxies, thus requiring at least two estimates of the specific star formation rate over different timescales.
galaxies: star formation – galaxies: evolution – galaxies:stellar content – galaxies: general
§ INTRODUCTION
The connection between the star formation history (SFH) of galaxies and their physical properties (e.g. total stellar mass, chemical composition, morphology, kinematics, environment) has been extensively studied during the last few decades <cit.>.
However, due to the complex mixture of physical processes acting upon galaxies, drawing a complete picture of galaxy evolution is still very challenging.
For a number of physical properties the statistical distribution of galaxies appears to be bimodal, e.g. light concentration <cit.> or disk vs spheroidal morphology <cit.>, UV to optical colours and absorption line features <cit.>, or internal kinematics <cit.>.
This dichotomy has also been translated into the plane formed between the total stellar mass, , and the specific star formation rate (sSFR, where ≡SFR/) <cit.>.
Galaxies are divided into those along the so-called Main Sequence of star formation (MS) <cit.>, i.e. a tight relation between the total stellar mass and the star formation rate (SFR), and a detached population of “passive” galaxies scattering below the MS with lower levels of star formation.
Nevertheless, several recent studies showed that the bimodal interpretation (in terms of several properties) might be too simplistic <cit.>.
While the fact that galaxies form a bimodal population in some parameter space could imply a common evolutionary path, it is also possible that this dichotomy is driven by the lack of sensitivity of the observables, unable to capture the diversity of mass assembly histories.
The bimodal interpretation poses the existence of some “quenching” processes able to terminate star formation in a galaxy in multiple ways, converting star-forming (blue) galaxies into passive (red) ones.
During the last years, a large number of physical processes have been proposed as potential agents that shut down star formation.
On the one hand, galaxies might quench due to environmentally-driven processes such as ram pressure stripping <cit.>, strangulation/starvation <cit.> or galaxy interactions <cit.>, see <cit.>, for a review.
On the other hand, internally-triggered quenching mechanisms such as negative feedback from Active Galactic Nuclei (AGN), supernovae driven winds <cit.> or the stabilization of the gas against fragmentation <cit.>, might also cause the demise of star formation in galaxies.
It is important to remark that, although most of these processes are thought to bring star formation to a halt, they can also produce the opposite effect; to enhance star formation efficiency and lead to the depletion of the gas reservoir in very short timescales <cit.>.
Moreover, the term “quenching” is loosely defined, and different meanings can be found in the literature.
In this work, we will refer to quenching as a process able to terminate[or significantly suppress, provided a galaxy never completely shuts off its star formation <cit.>.] the star formation processes of a galaxy in a short timescale compared to the age of the Universe (i.e., ≲ 1 Gyr).
In contrast, we use the term ageing to denote the continuous evolution of a galaxy driven by the consumption of the gas reservoir – encompassing different evolutionary stages from starburst to quiescent phases – through uninterrupted star formation.
The ageing scenario implies that all galaxies eventually become dominated by old stellar populations, turning red, without the need of a particular event that truncates star formation <cit.>.
Unfortunately, discriminating between ageing galaxies and systems whose quenching timescales are larger than ∼ 1 Gyr, often referred to as “slow quenching” <cit.>, is in practice extremely challenging, if not impossible, and therefore we will focus on galaxies that show evidence of recent “fast quenching”.
There are numerous examples in the literature of attempts to identify quenching in the Universe.
Some previous studies employed a combination of UV to IR photometric measurements to provide a division in terms of star-forming and quenched galaxies <cit.>.
We would like to argue that this method does not offer a clean separation between “star forming” and “quenched galaxies” unless one assumes that all red systems were in fact quenched <cit.>.
Closer to our approach, there have recently been several attempts to characterise the time derivative of the SFH in nearby galaxies <cit.>.
These works built large sets of mock SFHs to derive synthetic observables, such as broad-band colours, and/or spectral features like , EW(Hδ), and .
Then, multi-dimensional polynomials and different regression techniques were employed to infer the average star formation rate over different timescales.
Building upon several observational samples of nearby galaxies (z≲ 0.1), we developed in previous works a complementary approach: instead of deriving quantitative estimates of recent changes in the SFR, we proposed an empirical diagnostic diagram to discriminate between fast and slow evolution <cit.>.
The Ageing Diagram (AD) combines two proxies for star formation, sensitive to different timescales, to probe the derivative of the recent SFH of galaxies during the last ∼1-3 Gyr.
We use the raw – including both absorption and emission components – to trace the recent star formation during the last ∼10^6-10^7 yr, while we employ optical colours or to trace the sSFR over the last ∼10^9 yr.
Systems whose SFR varies smoothly over time will arrange along a sequence given by a tight correlation between both proxies.
In contrast, galaxies that experienced recent quenching episodes will feature significantly smaller values of , due to the dearth of O and B stars, while still displaying a stellar continuum dominated by intermediate stellar populations (roughly corresponding to A-type stars).
This method is also similar to the classical approach adopted for selecting post-starburst galaxies <cit.>, systems with prominent absorption Balmer lines (EW(Hδ)< 5, strongly correlated with ) and mild to null nebular emission.
Therefore, under our prescription, PSBGs would roughly correspond to the blue end of the quenched sequence.
As noted in , our qualitative classification is complementary to the numerical estimation of the time derivative of the (NUV-i) colour <cit.> or SFR <cit.>, or the ratio between the SFR averaged between different timescales <cit.>).
In , we performed a statistical analysis of SDSS fibre spectra and showed that the vast majority of galaxies lie across the ageing sequence – consistent with smoothly declining SFHs spanning a wide range of sSFR values.
The spatially resolved distribution of CALIFA galaxies across the AD was studied in , where we found that only a handful of low-mass early-type galaxies were fully quenched but the reduced number of systems in this range precluded from any robust conclusion.
Finally, in we studied the distribution of CALIFA, MaNGA, and IllustrisTNG galaxies across the AD using their integrated spectra within one effective radius, and its connection with their star formation histories and evolutionary timescales.
We proposed two demarcation lines used to classify galaxies into four AD domains: Ageing galaxies (AGs); systems undergoing secular evolution, Undetermined galaxies (UGs); an intermediate class populated with galaxies with unclear classification, Quenched galaxies (QGs); systems that experienced some quenching event during the last ∼ 1 Gyr, and Retired galaxies (RGs); located at the red end where ageing and quenched sequences converge, whose recent past becomes extremely challenging to infer.
Following the findings of , in the present work we aim to provide a physical characterization of the different galaxy populations selected by means of the AD, and explore the connection with their evolutionary status.
In Section <ref> we describe the and galaxy samples under study and the derived quantities that are used to describe their evolutionary status.
Section <ref> provides a comparison of the AD classification with the distribution of galaxies across the -plane.
In Section <ref> we show the overall distribution of each AD population in terms of all the properties under study.
Section <ref> presents the analysis of the trends found for each property as a function of stellar mass and environment, including the relative fraction of each AD population.
In Sections <ref> and <ref> we discuss the possible dominant quenching mechanisms and the limitations of our analysis, respectively.
Finally, we provide a summary of the main findings in Section <ref>.
Throughout this work, we adopt a ΛCDM cosmology, with H_0=70 km s^-1 Mpc^-1 and Ω_m = 0.3.
§ METHODS
§.§ Galaxy samples
§.§.§ MaNGA
The Mapping Nearby Galaxies at Apache Point Observatory survey <cit.> is one of the fourth-generation Sloan Digital Sky Survey (SDSS) core programs, which was able to measure spatially resolved spectroscopy of ∼10000 galaxies.
The instrument for carrying out the survey employs 17 fibre-bundle integral field units (IFU) that vary in diameter from 12” to 32” (19 to 127 fibers per IFU) with a wavelength coverage over 3600-10300 Å at R∼2000 <cit.>.
The first principle that motivated the sample selection consists of getting a large enough number of galaxies to fill six bins of stellar mass, SFR and environment (216 bins) with 50 objects respectively <cit.>.
Additionally, galaxies are selected following an approximately flat distribution in terms of stellar mass in the range 9 < log_10(M_*/M_⊙)< 12 (based on the K-corrected i-band).
In terms of the spatial extent, approximately two thirds of the total sample were observed up to 1.5 (Primary sample) while ∼ one third is covered up to 2.5 (Secondary sample).
In addition, a third sample was selected to overpopulate the number of green valley systems; currently between the star-forming and passive populations, comprising ∼10% of the total sample.
These considerations translate into a sample of galaxies that ranges in redshift between 0.01 ≲ z ≲ 0.15.
In this work we are using the complete MaNGA release, as part of the 17th SDSS data release <cit.>, that comprises more than 10,000 galaxies.
We end up with a total sample of ∼8,600 galaxies after selecting those with reliable estimates (see Section <ref>).
We will use the publicly available volume weights provided in <cit.>, computed on an “a posteriori” basis as described in <cit.>, in order to perform the statistical analysis using a volume-corrected sample in Section <ref>.
§.§.§ SAMI
The Sydney-Australian Astronomical Observatory Multi-object Integral field spectrograph (SAMI) galaxy survey <cit.> was carried out between 2013 to 2018 at the 3.9-m Anglo Australian Telescope at Siding Spring Observatory.
The SAMI instrument has 13 optical fibre bundles, covering a total of ∼ 1 degree across the FoV.
Each bundle comprises 61 optical fibres (1.6 in diameter) that cover a circular FoV of 15,
feeding the AAOmega spectrograph <cit.>.
The data is stored in the form of two datacubes corresponding to each of the spectrograph arms, red and blue, with a wavelength coverage of 3700-5700 (R∼1800) and 6250-7350 (R∼4300), respectively.
Roughly, two thirds of the total sample (∼3000 galaxies) were selected from the GAMA equatorial fields (G09, G12 and G15), part of the Galaxy and Mass Assembly survey <cit.>.
Target GAMA galaxies where selected based on cuts of stellar mass (8 ≤≤ 11) as function of redshift (0.004≤ z ≤ 0.12), with the Primary and Secondary (including lower stellar masses at higher redshifts) samples restricted to z<0.095 and z<0.12, respectively.
The rest of the sample was selected from eight nearby clusters: a Primary sample with R<R_200 and > 9.5 or > 10 for z_ clus < 0.045 and z_ clus > 0.045, respectively; and a Secondary sample <cit.>.
In this work we use the third and final data release <cit.>, comprising 2100 and 888 galaxies from the GAMA and cluster regions, respectively.
§.§ AD classification
The proxies used for computing the Ageing Diagram – , –, were measured following the same scheme outlined in <cit.>.
The Balmer Break index at 4000 Å is defined as
D_n(4000) ≡⟨ F_λ⟩_4050-4250 Å/⟨ F_λ⟩_3850-3950 Å,
where ⟨ F_λ⟩ denotes the average flux density, computed within the wavelength ranges 3850-3950 Å, and 4050-4250 Å, respectively.
The raw equivalent width, including both absorption and emission components, is defined as
EW(Hα)≡∫_6550 Å^6575 Å [ F_λ(λ) / ( (F_B λ_R - F_R λ_B)/(λ_R - λ_B) + λ (F_R - F_B)/(λ_R - λ_B) ) - 1 ] dλ,
where F_ B and F_ R correspond to the mean flux per unit wavelength computed in the 6470-6530 Å and 6600-6660 Å bands, with central wavelengths λ_ B=6500 Å and λ_ R=6630 Å, respectively.
Under this definition, positive and negative values of EW denote emission and absorption, respectively.
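For concreteness, a minimal Python sketch of these two measurements on a rest-frame spectrum (wavelength in Å, flux per unit wavelength) could look as follows; pixel masking, error propagation, and the aperture extraction itself are omitted, and the function names are ours.

```python
import numpy as np

def dn4000(wave, flux):
    """Balmer break index: mean flux in 4050-4250 A over mean flux in 3850-3950 A."""
    red = (wave >= 4050) & (wave <= 4250)
    blue = (wave >= 3850) & (wave <= 3950)
    return flux[red].mean() / flux[blue].mean()

def ew_halpha(wave, flux):
    """Raw EW(Halpha): line flux over a linear pseudo-continuum interpolated between
    the 6470-6530 A and 6600-6660 A sidebands, minus one, integrated over 6550-6575 A.
    Positive values denote emission, negative values absorption."""
    lam_b, lam_r = 6500.0, 6630.0
    f_b = flux[(wave >= 6470) & (wave <= 6530)].mean()
    f_r = flux[(wave >= 6600) & (wave <= 6660)].mean()
    line = (wave >= 6550) & (wave <= 6575)
    continuum = ((f_b * lam_r - f_r * lam_b) / (lam_r - lam_b)
                 + wave[line] * (f_r - f_b) / (lam_r - lam_b))
    return np.trapz(flux[line] / continuum - 1.0, wave[line])
```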
To derive these quantities we use integrated elliptical aperture spectra, based on photometric Sérsic fits, restricted to 1 along the semi-major axis.
For galaxies we use the estimates provided by the collaboration <cit.>.
Likewise, we use the ellipticity, and effective radius reported values from the NSA catalog[https://www.sdss4.org/dr17/manga/manga-target-selection/nsa/NASA-Sloan Atlas catalogue] <cit.> to compute the integrated spectra of galaxies.
Finally, in order to classify each galaxy in terms of the four AD domains, we employ the ageing and quenched demarcation lines presented in :
Ageing: EW(Hα)/Å = 250.0 · 10^-1.2 · D_n(4000) - 4.3,
Quenched: EW(Hα)/Å = -12.0 · 10^-0.5 · D_n(4000) + 1.8,
According to these expressions, galaxies located above the two lines are classified as Ageing.
Objects placed below eq. (<ref>) and above eq. (<ref>) are denoted as Undetermined.
Retired systems are placed below and above the ageing and quenched lines, respectively.
Finally, Quenched galaxies correspond to the systems located below both demarcation lines.
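A minimal Python sketch of this classification is given below. The Ageing and Quenched domains follow directly from the quoted demarcation lines; assigning the intermediate region to Undetermined on the blue side of the lines' crossing and to Retired at the red end, where the two sequences converge, is our reading of the description above rather than a formula from the paper.

```python
def ad_domain(ew_ha, dn4000):
    """Tentative Ageing Diagram domain from raw EW(Halpha) (Angstrom, emission positive)
    and Dn(4000), using the demarcation lines quoted above."""
    ageing_line = 250.0 * 10.0 ** (-1.2 * dn4000) - 4.3
    quenched_line = -12.0 * 10.0 ** (-0.5 * dn4000) + 1.8
    if ew_ha >= ageing_line and ew_ha >= quenched_line:
        return "Ageing"        # above both demarcation lines
    if ew_ha < ageing_line and ew_ha < quenched_line:
        return "Quenched"      # below both demarcation lines
    # between the lines: assumed Undetermined on the blue side of the crossing,
    # Retired on the red side where the ageing and quenched sequences converge
    return "Undetermined" if ageing_line > quenched_line else "Retired"

print(ad_domain(30.0, 1.2), ad_domain(-3.0, 1.4), ad_domain(0.2, 1.9))
```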
§.§ Physical quantities
The properties that will be used to characterise the galaxy populations derived from the AD are: the Balmer Break index, ; the raw equivalent width ; total stellar mass ; the Petrosian radius containing the 50% and 90% of light in the r-band, respectively (, ); integrated stellar metallicity within 1, ; the ratio between the velocity and velocity dispersion within 1, ; fifth-nearest-neighbour surface density, , and the classification between central and satellite systems.
They comprise a set of proxies tightly connected to the current evolutionary state of each galaxy.
Below we provide a description of the source and methodology used to derive each quantity for both samples.
§.§.§ Stellar properties
The total stellar mass, , for galaxies was selected from the NSA catalog <cit.>, estimated using multiwavelength photometry from UV to IR <cit.>.
For galaxies, the stellar masses were estimated using an empirical relation between optical colours and stellar mass <cit.>.
We use the r band Petrosian radius containing the 50 and 90 percent of total light, respectively, to characterize the spatial extent of galaxies and define the concentration index C=R_90/R_50 used as a proxy for morphological class.
For galaxies we use the estimates provided in the NSA catalog.
For GAMA galaxies we employ the values provided by the GAMA collaboration[ http://www.gama-survey.org/dr4/schema/table.php?id=684DR4 GAMA Science Catalogue] <cit.>.
Regarding Cluster galaxies, we only use the values of provided in <cit.>, as there are no publicly available estimates of 90.
Stellar metallicities, , and kinematic quantities, i.e. , were computed by means of <cit.> for galaxies, as described in <cit.>.
For , we use <cit.> estimates of <cit.>, measured from luminosity-weighted aperture spectra extracted from 1 circular apertures where possible, otherwise they were measured from the closest possible aperture in radius.
We use the stellar metallicities reported in <cit.>,
estimated within one circular aperture by matching a set of observed Lick indices to various SSP models <cit.>.
§.§.§ Environment
We compute the projected density to the 5th nearest neighbour, , for galaxies using the estimates provided by the Galaxy Environment for MaNGA Value Added Catalogue <cit.>.
After defining a primary volume-limited sample of galaxies, the projected distance to the fifth nearest neighbour, d_5, was estimated using a circular aperture of 5 Mpc and a line-of-sight velocity difference of 500 km/s.
Since not all galaxies have a detected fifth neighbour within the adopted range (roughly ∼30 percent of the sample), some of the distances correspond to lower neighbour ranks (i.e. d_5 = d_n, with n<5), and their projected densities will correspond to:
= n / (π d_n^2)
In addition, galaxies flagged as isolated, with no density estimates, are assigned the lowest value of the sample.
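In code, this density estimate is a one-liner (a sketch; projected distances in Mpc, function name ours):

```python
import numpy as np

def surface_density(d_n_mpc, n=5):
    """Projected surface density (galaxies per Mpc^2) from the distance to the n-th
    nearest neighbour; n may be smaller than 5 when fewer neighbours are detected."""
    return n / (np.pi * d_n_mpc ** 2)
```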
For galaxies, we use estimates of taken from <cit.>.
They were computed both for GAMA and Cluster regions by specifying a density-defining population with absolute r-band magnitudes M_r< -18.6 or M_r< -19.0 (when including the secondary targets).
Then, the surface number density for each SAMI galaxy is determined by computing the comoving projected distances to the fifth nearest neighbour (without limiting the range to any given aperture).
In addition, the values of were corrected for spectroscopic incompleteness (with corrections smaller than ∼15 percent).
The separation between centrals and satellites is restricted to the GAMA sample and is based on the group catalog[Taken from http://www.gama-survey.org/dr4/data/cat/GroupFinding/v10/G3CGal (v10)] provided by <cit.>.
For cluster galaxies, we will assume that all systems can be classified as satellites with the exception of the brightest central galaxies on each cluster (BCGs).
§ RESULTS
§.§ From the AD to the M_⋆-sSFR plane
In this section we show the connection between the AD classification and the M_⋆-sSFR plane.
We restrict our analysis to the sample as we apply the volume corrections provided in <cit.> in order to compare with previous work <cit.>.
We employ two estimates of the sSFR that correspond to different time averages of the recent star formation history of the galaxy.
On the one hand, we use the sSFR derived from the intensity of the line, tracing in principle the fraction of O and B stars that ionise the ISM.
This roughly corresponds to an average of the SFR over the last ≲20-30 Myr <cit.>; sSFR().
On the other hand, we use SFR estimates derived from the stellar population synthesis using , that accounts for the average SFR over the last ∼ 30-100 Myr; sSFR( SSP).
More precisely, we use the logarithmic mean between these two values, where we observe the best match with sSFR() for AGs.
As expected, we show in Figure <ref> that both of our observational proxies present a strong correlation with the sSFR estimates.
By construction, is directly connected to sSFR(), and galaxies with > 0 show a one-to-one correspondence between both quantities.
Below this threshold, roughly equivalent to sSFR∼ 3× 10^-12 yr^-1, the estimates of sSFR() are dominated by the residual emission left after the stellar continuum has been subtracted.
These measurements are extremely uncertain, and it is not possible to discriminate Quenched and Retired systems, or even completely dead galaxies <cit.>.
However, when using sSFR( SSP) – tightly correlated with – we find that the majority of Quenched and Retired galaxies display values above and below ∼ 10^-11 yr^-1, respectively, albeit the separation according to this threshold, often used in the literature to discriminate between “star-forming” and “passive” populations, is far from perfect: sSFR() < 3× 10^-12 yr^-1 selects both Quenched and Retired systems, whereas sSFR( SSP) < 10^-11 yr^-1 includes a mixture of Ageing and Retired objects.
In the M_⋆-sSFR plane, Ageing galaxies arrange along the MS, with mean values around 10^-10 yr^-1.
Undetermined systems are on average below the Ageing population, but they would still be classified as MS systems, while Quenched and Retired galaxies correspond to a population that extends from the so-called Green Valley (i.e. sSFR ∼ 10^-11 yr^-1) to residual levels of star formation.
However, the overall bi-dimensional distribution across the plane is strongly dependent on the SFR tracer and the corresponding timescale <cit.>.
In our case, sSFR() is obviously more appropriate than sSFR( SSP) to identify QGs, but imposing a threshold based on only one timescale (even if the adopted threshold depends on M_⋆, parallel to the MS) is not guaranteed to identify galaxies that suffered some quenching event, as it will include a fraction of Retired galaxies that may also form through the Ageing channel.
Moreover, we would also like to note that measurement errors, as well as physical fluctuations in the star formation rate <cit.>, may play a role in the observed sSFR on short timescales.
§.§ Physical properties in terms of AD class
The main purpose of this work is to provide a physical characterization of the galaxy populations classified by the Ageing Diagram (AD), following the scheme presented in <cit.>.
We have selected a range of physical properties that are known to be tightly connected with the evolutionary status of galaxies (see <ref>).
In order to provide a summarized view, we show in Figures <ref> and <ref> the distribution of each pair of quantities for and galaxy samples, respectively.
Both samples are kept separated during the rest of the paper.
This is done to prevent potential biases driven by their distinct sample selection criteria, as well as the different methodological approaches employed to derive the physical quantities.
The total probability distribution is illustrated by the grey solid contours, while the individual distributions of Ageing, Undetermined, Quenched and Retired populations are denoted as coloured contours.
The galaxy classification is based on their location across the plane formed by the -.
This is most clearly seen on the top left panels of both figures, where we include the demarcation lines (black solid lines) used to classify the galaxies into four different domains as presented in .
As a general remark, we find that both samples provide broadly consistent distributions at all panels.
Table <ref> also includes the median and dispersion values (16 and 84 percentiles) for each property, in terms of Ageing, Undetermined, Quenched and Retired populations, in both observational samples.
As shown before, the distribution as function of total stellar mass, , and or is tightly connected to the distribution of galaxies in terms of and sSFR.
The majority of previous studies focusing on the M_⋆-sSFR relation <cit.> usually divided the sample into star-forming (sSFR≳ 10^-11 yr^-1) and passive populations (sSFR≲ 10^-11 yr^-1), which could be interpreted as the division between galaxies with values above or below ∼1.6 or ∼0, in line with the bimodal paradigm where galaxies are either hosting star formation processes or not <cit.>.
Although in statistical terms this seems a reasonable approach, as the overall probability distributions of both and , integrated over all galaxy masses, show two distinct peaks, here we show that our diagram is able to go one step further in discriminating between those systems that underwent recent quenching (i.e. a sharp truncation of the SFH) and those that evolved secularly over the last ∼3 Gyr <cit.>.
Under our prescription, the Ageing domain roughly corresponds to the so-called Main Sequence of Star Forming galaxies (MS), extending from low to high mass systems (≤ 10^11) whose star formation activity is consistently traced by our two observational proxies, probing different time scales.
On the other hand, as mentioned above, the Quenched population is predominantly made of galaxies in the mass range 9 ≲≲ 10.5, forming stars according to but featuring negligible sSFR in terms of .
Both populations, clearly separated in the AD, converge to the Retired domain towards the high mass end, thus implying that not all red sequence systems were necessarily quenched in the past.
The Retired domain – mainly composed of massive galaxies with ≳ 10 – harbours a combination of formerly quenched systems and formerly Ageing systems that have gradually consumed most of their gas reservoir.
Regarding the spatial extent of stars in galaxies, we find that Quenched systems have systematically smaller – roughly between 1 to 3 kpc – than the rest of the populations, that span over a wide range of values from 1 to 10 kpc.
On the other hand, the plane defined by the concentration index and presents a “V” shape.
Ageing and Retired galaxies roughly correspond to the edges (illustrating the classical distinction between disk and spheroidal morphologies), while QGs clump around the vertex (small and compact objects).
In terms of their kinematic morphology, Quenched galaxies are also found to display intermediate values of compared to the Ageing (≳ 0.3) and Retired populations (≲ 0.3).
In the sample, QGs present closer values of to the Ageing pop. (i.e. more rotationally supported), while the opposite is true for galaxies.
The average metallicity of the stellar population, , also appears to be connected with the distribution of galaxies along the AD.
For both and samples we find that the Ageing domain hosts more chemically primitive systems, while Retired galaxies present higher metallicities close to solar values <cit.>.
Systems classified as Quenched arrange above Ageing galaxies in the vs plane, corresponding to a (low-mass) chemically evolved population.
Finally, we find that the environment, characterized by , also yields a strong correlation with the AD classification scheme.
Although both observational samples target different environments, they show a consistent result: Quenched galaxies are primarily found in denser environments than the Ageing or Retired populations.
§.§ Physical trends and fractions
In this section we explore in detail the relation between stellar mass, interpreted as the fundamental tracer of the evolutionary state of a galaxy, with the rest of physical properties under study in this work: , 50, , , .
In addition, we further study the impact of environment by binning and samples into centrals and satellites (see <ref> for details).
Figures <ref> to <ref> illustrate the relative fraction of galaxies classified within each AD domain as function of each pair of properties.
For illustrative purposes, since the contribution of Undetermined galaxies is negligible (<1 percent), we will only consider the Ageing, Quenched, and Retired populations in this analysis.
Complementarily to the domain fraction, the trends with total stellar mass along each diagram are estimated using the running percentiles (16, 50, 84) illustrated by black dashed lines.
Finally, we include contours that comprise 50 and 90 percent of the sample at every panel (black solid lines), tracing the bulk of the distribution (that can also be used to asses the confidence limits of the running percentiles).
§.§.§ and
As mentioned in the previous section, the connection between stellar mass and roughly corresponds to the M_⋆-sSFR relation.
Figure <ref> shows a good agreement between and samples, although their mass distributions are slightly different.
Ageing galaxies distribute along a wide range of masses and values of , showing increasing values of (lower sSFR) as mass increases <cit.>.
The population is dominated by, but not limited to, low-mass galaxies reflecting the so-called downsizing effect.
By construction, the fraction of Retired galaxies below ∼1.6 is null, and most Quenched galaxies cluster around that value.
On the other hand, we also note that the vast majority of Quenched galaxies correspond to Satellite systems, albeit a non-negligible fraction of this population (37 and 14 per cent for and , respectively) have been classified as Central objects.
It is interesting to note that most Quenched satellite galaxies are typically below ∼ 10^10 , whereas centrals span the whole mass range of the sample, even beyond 10^11 .
§.§.§ Morphology
The spatial distribution of stars within galaxies, traced by their characteristic radius (i.e. 50) and light concentration (i.e. ) is another fundamental aspect connected to their assembly history.
We show in Figure <ref> the plane defined by the total stellar mass and effective radius, often referred to as the mass-size relation.
There is a sharp transition between the fraction of Ageing systems in favour of the Retired population (note the abrupt change in the colour maps), roughly corresponding to a stellar surface density threshold Σ_Q ≡ M_⋆/(2π R_50^2) ∼ 200-300 M_⊙ pc^-2.
While Retired galaxies arrange along a very tight relation, ∝ M_⋆^0.5, roughly coincident with such threshold, Ageing systems present a much larger scatter, illustrated by the running percentiles, consistent with a rich diversity of disk-like morphologies <cit.>.
As mentioned before, the majority of Quenched systems correspond to compact, low-mass, satellite galaxies.
Here we see that their distribution in the mass-size plane is located at an intermediate region between the Ageing and Retired populations; i.e. 9 < < 10 and 0 < < 0.5.
Central QGs, on the other hand, seem to be located close to the characteristic stellar mass surface density Σ_ Q over the whole mass range (see running-median values traced by the dashed lines).
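For reference, a small Python helper for this surface density (as reconstructed above, with M_⋆ in M_⊙ and R_50 in kpc; the example values are purely illustrative):

```python
import numpy as np

def stellar_surface_density(m_star_msun, r50_kpc):
    """Sigma = M_star / (2 pi R50^2) in Msun pc^-2 (R50 converted from kpc to pc)."""
    return m_star_msun / (2.0 * np.pi * (r50_kpc * 1.0e3) ** 2)

print(stellar_surface_density(1.0e10, 2.0))  # ~ 400 Msun pc^-2, above the quoted threshold
```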
When exploring the concentration of light , plotted in Figure <ref>, we find an almost flat trend with for Ageing galaxies, with the median roughly located at ≃ 2.5 and ≃ 2.2 for and , respectively.
On the other hand, the concentration of Retired systems shows a mild positive correlation with stellar mass, with the bulk of the population presenting values ≳ 3.
In between, Quenched galaxies scatter roughly around ∼ 3 and seem to be well described by the Retired mass-concentration relation both for central and satellite galaxies, i.e. a mild positive trend between and .
Note that the value of 90 is not available for cluster galaxies, and therefore the number of galaxies in this plot (especially on the SAMI panels) is significantly lower than in the other figures.
Regarding the ratio between ordered to random motions within galaxies, traced by and shown in Figure <ref>,
Ageing systems seem to display an overall correlation, most evident in , in the sense that more massive galaxies tend to be less rotationally supported.
However, there is significant scatter along this trend, and one can find AGs with very different values of .
In contrast, Retired galaxies are almost invariably dominated by random motions, with < 0.5.
Quenched galaxies feature a more complex behaviour.
While we do not find any significant difference between the mass-size and mass-concentration relations of central and satellite QGs (aside from the total number of galaxies within each class), here we see that central Quenched systems tend to be more pressure-supported than Ageing galaxies (see median values of each galaxy domain), displaying values of compatible with the distribution of the Retired population.
On the other hand, the dynamical state of satellite QGs appears to be closer to AGs, as they present slightly larger values of , hinting that the physical mechanism responsible for quenching may be different for central and satellite objects.
§.§.§ Chemical composition
The relation between total stellar mass and mass-weighted metallicity of the stellar population is shown in Figure <ref>.
For Ageing galaxies, both samples follow the well-known trend that more massive systems are more chemically evolved, albeit seems to feature a steeper slope than , possibly due to differences in the analysis approach as well as sample selection.
RGs, on the other hand, concentrate at high metallicities ≳ -0.3, with a much shallower slope.
From our data, it is difficult to conclude whether Quenched galaxies represent an intermediate population between AG and RG, or they follow the same tight - relation as the latter.
What becomes clear is that, for any given stellar mass, QGs tend to be more metal-rich than their Ageing counterparts, as illustrated by their respective running median values (black dashed lines), in agreement with previous results <cit.>.
Once again, we do not find any significant difference between centrals and satellites regarding these chemical evolution trends.
§.§.§ Environment
In Figure <ref>, we show the relation between and the environment density, traced by the surface density estimated from the distance to the fifth nearest neighbour, .
As expected, the majority of satellite galaxies are found in denser environments, while the opposite is true for centrals.
When comparing the distributions of each AD population, both and samples roughly cover the same region of the parameter space, as shown by the contour that contains the 90 percent of the distribution (thin solid line).
However, the bulk of the distribution for is located at denser environments at all panels, as roughly one third of the sample lives in clusters.
As illustrated by the colour maps, representing the fraction of galaxies in each class, Ageing galaxies dominate the low- mass (< 10^11 ) and density (< 1 Mpc^-2) region of the diagram.
Retired galaxies dominate at dense environments and high stellar masses.
Their fraction, plotted on the bottom row of each panel, is consistent with the results provided in Fig. 6 of <cit.>.
Here we provide a more detailed characterization by including the population of recently quenched galaxies.
As can be seen in the middle row of and panels, we find that Quenched systems dominate the low-mass and high-density region of the diagram.
In addition to this population, mainly composed of satellite galaxies, the large statistics in the survey makes it possible to uncover a non-negligible number of central QGs, spanning the whole mass range from ∼ 9 to 11, that reside in the field (< 1 Mpc^-2).
§ DISCUSSION
§.§ Quenched galaxies: nature or nurture?
While the correlations between the physical properties of QGs reported in this study, compared to those of Ageing and Retired systems, offer some insight on the mechanism(s) that are ultimately responsible for quenching star formation in a formerly Ageing galaxy, inferring the actual causal relations behind them is far from straightforward.
Nevertheless, we would like to claim that the results shown in Section <ref> suggest that quenching is a ubiquitous but relatively rare process in the local Universe, although the precise number of galaxies that transit through this evolutionary path remains uncertain.
In particular, we do find QGs both at high and low stellar masses, as well as galaxy density environments.
We interpret this fact as evidence for at least two different, independent quenching mechanisms, broadly consistent with the classification proposed by <cit.>.
On the one hand, we found that the majority of Quenched galaxies are low-mass systems in dense environments.
In order to bring AGs to the Quenched domain, these processes must act on relatively short timescales (≲ 1 Gyr).
Previous works, such as <cit.> suggested that satellite systems undergo a “delayed-then-rapid” quenching phase consisting of an early-infall phase (2-4 Gyr) where star formation proceeds roughly unaffected, followed by a fast truncation of the SF.
The results presented in this work are compatible with this scenario as far as the satellite population is concerned.
This suggests that environmental effects such as ram pressure stripping might be the main cause of the cessation of star formation in the local Universe.
Other environmental processes, such as strangulation, would take several Gyr, and the affected galaxies would evolve from the Ageing to the Retired domain in the AD.
As discussed in , QGs have spent, on average, more time in a “starburst” evolutionary stage (birth rate parameter b ≡ t sSFR(t) > 2) than AGs of the same colour.
Here we see that QGs are also more compact, concentrated, and metal-rich than AGs of the same mass.
All these results are consistent with a post-starburst scenario, which provides further support to the ram-pressure hypothesis.
On the other hand, we also find a population of Quenched central galaxies that tend to be found in the field (37 and 14 per cent of and QGs, respectively).
Environmental processes are thus very unlikely to explain the sudden decline of the SFR observed in these objects.
These galaxies span the whole mass range of our sample, and to some extent they are consistent with the simple “mass quenching” scenario envisaged by <cit.>, where the quenching probability per unit time is proportional to the star formation rate.
However, the mass-size relation of QGs is not trivial to justify under this hypothesis.
An alternative explanation, along the lines of “morphological quenching” <cit.>, would be that star formation ceases upon a critical mass surface density Σ_ Q is reached.
This mechanism would equally apply to centrals and satellites, whose mass-size relation would simply trace this critical value, as we observe in Figure <ref>.
One interesting question under this interpretation would be the role of environment.
More precisely, whether galaxies in groups tend to be more compact due to externally-induced processes (nurture), or they simply evolved on shorter timescales over cosmic history because they arise from a higher overdensity peak in the primordial Universe (nature).
§.§ Retired galaxies: old or dead?
In principle, there are two different evolutionary paths that would take a galaxy to the Retired domain: on the one hand, star formation could slowly decrease, without any significant event throughout its life (Ageing), or the SFR could suddenly drop due to a well-defined, discrete event (Quenching).
The AD, by itself, is not sufficiently sensitive to discriminate between both scenarios, as stellar populations older than ≳ 1-3 Gyr display nearly identical optical features.
One way to tackle this issue would be to investigate the AD at different redshifts and trace the evolution of galaxies back in time, taking advantage of current and forthcoming optical spectroscopic surveys <cit.>.
A related question, still poorly understood, is whether ageing and/or quenching processes are able to fully halt the conversion of gas into stars or, in contrast, Retired galaxies eventually reach some asymptotic regime of residual star formation <cit.>.
As show in , many state-of-the-art cosmological simulations predict that the majority of systems off the MS have sSFR values below the resolution limit (compatible with sSFR=0), while observational estimates (arguably upper limits) suggest that galaxies may retain some star formation activity.
In order to address this question, alternative tracers of star formation must be sought.
§ SUMMARY AND CONCLUSIONS
Following the classification scheme presented in <cit.>, in this work we study the physical properties of the galaxy populations found in different domains of the Ageing Diagram (AD), using and surveys.
The AD is based on the combination of two star formation proxies, sensitive to different timescales, such as the raw , tracing the on short timescales (∼ Myr), and the , that provides an estimate of the on ∼ Gyr scales.
This method makes it possible to detect abrupt changes in the recent star formation history of galaxies and to investigate quenching episodes during the last ∼ 3 Gyr.
We use the empirical lines defined in to classify galaxies as Ageing (AGs), Undetermined (UGs), Quenched (QGs) or Retired (RGs).
For each population, we explore the trends with total stellar mass, 50 per cent r-band light radius , concentration , ratio between ordered to random motions , stellar chemical composition , and environment, traced by the projected number density and group rank (satellite vs central).
The characteristic properties of AGs, RGs and QGs are summarized below:
Ageing galaxies, whose evolution is mainly driven by secular processes, extend over the whole range of stellar mass (10^8.5⩽/⩽ 10^10) and .
Most AGs are classified as centrals living in the field, but we find a relatively large number of satellites as well.
The probability distributions in terms of stellar mass and size, concentration or , respectively, are consistent with having predominantly late-type morphologies.
They display the well-known correlations between stellar mass, star formation and metallicity characteristic of MS galaxies.
Retired objects are chemically evolved, featuring high values of and , with a very mild dependence, if any, on stellar mass.
They arrange along a tight sequence in the mass-size plane, roughly given by ∝^0.5, at the compact end of the AGs; both populations are indeed well separated by a stellar surface density threshold Σ_ Q∼ 200-300 pc^-2.
RGs are consistent with being composed by early-type systems, and they are found in all kinds of environments.
The Quenched galaxy population is mainly composed of compact low-mass satellite galaxies, in good agreement with the results found in <cit.>, but we also detect a non-negligible fraction of QGs extending from low- to high-mass central systems.
As shown in , this population accounts for ∼10 per cent of the total number of nearby galaxies at z≲0.1, and the characteristic timescale involved in the quenching of the SFR has a median value of ∼ 500 Myr <cit.>.
In the -plane, they distribute between Ageing and Retired objects, relatively close to Σ_ Q.
QGs feature a wide range of values of , although we see that low-mass galaxies might be more rotationally-supported, while massive QGs are dominated by random motions.
For any stellar mass, they systematically present higher metallicities than AGs.
Using different estimates of the star formation activity is of paramount importance in order to identify Quenched galaxies.
Any selection based on the -plane alone will always include a fraction of AGs and/or RGs, depending on the specific timescale of the adopted tracer.
According to our AD classification scheme, we conclude that quenching in the local Universe is a ubiquitous process, compatible with the idea of more than one channel acting in different regimes, but more frequent in dense environments.
Given the physical trends found in this work, and the results from <cit.>, we conclude that our satellite QG population is compatible with a post-starburst scenario driven by environmental processes such as ram pressure stripping, where a burst of star formation followed by a sudden drop of SFR led to a compact and metal-rich phase in most QGs.
On the other hand, central QGs are compatible with the “mass quenching” scenario <cit.>, although the mass-size relation of central QGs points toward “morphological quenching” <cit.>.
In order to determine the rate of quenching along cosmic time and the build-up of the Retired population, we suggest that a detailed analysis of the AD populations using larger, deeper, and more complete spectroscopic samples must be carried out.
§ ACKNOWLEDGEMENTS
PC and YA thank financial support provided by grant PID2019-107408GB-C42 of the Spanish State Research Agency (AEI/10.13039/501100011033).
A significant fraction of this work has been carried out during a long-term research visit of YA to ICRAR-UWA funded under the mobility scheme Ayudas para la Recualificación del Profesorado Universitario of the UAM.
SFS thanks the support by the PAPIIT-DGAPA IG100622 project, based on data obtained from the ESO Science Archive Facility.
LC acknowledges support from the Australian Research Council Discovery Project and Future Fellowship funding schemes (DP210100337, FT180100066).
This research made extensive use of the python packages NumPy,[https://numpy.org/] Matplotlib,[https://matplotlib.org/] <cit.> and Astropy,[http://www.astropy.org] <cit.>.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions.
SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard & Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
The SAMI Galaxy Survey is based on observations made at the Anglo-Australian Telescope. The Sydney-AAO Multi-object Integral field spectrograph (SAMI) was developed jointly by the University of Sydney and the Australian Astronomical Observatory. The SAMI input catalogue is based on data taken from the Sloan Digital Sky Survey, the GAMA Survey and the VST ATLAS Survey. The SAMI Galaxy Survey website is http://sami-survey.org/. The SAMI Galaxy Survey is supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013, the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020, and other participating institutions.
§ DATA AVAILABILITY
The data underlying this article are publicly available at https://www.sdss.org/dr17/manga/ for MaNGA and http://sami-survey.org for the
SAMI surveys, respectively.
Additional data generated by the analyses in this
work are available upon request to the corresponding author.
mnras
§ MANGA AND SAMI STATISTICS
Table <ref> includes the median and dispersion values (estimated with the 15 and 84 percentiles) derived from each property for and samples.
|
http://arxiv.org/abs/2307.00754v1
|
20230703045740
|
ImDiffusion: Imputed Diffusion Models for Multivariate Time Series Anomaly Detection
|
[
"Yuhang Chen",
"Chaoyun Zhang",
"Minghua Ma",
"Yudong Liu",
"Ruomeng Ding",
"Bowen Li",
"Shilin He",
"Saravan Rajmohan",
"Qingwei Lin",
"Dongmei Zhang"
] |
cs.LG
|
[
"cs.LG",
"cs.AI"
] |
Microsoft^,
Microsoft 365^,
Peking University^,
Tsinghua University^
Anomaly detection in multivariate time series data is of paramount importance for ensuring the efficient operation of large-scale systems across diverse domains. However, accurately detecting anomalies in such data poses significant challenges due to the need for precise modeling of complex multivariate time series data. Existing approaches, including forecasting and reconstruction-based methods, struggle to address these challenges effectively. To overcome these limitations, we propose a novel anomaly detection framework named ImDiffusion, which combines time series imputation and diffusion models to achieve accurate and robust anomaly detection. The imputation-based approach employed by ImDiffusion leverages the information from neighboring values in the time series, enabling precise modeling of temporal and inter-correlated dependencies and reducing uncertainty in the data, thereby enhancing the robustness of the anomaly detection process. ImDiffusion further leverages diffusion models as time series imputers to accurately capture complex dependencies. We leverage the step-by-step denoised outputs generated during the inference process as valuable signals for anomaly prediction, resulting in improved accuracy and robustness of the detection process.
We evaluate the performance of ImDiffusion via extensive experiments on benchmark datasets. The results demonstrate that our proposed framework significantly outperforms state-of-the-art approaches in terms of detection accuracy and timeliness. ImDiffusion is further integrated into the real production system at Microsoft, where we observe a remarkable 11.4% increase in detection F1 score compared to the legacy approach. To the best of our knowledge, ImDiffusion represents a pioneering approach that combines imputation-based techniques with time series anomaly detection, while introducing the novel use of diffusion models to the field.
ImDiffusion: Imputed Diffusion Models for Multivariate
Time Series Anomaly Detection
Yuhang Chen^, Chaoyun Zhang^, Minghua Ma^, Yudong Liu^, Ruomeng Ding^, Bowen Li^,
Shilin He^, Saravan Rajmohan^, Qingwei Lin^ and Dongmei Zhang^
August 1, 2023
=====================================================================================================================================================
§ INTRODUCTION
The efficient operation of large-scale systems or entities heavily relies on the generation and analysis of extensive and high-dimensional time series data. These data serves as a vital source of information for continuous monitoring and ensuring the optimal functioning of these systems. However, within these systems, various abnormal events may occur, resulting in deviations from the expected downstream performance of numerous applications <cit.>. These anomalous events can encompass a broad spectrum of issues, including production faults <cit.>, delivery bottlenecks <cit.>, system defects <cit.>, or irregular heart rhythms <cit.>. When different time series dimensions are combined, they form a multivariate time series (MTS). The detection of anomalies in MTS data has emerged as a critical task across diverse domains. Industries spanning manufacturing, finance, and healthcare monitoring, have recognized the importance of anomaly detection in maintaining operational efficiency and minimizing disruptions <cit.>, and the field of MTS anomaly detection has garnered significant attention from both academia and industry <cit.>.
However, achieving accurate anomaly detection on MTS data is not straightforward, as it necessitates precise modeling of time series data <cit.>. The complexity of modern large-scale systems introduces additional challenges, as their performance is monitored by multiple sensors, generating heterogeneous time series data that encompasses multidimensional, intricate, and interrelated temporal information <cit.>. Modeling complex correlations like these requires a high level of capability from the model.
Furthermore, time series data often displays significant variability <cit.>, leading to increased levels of uncertainty. This variability can sometimes result in erroneous identification of anomalies. This adds complexity to the anomaly detection process, as the detector must effectively differentiate between stochastic anomalies and other variations in order to achieve robust detection performance <cit.>.
The aforementioned challenges have spurred the emergence of numerous self-supervised learning solutions aimed at automating anomaly detection. Recent methods can be classified into various categories <cit.>, where forecasting <cit.> and reconstruction-based <cit.> approaches have been most widely employed. The former leverages past information to predict future values in the time series and utilizes the prediction error as an indicator for anomaly detection. However, future time series values can exhibit high levels of uncertainty and variability, making them inherently challenging to accurately predict in complex real-world systems. Relying solely on forecasting-based methods may have a detrimental impact on anomaly detection performance <cit.>. On the other hand, reconstruction-based methods encode entire time sequences into an embedding space. Anomaly labels are then inferred based on the reconstruction error between the original data and the reconstructed version. Since these approaches operate and need to reconstruct the entire time series, their performance heavily relies on the capabilities of the reconstruction model <cit.>. In cases where the original data exhibit heterogeneity, complexity, and inter-dependencies, reconstruction-based methods may encounter challenges in achieving low overall reconstruction error and variance <cit.>. As a result, the anomaly detection performance of such approaches may be sub-optimal. Given these considerations, there is a clear need to rethink and enhance forecasting and reconstruction approaches to achieve accurate and robust anomaly detection.
To address these challenges and overcome the limitations of existing approaches, we propose a novel anomaly detector named . This detector combines the use of time series imputation <cit.> and diffusion models <cit.> to achieve accurate and robust anomaly detection. employs dedicated grating data masking into the time series data, creating unobserved data points. It then utilizes diffusion models to accurately model the time series and impute the missing values caused by the data masking. The imputation error is subsequently used as an indicator to determine the presence of anomalies. The imputation-based approach employed by offers distinct advantages over forecasting and reconstruction methods. Firstly, it leverages neighboring values in the time series as additional conditional information, enabling a more accurate modeling of the temporal and inter-correlated dependencies present in multivariate data. Secondly, the reference information from neighboring values helps to reduce uncertainty in predictions, thereby enhancing the robustness of the detection process. Figure <ref> presents an example in which forecasting, reconstruction, and imputation methods are employed to predict a time series using diffusion models. Observe that the imputation-based method exhibits the highest prediction performance especially on normal period, thus only the imputation method successfully identifies the period of anomaly. We therefore employ time series imputation for accurate self-supervised modeling of time series, which forms the foundation of our proposed .
To enhance the performance of anomaly detection, leverages the exceptional unsupervised modeling capability of diffusion models <cit.> for imputation. Diffusion models have demonstrated superior performance in unsupervised image generation, surpassing traditional generative models such as GANs <cit.> and VAEs <cit.>. They have also been successfully applied to model complex temporal and inter-metric dependencies in MTS, showcasing remarkable abilities in forecasting <cit.> and imputation <cit.>. We employ a dedicated diffusion model as the time series imputer, replacing traditional forecasting and reconstruction models. This brings several advantages to the anomaly detection, namely (i) it enables better modeling of complex correlations within MTS data; (ii) it allows for stochastic modeling of time series through the noise/denoising processes involved in the imputation; (iii) the step-by-step outputs generated during the imputation inference serve as additional signals for determining the anomaly labels in an ensemble manner. These unique advantages of diffusion models enable precise capturing of the complex dependencies and inherent stochasticity present in time series data, and further enhance the robustness of anomaly detection through ensembling techniques.
By integrating imputation and diffusion models, our proposed achieves exceptional accuracy and timeliness in anomaly detection for both offline and online evaluation in real production. Overall, this paper presents the following contributions:
* We introduce , a novel framework based on the imputed diffusion model, which accurately captures the inherent dependency and stochasticity of MTS data, leading to precise and robust anomaly detection.
* We develop a grating masking strategy to create missing values in the data for imputation. This strategy enhances the decision boundary between normal and abnormal data, resulting in improved anomaly detection performance.
* leverages the step-by-step denoised outputs of the diffusion model's unique inference process as additional signals for anomaly prediction in an ensemble voting manner. This approach further enhances inference accuracy and robustness.
* We conduct extensive experiments comparing with 10 state-of-the-art anomaly detection baselines on 6 datasets. Results show that significantly outperforms other approaches in terms of both detection accuracy and timeliness.
* We integrate in the real production of the Microsoft email delivery microservice system. The framework exhibits a 11.4% higher detection accuracy compared to the legacy online approach, which significantly improves the system's reliability.
To the best of our knowledge, our proposed is the pioneering approach that combines imputation-based techniques with time series anomaly detection. Additionally, it is the first instance in which diffusion models have been applied to the field of time series anomaly detection.
§ RELATED WORK
We provides an overview of relevant research in time series anomaly detection <cit.> and diffusion models <cit.> in this section.
§.§ Time Series Anomaly Detection
Time series anomaly detection is an important problem that has received significant attention from both the industrial and research communities <cit.>. Approaches for this area can be categorized into five main classes based on the underlying detection method <cit.>. These categories include: (i) forecasting methods (<cit.>), which utilize predictions to identify anomalies; (ii) reconstruction methods (<cit.>), which reconstruct the time series and identify anomalies based on the reconstruction error; (iii) encoding methods (<cit.>), which encode the time series into a different representation and detect anomalies using this encoding; (iv) distance methods (<cit.>), which measure the dissimilarity between time series and identify anomalies based on the distance; (v) distribution methods (<cit.>), which model the distribution of the time series data and detect anomalies based on deviations from the expected distribution; and (vi) isolation tree methods (<cit.>), which use tree-based structures to isolate anomalies from normal data.
Among the various approaches explored in the literature, forecasting and reconstruction methods have gained significant popularity due to their reported effectiveness. For instance, Omnianomaly <cit.> employs a combination of GRU and VAE to learn robust representations of time series. It also utilizes the Peaks-Over-Threshold (POT) method to dynamically select appropriate thresholds for anomaly detection. MTAD-GAT <cit.> incorporates a graph-attention network to capture both feature and temporal correlations within time series data. By combining forecasting and reconstruction models, it achieves improved anomaly detection performance. MAD-GAN <cit.> takes advantage of the discriminator's loss in a GAN as an additional indicator for detecting anomalies. More recently, TranAD <cit.> introduces attention mechanisms in transformer models and incorporates adversarial training to jointly enhance the accuracy of anomaly detection.
The introduced in this paper distinguishes from previous approaches by employing time series imputation to improve time series modeling and enhance anomaly detection performance.
§.§ Diffusion Model
Recently, diffusion models <cit.> have garnered increasing attention in the field of AI generated content <cit.>. While their potential in the domain of time series modeling and anomaly detection is relatively new, researchers have begun to explore their application in these areas. For instance, CSDI <cit.> utilizes a probabilistic diffusion model for time series imputation, outperforming deterministic baselines. TimeGrad <cit.> applies diffusion models in an autoregressive manner to generate future time sequences for forecasting. This approach achieves good performance in extrapolating into the future while maintaining computational tractability. Additionally, diffusion models have been employed in time series generation. In <cit.>, diffusion models are used as score-based generative models to synthesize time-series data, resulting in superior generation quality and diversity compared to baseline approaches.
Diffusion models have also been explored for image anomaly detection. In <cit.>, denoising diffusion implicit models <cit.> are combined with classifier guidance to identify anomalous regions in medical images. This produces highly detailed anomaly maps without the need for a complex training procedure. Similarly, in <cit.>, diffusion models are used to eliminate bias and mitigate accumulated prediction errors, thereby enhancing anomaly segmentation in CT data. The DiffusionAD <cit.> formulates anomaly detection as a “noise-to-norm” paradigm, requiring only one diffusion reverse process step to achieve satisfactory performance in image anomaly detection. This significantly improves the inference efficiency.
To the best of our knowledge, our proposed represents the first utilization of diffusion models specifically tailored for the task of MTS anomaly detection.
§ PRELIMINARY
In this section, we present the problem of MTS anomaly detection and time series imputation, and an overview of diffusion models.
§.§ Multivariate Time Series Anomaly Detection
We consider a collection of MTS denoted as 𝒳, which encompasses measurements recorded from timestamp 1 to L. Specifically:
𝒳 = {𝐱_1, 𝐱_2, ⋯𝐱_L},
where 𝐱_l ∈ℝ^K represents a K-dimensional vector at time l, 𝐱_l = { x_l^1, x_l^2, ⋯, x_l^K}. The objective of MTS anomaly detection is to determine whether an observation 𝐱_l is anomalous or not. By employing y_l ∈{0, 1} to indicate the presence of an anomaly (with 0 denoting no anomaly and 1 denoting an anomaly), the goal transforms into predicting a sequence of anomaly labels for each timestamp, namely Y = {y_1, y_2, ⋯, y_L}. ImDiffusion first estimates an anomaly score τ_l for each observation 𝐱_l, and subsequently the anomaly prediction y_l is obtained through a thresholding mechanism, which will be discussed in detail in the subsequent sections.
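As a minimal illustration of this last step, the Python sketch below turns per-timestamp anomaly scores τ_l into binary labels y_l; the threshold value is an arbitrary choice made purely for the example, not one taken from the paper.

import numpy as np

def scores_to_labels(scores: np.ndarray, threshold: float) -> np.ndarray:
    """Turn per-timestamp anomaly scores tau_l into binary labels y_l."""
    return (scores >= threshold).astype(int)

# Example: per-timestamp scores (e.g. imputation errors), then thresholding.
tau = np.array([0.01, 0.02, 0.35, 0.03])
print(scores_to_labels(tau, threshold=0.1))  # -> [0 0 1 0]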
§.§ Time Series Imputation
ImDiffusion leverages the prediction error resulting from the imputation <cit.> of intentionally masked values within a time series to infer the anomaly labels. The mask is denoted as ℳ = { m_l,k | l ∈ 1:L, k ∈ 1:K} with m_l,k ∈{0, 1}, where m_l,k = 1 indicates that x_l^k is observed, while 0 signifies that it is missing. The mask ℳ possesses the same dimensionality as the time series 𝒳, specifically ℳ∈{0,1}^L× K. The application of the mask ℳ to the original time series 𝒳 yields a new partially observed masked time series denoted as 𝒳^ℳ, which can be expressed as:
𝒳^ℳ = 𝒳⊙ℳ.
Here, the symbol ⊙ represents the Hadamard product. Let 𝒳^ℳ_0 represent the masked value where m_l,k = 0, and 𝒳^ℳ_1 represent the observed values where m_l,k = 1, the objective of the imputation process is to estimate the missing values in 𝒳^ℳ, p(𝒳^ℳ_0|𝒳^ℳ_1). Several tasks, such as interpolation <cit.> and forecasting <cit.>, can be considered as instances of time series imputation.
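A minimal sketch of this masking operation is given below, assuming a toy NumPy series and a randomly drawn binary mask; the sizes are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
L, K = 100, 8                      # sequence length and number of variables
X = rng.normal(size=(L, K))        # a toy multivariate time series

# Binary mask M: 1 = observed, 0 = missing (to be imputed).
M = rng.integers(0, 2, size=(L, K))

# Masked series X^M = X (Hadamard product) M; zeros mark the missing entries.
X_masked = X * M

# The imputation target is p(X^M_0 | X^M_1): entries where M == 0,
# conditioned on the entries where M == 1.
missing_idx = np.argwhere(M == 0)
observed_idx = np.argwhere(M == 1)
print(len(missing_idx), "values to impute,", len(observed_idx), "observed")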
§.§ Denoising Diffusion Model
ImDiffusion is built on diffusion models <cit.>, a well-known family of generative models inspired by non-equilibrium thermodynamics. Diffusion models follow a two-step process for data generation: first, they incrementally add noise to the input (the forward process); second, they learn to generate new samples by progressively removing noise from a sample noise vector (the reverse process).
During the forward process, Gaussian noise is incrementally added to the initial input sample 𝒳_0 over T steps. Mathematically, this can be represented as q(𝒳_1:T|𝒳_0) := ∏_t=1^T q(𝒳_t|𝒳_t-1),
where
q(𝒳_t|𝒳_t-1) := 𝒩(𝒳_t;√(1-β_t)𝒳_t-1, β_t𝐈).
Here, β_t is a positive constant that can either be learned or predefined, representing the noise level at step t. It is important to note that the forward process is parameterized as a Markov chain, as the value of 𝒳_t depends only on 𝒳_t-1. The final step, 𝒳_T, is fully corrupted and becomes random noise. Its distribution can be expressed in closed form as q(𝒳_T|𝒳_0) = 𝒩(𝒳_T; √(α̅_T)𝒳_0, (1-α̅_T)𝐈), where α_t := 1 - β_t and α̅_t := ∏_i=1^t α_i. Consequently, 𝒳_T can be represented as 𝒳_T = √(α̅_T)𝒳_0 + √(1-α̅_T)ϵ, where ϵ∼𝒩(0,𝐈).
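The closed-form forward corruption can be sketched in a few lines; the linear β_t schedule and the number of steps below are arbitrary choices made only for illustration.

import torch

T = 50
betas = torch.linspace(1e-4, 0.2, T)          # illustrative noise levels beta_t
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)      # cumulative product of alpha_i

def q_sample(x0: torch.Tensor, t: int, eps: torch.Tensor) -> torch.Tensor:
    """Closed-form forward process: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps."""
    a = alpha_bar[t]
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps

x0 = torch.randn(100, 8)                      # toy series (L=100, K=8)
eps = torch.randn_like(x0)
x_noisy = q_sample(x0, t=T - 1, eps=eps)      # at t = T the sample is essentially noise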
Conversely, we employ a machine learning model with learnable parameters Θ to denoise 𝒳_T and reconstruct 𝒳_0. This is accomplished by iteratively computing the following Gaussian transitions:
p_Θ (𝒳_t-1|𝒳_t) := 𝒩(𝒳_t-1; μ_Θ(𝒳_t, t), Σ_Θ(𝒳_t, t)𝐈).
As a result, the joint distribution can be expressed as p_Θ(𝒳_0:T) = p(𝒳_T)∏_t=1^T p_Θ (𝒳_t-1|𝒳_t).
The Denoising Diffusion Probabilistic Model (DDPM) <cit.> simplifies the reverse process by adopting a fixed variance, resulting in the following expressions:
μ_Θ(𝒳_t, t) := 1/√(α_t) (𝒳_t - β_t/√(1-α̅_t)ϵ_Θ(𝒳_t, t) ), Σ_Θ(𝒳_t, t) = √(β̃_t),
where β̃_t = (1-α̅_t-1)/(1-α̅_t) β_t for t>1 and β̃_1 = β_1, and ϵ_Θ represents a trainable denoising function. By employing Jensen's inequality and the simplified parameterization from DDPM, the reverse process can be solved by training a model to optimize the following objective function:
min_Θℒ(Θ) := min_Θ𝔼_𝒳_0∼ q(𝒳_0), ϵ∼𝒩(0, 𝐈),t||ϵ -ϵ _Θ(𝒳_t,t)||^2,
where 𝒳_t = √(α̅_t)𝒳_0 + √(1-α̅_t)ϵ. We refer to <cit.> for the proof details. The denoising function ϵ_Θ is responsible for estimating the noise added to the corrupted input 𝒳_t. Once trained, given an arbitrary noise vector 𝒳_T, we can generate a new sample by progressively denoising from 𝒳_T and obtaining a final complete sample 𝒳_0.
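A sketch of a single training step under this simplified objective is given below; `denoiser` stands in for any noise-prediction network ϵ_Θ and is purely a placeholder, not the architecture used in the paper.

import torch

def ddpm_loss(denoiser, x0, alpha_bar):
    """One training step of the simplified DDPM objective:
    E || eps - eps_theta(x_t, t) ||^2 with x_t = sqrt(abar_t) x0 + sqrt(1-abar_t) eps."""
    T = alpha_bar.shape[0]
    t = torch.randint(0, T, (1,)).item()       # sample a random diffusion step
    eps = torch.randn_like(x0)                 # ground-truth noise
    a = alpha_bar[t]
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps
    eps_hat = denoiser(x_t, t)                 # model's noise estimate
    return torch.mean((eps - eps_hat) ** 2)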
§ THE DESIGN OF
ImDiffusion relies on time series imputation and utilizes the imputed error as a signal for anomaly detection. The imputation process is carried out in a self-supervised learning manner, where we intentionally introduce masks to the MTS, creating missing values that need to be imputed. We then train a diffusion model, with a denoising network designed specifically for imputation and the subsequent anomaly detection task. During the inference phase, we leverage the intermediate outputs of the diffusion model at different denoising steps t as additional information to collectively determine the anomaly label. This ensemble approach further enhances the accuracy and robustness of ImDiffusion.
§.§ Imputed Diffusion Models
Time series anomaly detection often relies on the construction of prediction models that accurately capture the distribution of normal data. These models are expected to exhibit higher prediction errors when anomalies occur, thereby serving as indicators and providing a decision boundary for detecting anomalies. Two commonly used types of prediction models for anomaly detection are (i) reconstruction models, which encode the entire time series into a representation that can be reconstructed using a decoder; (ii) forecasting models, which aim to predict future values of the time series based on historical observations <cit.>.
However, both types of prediction models have their limitations in terms of their capacity for time series modeling. The reconstruction models operate on the entire MTS, which leads to high uncertainty in their predictions. Similarly, forecasting models face challenges in accurately predicting future values, especially in the presence of anomalies, further contributing to uncertainty in their predictions.
To overcome the limitations of traditional reconstruction <cit.> and forecasting models <cit.>, we propose the use of time series imputation as the underlying prediction model for anomaly detection in . We further enhance the imputation capacity of the model by incorporating state-of-the-art diffusion models. This approach offers several advantages. Firstly, it enables enhanced estimation of the data distribution by leveraging the availability of unmasked data values. This leads to improved understanding of the underlying data distribution. Secondly, the imputation-based prediction process stabilizes the inference of the diffusion model, resulting in reduced variance in its predictions. This increased stability enhances the reliability of the model's predictions. Lastly, incorporating imputation-based prediction improves the overall accuracy and robustness of subsequent anomaly detection. By combining diffusion models for time series imputation, we achieve accurate modeling of time series data, resulting in superior performance in anomaly detection tasks.
We begin by introducing the use of score-based diffusion models for MTS data imputation <cit.>. There are two main categories of diffusion models employed for time series imputation, distinguished by the type of input information they utilized as follows:
* Conditioned Diffusion Models: These models estimate the masked values conditioned on the observed data, specifically p(𝒳^ℳ_0|𝒳^ℳ_1). In this case, the observed values 𝒳^ℳ_1 are not corrupted by noise and are directly provided as input in the reverse process.
* Unconditional Diffusion Models: For unconditional imputed diffusion models introduced in <cit.>, both masked and unmasked values are corrupted by noise in the forward process. Instead of directly providing the observed data, it retains the Gaussian noise added during the forward process. They use the ground-truth noise added to the unmasked values as reference inputs. This leads to the estimation of p(𝒳^ℳ_0|ϵ^ℳ_1_1:T), where ϵ^ℳ_1_1:T represents the noise sequence added to the unmasked values 𝒳^ℳ_1 during the forward process.
Conditional diffusion models generally outperform unconditional diffusion models in the task of imputation, resulting in lower overall prediction errors <cit.>. This is because conditional models benefit from the direct inclusion of ground-truth unmasked data as input, which serves as reliable references for neighboring values. However, it is important to recognize the distinction between the objectives of imputation and anomaly detection. While imputation aims to minimize the error between predictions and ground truth for all data points, anomaly detection requires a clear boundary between normal and abnormal points, achieved by minimizing imputation errors only for normal data and maximizing errors for anomaly points. During the inference phase, when anomaly points happen to be unmasked and used as inputs for the prediction model, the prediction error for neighboring anomaly points is also reduced. Consequently, the prediction error becomes indistinguishable between normal and abnormal points, compromising the effectiveness of subsequent anomaly detection. The existence of unmasked anomaly points during inference blurs the clear boundary in the prediction error that is vital for accurate anomaly detection.
To address this issue, we employ unconditional imputed diffusion models, which utilize the forward noise ϵ^ℳ_1_1:T as a reference for unmasked data input, rather than directly feeding the data values. By using the forward noise, we avoid explicitly revealing the exact values, even when anomaly points are unmasked. However, the forward noise still provides indirect information about the unmasked data, serving as a weak hint for the model. The unmasked data can be perfectly recovered in the reverse process by subtracting the noise from the observed values step-by-step. The lower subplot in Figure <ref> demonstrates the application of an unconditional diffusion model. A notable distinction from the conditional model (upper subplot) is the substantial difference in imputed error between normal and abnormal data points. This significant gap in imputed error values provides a distinct boundary for the thresholding approach, which improves the anomaly detection performance.
We denote the noise added to the unmasked input from step t-1 to t as ϵ^ℳ_1_t. Note that ϵ^ℳ_1_t is drawn from the same Gaussian distribution as in Eq. (<ref>) and serves as the reference for the unmasked data in the reverse inference process. Similar to Eq. (<ref>), the unconditional imputed diffusion models estimate the masked values in a reverse denoising fashion, but conditioned on ϵ^ℳ_1_t as an additional input.
Traditional diffusion models lack the capability to incorporate conditional information ϵ^ℳ_1_t during the denoising process. Consequently, an enhancement is required in order to extend the estimation in Eq. (<ref>) to accommodate conditional information. This can be achieved by modifying the estimation as follows:
μ_Θ (𝒳^ℳ_0_t, t |ϵ^ℳ_1_t ) = μ (𝒳^ℳ_0_t, t, ϵ_Θ (𝒳^ℳ_0_t, t |ϵ^ℳ_1_t ) ),
Σ_Θ (𝒳^ℳ_0_t, t |ϵ^ℳ_1_t ) = Σ (𝒳^ℳ_0_t, t ).
By utilizing the denoising function ϵ_Θ and the forward noise for unmasked value ϵ^ℳ_1_t, we can leverage the reverse denoising process of imputed diffusion models to infer the masked values 𝒳^ℳ_0_0. This is accomplished by sampling from the distribution of 𝒳^ℳ_0_t. In contrast to anomaly detection methods based on reconstruction and forecasting, the integration of additional information offers valuable signals that assist the diffusion model in generating more reliable predictions. This leads to a reduction in output randomness and variance, while maintaining the confidentiality of abnormal data values. Consequently, it enhances the performance and robustness of subsequent anomaly detection. In the subsequent sections, we will thoroughly explore the design of the masking strategy ℳ, as it plays a crucial role in determining the overall performance of the approach.
§.§ Design of Data Masking
The approach leverages deliberate masking, using a mask ℳ applied to the time series data, to create unobserved points that require imputation. The choice of the masking strategy plays a crucial role in determining the performance of anomaly detection. In this paper, we compare two masking strategies:
* Random strategy: This strategy randomly masks data values in the raw time series with a 50% probability <cit.>. It provides a straightforward and simple masking technique.
* Grating strategy: The grating strategy masks the data at equal intervals along the time dimension, as illustrated in Figure <ref>. The raw time series is divided into several windows, with masked and unmasked windows appearing in a staggered manner.
For the grating strategy depicted in Figure <ref>, two different mask policies indexed by p ∈{0, 1} are applied to the same time series, resulting in two imputation instances. These two masks are mutually complementary, ensuring that the values masked under policy p=0 are unmasked under policy p=1, and vice versa. This guarantees that all data points are imputed by ImDiffusion, enabling the generation of prediction error signals for anomaly detection. After performing imputation on each masked series individually, the imputation results are merged through simple concatenation. During training and inference, the masking index p is provided to the model, indicating the masking policy applied, in order to reduce ambiguity. This leads to an additional conditional term p in Eq. (<ref>) and (<ref>), while the estimation of Σ_Θ in Eq. (<ref>) remains unchanged,
p_Θ (𝒳^ℳ_0_t-1|𝒳^ℳ_0_t, ϵ^ℳ_1_t ) := 𝒩 (𝒳^ℳ_0_t-1; μ_Θ (𝒳^ℳ_0_t, t|ϵ^ℳ_1_t, p ), Σ_Θ (𝒳^ℳ_0_t, t|ϵ^ℳ_1_t, p )𝐈 ),
μ_Θ (𝒳^ℳ_0_t, t |ϵ^ℳ_1_t, p ) = μ (𝒳^ℳ_0_t, t, ϵ_Θ (𝒳^ℳ_0_t, t |ϵ^ℳ_1_t, p ) ).
The grating strategy introduces a unique characteristic to the imputation, as it can be considered a “partially” reconstruction task. This approach offers several advantages: Firstly, it provides additional information that aids in modeling the time series more effectively. By incorporating the partially reconstructed data, the model gains a better understanding of the underlying patterns and correlations. Secondly, the utilization of the grating strategy allows for a partial glimpse into the future values of the time series within the masked window, akin to forecasting techniques. This enables to improve timeliness in detecting anomalies, as it provides insights into the potential future trajectory of the time series data. The benefits of the grating strategy will be demonstrated in Section <ref>.
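A possible construction of the two complementary grating masks is sketched below; the window length is a hypothetical choice rather than a value taken from the paper.

import numpy as np

def grating_masks(L: int, K: int, window: int = 10):
    """Build the two complementary grating masks (p = 0 and p = 1).

    Windows of `window` timestamps are alternately masked (0) and observed (1)
    along the time axis; mask p=1 is the complement of mask p=0, so every
    point is imputed exactly once across the two passes.
    """
    block = (np.arange(L) // window) % 2                      # 0,0,...,1,1,... pattern
    mask_p0 = np.repeat((block == 1).astype(int)[:, None], K, axis=1)
    mask_p1 = 1 - mask_p0                                     # complementary mask
    return mask_p0, mask_p1

m0, m1 = grating_masks(L=100, K=8)
assert np.all(m0 + m1 == 1)                                   # complementary by construction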
§.§ Training Process of
The training process of is illustrated in Figure <ref>. Starting with a given data sample time series 𝒳, we generate two masked samples 𝒳^ℳ using the grating masking strategy discussed in Section <ref>. These masked samples are gradually corrupted by introducing Gaussian noise ϵ_t, resulting in 𝒳^ℳ_t. Our objective is to train a model μ_Θ that can effectively denoise 𝒳^ℳ_t and impute the masked values in a step-by-step manner. Given that employs an unconditional diffusion model, the input series 𝒳^in_t is divided into two halves, 𝒳^in = {𝒳_t^ℳ_0, ϵ _t^ℳ_1}. One half contains the corrupted data within the masked regions, denoted as 𝒳^ℳ_0_t, while the other half represents the ground truth forward noise applied to the unmasked regions, denoted as ϵ^ℳ_1_t, serving as a reference. These two data sources are concatenated to form the input 𝒳^in_t, which is then fed into the dedicated transformer-based model for denoising inference, ϵ_Θ ( 𝒳^ℳ_0_t, t |ϵ^ℳ_1_t, p ).
The unconditional imputed diffusion models utilize the same parameterization as Eq. (<ref>), with the only difference lying in the form of μ_Θ, which takes the unmasked forward noise ϵ^ℳ_1_t and the mask index p as additional inputs. We follow the standard training process of DDPM, beginning with sampling Gaussian noise as the masked data at step T, 𝒳_T = √(α̅_T)𝒳_0 + √(1-α̅_T)ϵ, and optimizing ϵ_Θ by minimizing the following loss function:
min_Θℒ(Θ) := min_Θ𝔼_𝒳_0∼ q(𝒳_0), ϵ∼𝒩(0, 𝐈),t||ϵ -ϵ _Θ(𝒳^ℳ_0_t,t |ϵ^ℳ_1_t, p)||^2.
Once trained, we can utilize the diffusion model to infer the masked values given a random Gaussian noise 𝒳_T^ℳ_0, as well as the forward noise sequence added to the unmasked data ϵ^ℳ_1_1:T.
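The assembly of the two input halves can be sketched as follows; the exact tensor layout used in the actual implementation may differ, so this is only an illustration of the idea.

import torch

def build_unconditional_input(x_t, eps_t, mask):
    """Assemble the two halves of the input X^in at denoising step t.

    - Inside the masked regions (mask == 0) we keep the corrupted values x_t.
    - Inside the unmasked regions (mask == 1) we feed the ground-truth forward
      noise eps_t instead of the raw observed values, so anomalous observations
      are never exposed directly to the denoiser.
    The two parts are stacked as channels so the network can tell them apart.
    """
    masked_part = x_t * (1 - mask)      # X_t^{M_0}: corrupted values to be imputed
    noise_part = eps_t * mask           # eps_t^{M_1}: reference forward noise
    return torch.stack([masked_part, noise_part], dim=0)   # shape (2, L, K)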
§.§ Imputation with
Drawing inspiration from the studies conducted in <cit.>, which employ hierarchical structures of transformers <cit.> to capture temporal correlations and interactions among variables, we introduce a specialized denoising architecture designed for MTS imputation. The architecture is illustrated in Figure <ref>. It comprises a series of stacked residual blocks, with each block containing dedicated components that process the feature and temporal dimensions separately.
The model incorporates four distinct groups of input data: (i) the input time series 𝒳_t^in, (ii) diffusion embedding that encodes information related to the current diffusion step t, (iii) masking embedding that encodes the masking group p of the current data, and (iv) complementary information that embeds the dimensional information of time l and feature k. Each of these groups of data is individually processed by convolutional and/or multilayer perceptron (MLP) layers to ensure a consistent dimensionality. The embeddings of the first three inputs are then combined into a single tensor and further processed by a temporal transformer and a spatial transformer layer.
The temporal transformer plays a crucial role in capturing the temporal dependencies within the time series <cit.>. It enables the dynamic weighting of feature values at different time steps and takes into account the masked status of features. The attention mechanism employed in the temporal transformer provides the necessary flexibility for this purpose. Additionally, a 1-layer spatial transformer is employed to capture the interdependencies between different variables at each time step. This spatial transformer allows for adaptive weighting and facilitates interaction between variables. The output of the spatial transformer is combined with the complementary information, creating a residual head for skip connection, and an output head that is connected to the next residual block, as illustrated in Figure <ref>. Both the spatial and temporal transformers play crucial roles in the imputation and anomaly detection tasks, as the feature and temporal dimensions may contribute differently to the predictions <cit.>, which can be learned by the attention mechanism <cit.>. The incorporation of a residual structure <cit.> further enhances the model capacity by facilitating gradient propagation and enabling better prediction performance.
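A rough sketch of one such residual block is given below, using off-the-shelf transformer encoder layers; the hidden sizes, head counts, and the omitted diffusion/mask/side-information embeddings are simplifications for illustration, not the paper's exact configuration.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Temporal attention over timestamps, then spatial attention over variables,
    followed by a skip head and an output head feeding the next block."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.temporal = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.spatial = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.skip_head = nn.Linear(d_model, d_model)
        self.out_head = nn.Linear(d_model, d_model)

    def forward(self, h):
        B, K, L, d = h.shape
        # temporal attention: one sequence of L steps per (sample, variable)
        t = self.temporal(h.reshape(B * K, L, d)).reshape(B, K, L, d)
        # spatial attention: one sequence of K variables per (sample, timestamp)
        s = t.permute(0, 2, 1, 3).reshape(B * L, K, d)
        s = self.spatial(s).reshape(B, L, K, d).permute(0, 2, 1, 3)
        skip = self.skip_head(s)            # residual head for the skip connection
        out = h + self.out_head(s)          # output head connected to the next block
        return out, skip

h = torch.randn(2, 8, 100, 64)              # (batch, K variables, L steps, d_model)
out, skip = ResidualBlock()(h)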
§.§ Ensemble Anomaly Inference
Traditional anomaly detection models typically rely on a single signal, the prediction error, to determine the anomaly label for testing data. However, relying solely on one signal can lead to unrobust predictions, as the prediction error can be subject to stochasticity and affected by various random factors. In order to address this limitation, we leverage a unique advantage of diffusion models. Unlike traditional models that provide a single-shot prediction, imputed diffusion models progressively denoise the masked data over T steps. This results in at least T intermediate outputs, each having the same dimension as the original time series, which is not available in traditional models. Although these intermediate outputs are not fully denoised, they converge towards the same imputation objective and offer different perspectives on the time series modeling. By appropriately utilizing these intermediate outputs, we can uncover the step-by-step reasoning of ImDiffusion and use them as additional signals to enhance the robustness and accuracy of the anomaly detection task.
ImDiffusion utilizes the prediction error at each denoising step t, denoted as ℰ = {E_1, E_2, ⋯, E_T}, as input and ensembles them using a function f(ℰ) to determine the final anomaly labels. E_t denotes the prediction error tensor for the imputed output at denoising step t, and it has the same dimension as the original time series 𝒳. The ensemble anomaly inference algorithm is presented in Algorithm <ref>, and Figure <ref> provides an illustration of the process. At each denoising step, ImDiffusion generates a prediction using the denoising model ϵ_Θ and computes the prediction error E_t with respect to the ground truth time series 𝒳. The set ℰ collects the prediction errors at each denoising step, and an ensemble function f(ℰ) is employed to leverage the errors from all steps and obtain the final voting signal 𝒱 for determining the anomaly label, 𝒱 = f(ℰ).
The design of the ensemble function. utilizes a voting ensemble mechanism <cit.> to strengthen the overall anomaly detection process by aggregating anomaly predictions from each denoising step. At each denoising step t, the anomaly prediction label Y_t is determined using the following equation:
Y_t = 1(E_t ≥τ_t), where τ_t = ∑ E_T /∑ E_t·τ_T.
Here, τ_T represents the upper percentile of imputed errors at the final denoising step T. The rationale behind this design is to utilize the imputed error at the last step as a baseline and use it as an indicator of imputation quality. The rescaling ratio ∑ E_T /∑ E_t measures the imputation quality at each step t. If the ratio is small, it indicates poor imputation quality, and therefore the upper percentile of imputed errors for determining the anomaly label is reduced. In this case, only the label for the timestamp with the highest imputed error and high confidence is retained. Conversely, if the ratio is large, it suggests good imputation quality, and the error threshold for anomaly detection is relaxed. This dynamic adjustment of the threshold allows for adaptability based on the quality of imputation.
Using Eq. (<ref>), we derive the step-wise anomaly predictions Y_t = {y_t, 1, ⋯, y_t, L}, where y_t, l=1 indicates that the data at time step l is predicted as an anomaly using the imputation at diffusion step t, and y_t, l=0 otherwise. To determine the final anomaly prediction at each time step l, we employ a voting mechanism. The voting signal 𝒱_l represents the total number of anomaly votes received at time step l, given by 𝒱_l = ∑_t=1^T y_t, l. If a time step receives more than ξ votes as an anomaly across all denoising steps, it is marked as a final anomaly, denoted as y_l = 1(𝒱_l > ξ). To optimize inference efficiency and ensure correctness, we sample every 3 steps from the last 30 denoising steps for the voting process. This voting mechanism strengthens the framework by utilizing the intermediate imputed outputs as additional signals. This is unique to diffusion models, as they generate predictions progressively.
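The voting procedure can be sketched as follows, assuming the per-step error arrays have already been computed and that τ_T and ξ are supplied by the user; the toy values are purely illustrative.

import numpy as np

def ensemble_vote(errors, tau_T, xi):
    """Ensemble anomaly inference over the sampled denoising steps.

    `errors` is a list of per-timestamp imputation-error arrays E_t, one per
    sampled denoising step (the last entry is E_T).  Each step gets its own
    threshold tau_t = (sum(E_T) / sum(E_t)) * tau_T, votes are accumulated per
    timestamp, and a point is an anomaly if it collects more than `xi` votes.
    """
    E_T = errors[-1]
    votes = np.zeros_like(E_T, dtype=int)
    for E_t in errors:
        tau_t = (E_T.sum() / E_t.sum()) * tau_T    # rescaled per-step threshold
        votes += (E_t >= tau_t).astype(int)
    return (votes > xi).astype(int), votes

# toy example: 10 sampled denoising steps, a series of length 5
rng = np.random.default_rng(1)
errs = [rng.random(5) for _ in range(10)]
labels, votes = ensemble_vote(errs, tau_T=0.8, xi=7)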
§ OFFLINE EVALUATION
We conducted a comprehensive offline evaluation of the for MTS anomaly detection. The evaluation aimed to address the following research questions (RQs):
* RQ1: How does perform compare to state-of-the-art methods in MTS anomaly detection?
* RQ2: How effective are each specific design in ?
* RQ3: What insights can be gained from each mechanism employed in ?
Implementation. The ImDiffusion framework is implemented using the framework of <cit.> and trained on a GPU cluster comprising multiple NVIDIA GTX 1080 Ti, RTX 2080 Ti, and RTX 3090 accelerators[The code for reproducing the results of this paper is available at <https://github.com/17000cyh/IMDiffusion.git>.]. Table <ref> presents the key hyperparameters used in ImDiffusion. The detection thresholds τ for the MSL dataset vary across different subsets, while a fixed value of 0.02 is employed for the other datasets. The voting threshold ξ is dataset-dependent and is specified in the provided code link. As for the baseline models, their hyperparameters and detection thresholds are set based on the information provided in their respective original papers. In cases where these details were not explicitly mentioned, a grid search was conducted to determine the optimal values.
§.§ Datasets
We test the performance of using 6 publicly available MTS anomaly detection datasets, namely SMD <cit.>, PSM <cit.>, MSL <cit.>, SMAP <cit.>, SWaT <cit.> and
GCP <cit.>. We summarize their statistics in Table <ref> and detail them below.
* Server Machine Dataset (SMD) contains stacked traces of resource utilizations from a compute cluster in a large Internet company. It spans a duration of five weeks and includes data from 28 machines (subsets) <cit.>.
* Pooled Server Metrics (PSM) dataset, proposed by eBay, comprises 26-dimensional data captured internally from application server nodes <cit.>.
* Mars Science Laboratory (MSL) dataset consists of telemetry information and sensor/actuator data from the Mars rover, including soil samples. It is specifically collected for research purposes related to detecting anomalies <cit.>.
* Soil Moisture Active Passive (SMAP) dataset is obtained from the Mars rover by NASA and includes telemetry and soil. It is also used for detecting anomalies in the rover's operations <cit.>.
* Secure Water Treatment (SWaT) dataset is generated from a testbed and comprises sensor values and actuator operations. It covers both normal operation and scenarios with attack scenarios, spanning seven days and four days respectively <cit.>.
* Global Content Platform (GCP) comprises multiple performance metrics collected from 30 online service systems over a span of seven weeks. The data is sampled at a frequency of once every five minutes <cit.>.
In order to ensure the completeness of the experiment, we trained and evaluated the on all subsets of the aforementioned dataset, rather than selectively choosing non-trivial sequences as done in <cit.>. This may lead to differences in the evaluation metrics compared to the results reported in the original paper.
§.§ Baselines & Evaluation Metrics
We evaluate the performance of by comparing it with 10 state-of-the-art MTS anomaly detection models:
(i) Isolation forest (IForest) <cit.> separates the anomaly data point with others for detection.
(ii) BeatGAN <cit.> utilizes generative adversarial networks (GANs) <cit.> to reconstruct time series and detect anomalies.
(iii) LSTM-AD <cit.> employs LSTM <cit.> to forecast future values and uses the prediction error as an indicator of anomalies.
(iv) InterFusion <cit.> captures the interaction between temporal information and features to effectively identify inter-metric anomalies.
(v) OmniAnomaly <cit.> combines GRU <cit.> and VAE <cit.> to learn robust representations of time series and utilizes the Peaks-Over-Threshold (POT) <cit.> method for threshold selection.
(vi) GDN <cit.> introduces graph neural networks into anomaly detection and leverages meta-learning methods to combine old and new knowledge for anomaly identification.
(vii) MAD-GAN <cit.> employs GANs <cit.> to recognize anomalies by reconstructing testing samples from the latent space.
(viii) MTAD-GAT <cit.> utilizes Graph Attention Network (GAT) <cit.> to model MTS and incorporates forecasting-based and reconstruction-based models to improve representation learning <cit.>.
(ix) MSCRED <cit.> uses ConvLSTM networks <cit.> to capture correlations among MTS and operates as an anomaly detector.
(x) TranAD <cit.> leverages transformer models to perform anomaly inference by considering the broader temporal trends in the data.
We benchmark against these models to evaluate its performance in MTS anomaly detection.
In line with previous studies <cit.>, we evaluate the anomaly detection accuracy of both the baseline models and our proposed ImDiffusion using precision, recall, and F1 score. It is important to note that we conducted 6 independent runs for each baseline model and ImDiffusion, and report the average performance. Additionally, we provide the standard deviation of the F1 score (F1-std) across the 6 runs to assess the stability and robustness of all methods examined in this investigation.
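For instance, the averaging over runs could be computed with scikit-learn as in the sketch below; the function name and inputs are hypothetical, while the metric call follows scikit-learn's standard API.

import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def summarize_runs(y_true, run_predictions):
    """Average precision/recall/F1 over several independent runs and report
    the standard deviation of the F1 score (F1-std)."""
    rows = [precision_recall_fscore_support(y_true, y_pred, average="binary")[:3]
            for y_pred in run_predictions]
    rows = np.array(rows)                      # shape (n_runs, 3)
    mean_p, mean_r, mean_f1 = rows.mean(axis=0)
    f1_std = rows[:, 2].std()
    return mean_p, mean_r, mean_f1, f1_std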
We also utilize the R-AUC-ROC evaluation metric introduced in <cit.> to provide a threshold-independent accuracy assessment tailored to range-based anomalies. This metric mitigate the bias introduced by threshold selections and offers a different perspective on the performance of anomaly detection methods by using continuous buffer regions.
Further, we utilize the Average Sequence Detection Delay (ADD) metric proposed in <cit.> to evaluate the speed and timeliness of anomaly detection provided by each approach. The ADD metric is defined as follows:
ADD = 1/S∑_i=1^S (𝒯_i - ϱ_i),
where ϱ_i represents the start time of anomalous event i, 𝒯_i ≥ϱ_i denotes the corresponding detection delay time by the anomaly detector, and S indicates the total number of anomalous events. A smaller value of ADD indicates a more timely detection of anomalies, which is crucial in real-world detection scenarios.
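A direct implementation of this definition, assuming the per-event start times and first detection times have already been extracted, is straightforward; the numbers below are illustrative only.

def average_detection_delay(event_starts, detection_times):
    """ADD = (1/S) * sum_i (T_i - rho_i) over the S anomalous events.

    `event_starts` are the ground-truth start times rho_i and
    `detection_times` the corresponding first detection times T_i >= rho_i.
    """
    delays = [t - rho for rho, t in zip(event_starts, detection_times)]
    return sum(delays) / len(delays)

# e.g. three events starting at 100, 400, 900 detected at 105, 430, 902
print(average_detection_delay([100, 400, 900], [105, 430, 902]))  # -> 12.33...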
§.§ Anomaly Detection Performance (RQ1)
§.§.§ Accuracy Performance
We first present the precision, recall, F1 and R-AUC-PR performance of ImDiffusion and the baseline methods in Table <ref> for each of the six datasets considered in this study. Please note that all the results presented in the table are the average values obtained from 6 individual runs, which allows us to assess the robustness of each detector. Additionally, the F1-std. column reports the standard deviation of the F1 scores across these 6 runs, providing an indication of their variability. Table <ref> further displays the average performance across all six datasets. Methods exhibiting the best performance are highlighted in bold. Notably, ImDiffusion overall demonstrates exceptional performance in terms of all evaluation metrics, namely precision (92.98%), recall (93.01%), F1 score (92.84%) and R-AUC-PR (29.86%). It achieves the highest average scores across the six datasets, surpassing the performance of the other baseline methods. In particular, ImDiffusion exhibits at least a 2.4% increase in precision, a 4.67% increase in recall, a 3.97% increase in F1 score and a 4.85% increase in R-AUC-PR compared to the other baselines. These results demonstrate the effectiveness of the imputation approach and diffusion models employed in ImDiffusion.
Furthermore, despite the fact that diffusion models require sampling at every denoising step, introducing randomness, the F1-std (0.0083) calculated from 6 independent runs remains relatively small compared to other baseline methods, ranking second lowest among all approaches. This indicates the remarkable robustness of . This can be attributed to two key design elements incorporated in : (i) the imputation methods leverage neighboring information for self-supervised modeling, reducing prediction uncertainty, and (ii) the dedicated ensemble mechanism aggregates votes for step-wise anomaly inference, further reducing prediction variance. We provide a more detailed ablation study on these in Section <ref> and <ref>. Additionally, the sampling involved in the denoising process of helps capture the stochasticity in time series, ultimately enhancing overall anomaly detection performance.
Upon closer examination of the dataset-specific performance in Table <ref>, we observe that achieves the highest F1 score in 5 out of the 6 datasets considered in this study. The exception is the MSL dataset, where InterFusion and TranAD outperform . This can be attributed to the fact that these two baselines are specifically designed to capture the internal correlations across different dimensions, which are the prominent characteristics of the MSL dataset. In contrast, takes a different approach by leveraging the exceptional self-supervised learning ability of diffusion models to capture correlations and provide a more general solution across various datasets. This enables to achieve competitive performance in most datasets and surpass other baselines in terms of F1 score. Furthermore, it is worth noting that also achieves the highest R-AUC-PR in 4 out of the 6 datasets considered in this study. The only exceptions are SMAP and SWaT, where it is surpassed by several baselines. This highlights the robustness of to threshold selection and its consistent ability to deliver accurate predictions. Moreover, the superior performance of in detecting range anomalies, as assessed by the R-AUC-PR metric, further underscores its suitability for such tasks.
Notably, the performance improvements achieved by are particularly remarkable in the SMD and PSM datasets, where it outperforms other baselines by at least 6.8% and 5.9% in terms of F1 score and 6.21% and 2.19% in terms of R-AUC-PR, respectively. These two datasets exhibit small distribution deviations between anomalous and normal data <cit.>, and 's unconditional imputation design effectively amplifies the gap in imputed error between normal and abnormal data, contributing to its superior performance. More detailed ablation analysis on this aspect is shown in Section <ref>. Furthermore, consistently outperforms other baselines with low F1-std. in the SWaT, SMAP, and GCP datasets. This demonstrates the remarkable accuracy and robustness of , highlighting the effectiveness of combining imputation and diffusion models for MTS anomaly detection. Moreover, these validate the superiority of over forecasting and reconstruction baselines.
§.§.§ Timeliness Performance
In Table <ref>, we present the ADD (mean±std.) performance of all approaches across all datasets over 6 runs, along with their average values. Recall that the ADD metric measures the timeliness of anomaly detection, with lower values indicating faster detection. Observe that demonstrates remarkable performance in this aspect as well. Overall, achieves the lowest average ADD values (104) with low variance, surpassing other baselines by at least 39.9%. Upon closer examination of dataset-specific performance, it is observed that consistently outperforms other baselines in 4 out of 6 datasets. This indicates that is highly sensitive to abnormal points and can capture them at the earliest detection timing. This superior performance can be attributed to the grating masking design employed by , which enables to partially envision the future values of the time series in the masked regions. A more detailed ablation analysis on the masking strategy is presented in Section <ref>. The ADD metric holds significant importance in industrial practice, as early detection of anomalies allows for prompt mitigation of failures <cit.>, potentially preventing more severe consequences. The advantage of in achieving faster anomaly detection makes it a suitable choice for real-world deployment in systems with high reliability requirements.
§.§ Ablation Analysis (RQ2, RQ3)
Next, we conduct a comprehensive ablation analysis to evaluate the effectiveness of each design choice in , shedding light on how these design choices contribute to enhancing the anomaly detection performance. The aggregated results specific to each dataset are presented in Table <ref>, while Table <ref> showcases the average results across all datasets. The reported results are the average of 6 independent runs. Note that in the tables, “” represents the combination of the following designs: Imputation, Ensembling, Unconditional, and Grating Masking.
§.§.§ Imputation vs. Forecasting vs. Reconstruction
First, we compare the anomaly detection performance of different MTS modeling approaches, namely imputation, forecasting, and reconstruction. In the case of forecasting and reconstruction, we adopt the same configuration as , with the only distinction being the forecasting method that predicts future values given historical observations, while the reconstruction method corrupts all values with noise vectors and reconstructs them. Overall, the framework, which utilizes the imputation method, achieves the highest performance in terms of accuracy and timeliness on average, outperforming other MTS modeling methods by at least 3.18% on F1 score, 1.78% on R-AUC-PR and 26.2% ADD. Comparing forecasting and reconstruction methods, we observe that forecasting outperforms reconstruction, indicating that incorporating historical information leads to improved performance of the self-supervised model, thereby enhancing the anomaly detection capability. Moreover, as shown in Table <ref>, consistently achieves the highest F1 score and ADD in 5 out of 6 datasets, and highest R-AUC-PR on 4 out of 6 datasets, highlighting the accuracy and robustness of the imputation approach.
The performance improvement can be attributed to the superior self-supervised modeling quality achieved through the imputation approach. Figure <ref> illustrates the predicted error of each modeling approach across all datasets, along with their average values. A lower prediction error signifies a more accurate modeling of MTS, which in turn enhances the performance of anomaly detection. Notably, the imputation approach consistently exhibits the lowest predicted error across all datasets, significantly outperforming the forecasting and reconstruction approaches. These results indicate the imputation approach's superior self-supervised modeling capability. They further validate the notion that enhancing the self-supervised modeling ability contributes to improved anomaly detection performance for MTS data.
§.§.§ Ensembling vs. Non-ensembling
Next, we investigate the impact of the ensembling voting mechanism on the anomaly detection performance. The non-ensembling approach solely relies on the final denoised results and applies thresholding on the imputed error for anomaly detection. In comparison, on average achieves a 0.73% higher F1 score, 6.06% higher R-AUC-PR and a lower 35.8% ADD compared to the non-ensembling approaches. This suggests that the utilization of the ensembling approach enhances both the accuracy and timeliness performance of anomaly detection, particularly for ranged anomalies. Furthermore, consistently outperforms its counterpart across all datasets. Upon closer examination of Table <ref>, we observe that exhibits a greater advantage in terms of precision over recall. This indicates that the anomalies detected through ensembling are more likely to be true anomalies, thereby reducing the false positive rate.
Figure <ref> illustrates an example on the SMD dataset, demonstrating how the ensembling mechanism improves the anomaly detection performance. The figure consists of 11 subplots, where the first 10 depicts the time series, imputation prediction, predicted error, and anomaly prediction for each of the 10 denoising steps used in the ensembling voting. The final subplot showcases the aggregated voting results for anomalies. Several key insights can be derived from the figure. Firstly, the imputation results progressively improve with each denoising step, aligning with our expectations as diffusion models perform step-by-step imputation. Secondly, relying solely on the final step can lead to false positive predictions (blue shaded area). However, the ensembling voting mechanism plays a crucial role in correcting these false positives. From step 23 to 32, the false positive data receives fewer votes compared to the true positive region (red shaded area), causing them to fall below the final voting threshold (8 votes) and be eliminated from the ensemble's predicted anomalies. This case study provides a clear illustration of how ensembling can enhance accuracy and robustness, showcasing the unique capabilities offered by diffusion models.
§.§.§ Unconditional vs. Conditional Diffusion Models
We now shift our focus to evaluating the effectiveness of the design of unconditional diffusion models described in Section <ref>. In Table <ref>, we observe that , which utilizes unconditional diffusion models, achieves superior accuracy and timeliness performance compared to its conditional counterpart, with a 2.1% higher F1 score and a 21.1% lower ADD. This gain is particularly pronounced in the SMAP dataset, which comprises shorter time sequences. On the contrary, the R-AUC-PR values obtained from both approaches are comparable, with the conditional method exhibiting a slight advantage over the unconditional version.
The improvement can be attributed to the unconditional approach, which expands the predicted error between normal and abnormal data. Figure <ref> showcases the overall predicted error, error on normal data, error on abnormal data, and the difference (abnormal data - normal data) averaged for all datasets. Note that larger difference in predicted error between abnormal and normal data generally indicates a more distinct classification boundary for thresholding approaches. This means that anomalies are more clearly separated from normal points based on their predicted error values. Notably, the unconditional approach generally yields higher overall predicted error. This aligns with our expectations, as the conditional approach makes predictions based on the ground truth time series values, providing more direct and focused guidance compared to the unconditional approach, which predicts solely based on the forward noise in the unmasked regions. However, the error difference between abnormal and normal data is amplified by the unconditional method, as illustrated in the figure. This further confirms the effectiveness of the unconditional approach, as it establishes a clearer error decision boundary for thresholding, enabling better discrimination of anomalies.
§.§.§ Grating Masking vs. Random Masking
Finally, we compare the performance of the grating masking and random masking designs as introduced in Section <ref>. Interestingly, on average, the two approaches achieve comparable F1 scores, with random masking slightly outperforming grating masking by 0.4%. The accuracy performance of the two masking designs is also quite similar across datasets. However, it is worth noting that the grating masking design consistently outperforms the random masking design in terms of R-AUC-PR. On average, the grating masking design achieves an 8.79% higher score than the random masking design, outperforming its counterpart on 4 out of 6 datasets, namely SMD, PSM, SWaT, and MSL. This result suggests that exhibits higher accuracy in detecting ranged anomalies. This is particularly relevant in real-world scenarios where such ranged anomalies occur frequently. In addition, we observe that grating masking exhibits a significant advantage in terms of ADD, with a gain of 18.4%. This can be attributed to the value envisioning property in the windowed masked regions of the grating design, which is absent in random masking. Therefore, grating masking is more suitable for industrial applications where timely detection of anomalies is crucial to ensure system reliability.
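For illustration, the two masking strategies can be sketched as follows; the window length, stride, and masking ratio below are hypothetical values rather than the settings used in our experiments.

import numpy as np

def grating_mask(length, window=8, stride=16):
    # Alternate masked windows with observed gaps (1 = masked), as in a grating pattern.
    mask = np.zeros(length, dtype=int)
    for start in range(0, length, stride):
        mask[start:start + window] = 1
    return mask

def random_mask(length, ratio=0.5, seed=0):
    # Mask each position independently with probability `ratio`.
    rng = np.random.default_rng(seed)
    return (rng.random(length) < ratio).astype(int)

print(grating_mask(48))
print(random_mask(48))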
§ PRODUCTION IMPACT AND EFFICIENCY
The proposed has been integrated as a critical component within a large-scale email delivery microservice system at Microsoft. This system consists of more than 600 microservices distributed across 100 datacenters worldwide, generating billions of trace data points on a daily basis. serves as a latency monitor for email delivery, with the primary objective of detecting any delay regression in each microservice, which may indicate the occurrence of an incident. The online latency data for each microservice are sampled at a frequency of every 30 seconds. In order to assess the performance of , we deployed it online to detect anomalies in the latency evolution of multiple microservices over a period of 4 months. We compared the results obtained by with a legacy deep learning-based MTS anomaly detector, which has been in operation for years.
Table <ref> presents the online improvements achieved by compared to the legacy detector over a period of 4 months[To comply with confidentiality requirements, the actual numbers for all metrics are omitted, and only the relative improvements are reported.]. The evaluation of efficiency was conducted on containers equipped with Intel(R) Xeon(R) CPU E5-2640 v4 processors featuring 10 cores. Observe that the replacement of the legacy detector with has resulted in significant enhancements in anomaly detection accuracy and timeliness, as evidenced by the substantial improvements across all evaluation metrics. Specifically, compared to the previous online solution, exhibits a performance improvement of 11.4% in terms of F1 score, 14.4% improvement in terms of R-AUC-PR, and 30.2% reduction on ADD.
Despite the requirement of multiple inferences to obtain the final results, the online efficiency of remains well within an acceptable range. Considering that the latency data are sampled every 30 seconds, performing inference at a rate of 5.8 data points per second is more than sufficient to meet the online requirements. The notable performance improvements achieved by have made a significant impact on the Microsoft email delivery system, as they have led to considerable time savings in incident detection (TTD) <cit.>, reduced the number of false alarms triggered by the legacy approach, and ultimately enhanced the system's reliability.
§ CONCLUSION
This paper presents , a novel framework that combines time series imputation and diffusion models to achieve accurate and robust anomaly detection in MTS data. By integrating the imputation method with a grating masking strategy, the proposed approach facilitates more precise self-supervised modeling of the intricate temporal and interweaving correlations that are characteristic of MTS data, which in turn enhances the performance of anomaly detection. Moreover, employs dedicated diffusion models for imputation, effectively capturing the stochastic nature of time series data. The framework also leverages multi-step denoising outputs unique to diffusion models to construct an ensemble voting mechanism, further enhancing the accuracy and robustness of anomaly detection. Notably, is the first to employ time series imputation for anomaly detection and to utilize diffusion models in this context. Extensive experiments on public datasets demonstrate that outperforms state-of-the-art baselines in terms of accuracy and timeliness. Importantly, has been deployed in real production environments within Microsoft's email delivery system, serving as a core latency anomaly detector and significantly improving system reliability.
|
http://arxiv.org/abs/2307.00478v1
|
20230702052432
|
Harvesting energy driven by Comisso-Asenjo process from Kerr-MOG black holes
|
[
"Mohsen Khodadi",
"David F. Mota",
"Ahmad Sheykhi"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"gr-qc",
"hep-ph",
"hep-th"
] | |
http://arxiv.org/abs/2307.05329v1
|
20230704181658
|
Decoding the Popularity of TV Series: A Network Analysis Perspective
|
[
"Melody Yu"
] |
cs.SI
|
[
"cs.SI",
"cs.CL",
"cs.NI"
] |
Decoding the Popularity of TV Series: A Network Analysis Perspective
Melody Yu
Sage Hill School
California, USA
August 1, 2023
======================================================================
In this paper, we analyze the character networks extracted from three popular television series and explore the relationship between a TV show episode’s character network metrics and its review from IMDB. Character networks are graphs created from the plot of a TV show that represents the interactions of characters in scenes, indicating the presence of a connection between them. We calculate various network metrics for each episode, such as node degree and graph density, and use these metrics to explore the potential relationship between network metrics and TV series reviews from IMDB. Our results show that certain network metrics of character interactions in episodes have a strong correlation with the review score of TV series. Our research aims to provide more quantitative information that can help TV producers understand how to adjust the character dynamics of future episodes to appeal to their audience. By understanding the impact of character interactions on audience engagement and enjoyment, producers can make informed decisions about the development of their shows.
computational linguistics, character networks, network analysis, TV series
§ INTRODUCTION
People only have so much time. When we come home from work or school, we have our priorities; we do chores, send emails, or catch up on our favorite TV shows. For producers and TV channels, this time is imperative. The number of viewers on a show can vastly differ based on the time a show airs, and there are only so many slots available; ideal “prime times” like Tuesday at 9 pm can only be occupied by so many shows. Additionally, viewers only have so many shows they can be invested in at the same time. For the 122.4 million households that watch TV, this poses an interesting question. What makes a show good; more so, what makes a show continue to be good? Unlike movies, TV shows consistently need to produce episode after episode, season after season. Sometimes viewership drops, and your favorite 9 o’clock show is canceled and taken off-air, or disappears from your favorite streaming service. But why? For the executives producing a show, the answer is simple: ratings. High ratings mean more seasons, and low ratings mean the show ends. But there’s more to it than that. Quality fluctuates, and many shows are notorious for their lackluster attempts at good content in later seasons. What causes a great show, loved and revered in seasons one through three, to suddenly become an insult to its fans, its ratings plummeting in the next four seasons? To answer this question, we need to look at what makes a show good in the first place. Ratings are determined by viewer satisfaction, so what did I see in a show that made me want to invest my time in the first place? Is that related to why I don’t enjoy it now?
The answer could be something that the viewers themselves might not even know. In this paper, we attempt to understand the psychology behind these ratings through character networks, and the different interactions between characters. More specifically, we try to find a correlation between the reviews of TV show episodes and the character interactions. We use different metrics of network analysis to quantitatively answer the question: do character interactions affect ratings in TV series?
Network analysis is described as the “set of integrated techniques to depict relations among actors and to analyze the social structures that emerge from the recurrence of these relations”. We use network analysis to analyze graphs. A graph is a collection of nodes (vertices) along with identified pairs of nodes connected with edges. This paper applies network analysis to specific graphs called character networks, where character networks are defined as graphs that represent the interactions between characters in a fictional setting. Different methodologies to observe the behavior of a network and perform network analysis are known as network metrics. We can thus transform our previous question into: for a given TV show, is there a connection between a network metric and the popularity of an episode?
§ DATASET
Our dataset is composed of character networks created by Bost et al. (2016) from three popular TV series. To capture a dynamic view of the character interactions, every episode's character network is constructed from multiple successive static character networks.
A single-character network, also described as a segment graph or temporal graph, is defined as the interactions between characters over a period of ten scenes. Each episode contains multiple scenes, which means it can be split into multiple segment graphs. For the TV series Game of Thrones episode 1, 31 segment graphs are created from 310 scenes. Each graph of this episode contains between 14 and 17 nodes and 15 to 18 edges.
A node is defined as any character that speaks within this ten-scene window. The edges are thus defined as any conversational interaction between these nodes, where an undirected edge E_i, j is drawn between any two characters N_i and N_j who speak directly to each other. Every edge E_i, j additionally carries a weight W_i, j representing the total conversation time between N_i and N_j, expressed in seconds. Each character network may contain many groups of nodes that are connected to each other, but not necessarily the rest of the graph. Each group is denoted as a cluster.
Figure <ref> presents the character network episode graph from the TV show Game of Thrones season 1, episode 1. This character network is an undirected segment graph composed of four clusters. The first cluster, situated in the top left corner, describes a fully connected cluster of three characters. N_1 (Jaime Lannister) and N_2 (Tyrion Lannister) speak to each other, creating an undirected edge. N_2 is also directly connected to N_3 (Ros). The edges E_1, 2 and E_2, 3 have a similar weight, denoted by a similar thickness on the diagram. However, the edge between N_1 and N_3 has a smaller weight, shown as a thinner line. Weights are compared against all other clusters, with the average weight in some clusters shown as higher than in other clusters. For example, the average weight between the cluster containing N_1, N_2, and N_3 is higher than the cluster in the lower right corner of the diagram. This cluster includes N_4 (Jon Snow) speaking to N_5 (Theon Greyjoy). The edge E_4, 5 is similar in weight to E_2, 3, showing that the pairs of characters spoke for similar periods of time during these ten scenes. N_5 additionally spoke to N_6 (Robb Stark), connecting three nodes in a chain. N_6 (Robb Stark) and N_4 (Jon Snow) do not have a direct edge connecting them, as they did not directly speak to each other.
Our dataset is extracted from three popular TV series: Breaking Bad (seasons 1-2), Game of Thrones (seasons 1-5), and House of Cards (seasons 1-2).
We additionally use the reviews of these shows from IMDb, an online database of information related to films and television series. Each review shows the average rating of an episode, represented by a decimal number between 1 and 10 with higher numbers representing more favorable or more liked episodes. IMDb's reviews are formed by aggregating individual reviews from registered users. Users are allowed to cast one vote per episode.
It is important to note that IMDb ratings for TV show episodes reflect the overall quality of the episode, not just the character interactions and plot. Other factors such as special effects, guest stars, and cinematography may also influence the score given by viewers. It is possible that these additional elements contribute to the overall enjoyment of the episode and are therefore included in the rating.
Figure <ref> shows the IMDb reviews of Season 1 of Game of Thrones. The x-axis is the name of the episode, while the y-axis is the IMDb review of the episode.
§ METHODS
We seek to establish a relationship between IMDb ratings and network indicators for a given TV series. Each temporal graph corresponds to ten scenes from an episode, whereas each IMDb review corresponds to one episode. Thus, to study the graph metrics for each episode, we first aggregate all segment graphs in one episode into one single weighted episode graph. Then, we study metrics on the aggregated episode graph.
§.§ Segment Graph Aggregation
The aggregation process works by iterating through the edges of all segment graphs for a given episode. Edges connecting the same pair of nodes are aggregated by summing their weights across all graphs. That is to say, if our episode contains 10 temporal graphs with E_i,j existing as an edge in graphs 1, 4, 7, and 10, the weight W_i,j of edge E_i,j in our episode graph will be
W_i,j = W_i,j^1 + W_i,j^4 + W_i,j^7 + W_i,j^10
This weight reflects the total conversation time in seconds between the two characters in the episode. Figure <ref> shows the aggregated character network for Game of Thrones season 1, episode 1.
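A possible NetworkX sketch of this aggregation step is given below; the graph construction details are our own illustration rather than the exact pipeline.

import networkx as nx

def aggregate_segment_graphs(segment_graphs):
    # Sum edge weights (conversation seconds) across an episode's segment graphs.
    episode = nx.Graph()
    for g in segment_graphs:
        for u, v, data in g.edges(data=True):
            w = data.get("weight", 0.0)
            if episode.has_edge(u, v):
                episode[u][v]["weight"] += w
            else:
                episode.add_edge(u, v, weight=w)
    return episode

# Toy example with two segment graphs.
g1 = nx.Graph(); g1.add_edge("Jon", "Robb", weight=12.0)
g2 = nx.Graph(); g2.add_edge("Jon", "Robb", weight=5.0); g2.add_edge("Jon", "Theon", weight=3.0)
episode_graph = aggregate_segment_graphs([g1, g2])
print(episode_graph["Jon"]["Robb"]["weight"])  # 17.0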
§.§ Network Metrics
§.§.§ Active Nodes
Active nodes are the total number of nodes in a graph with at least one edge connecting them to another node. The number of active nodes represents the number of characters that speak during an episode. We use the number of active nodes in an episode graph to describe the complexity of the episode plot in this study.
§.§.§ Network Density
A graph's density is the ratio between the number of actual edges that exist in a graph to the maximum number of edges that the graph can have. It indicates how many of a graph's potential connections are real connections. We define density as the following, where n represents the number of nodes and m represents the number of edges.
D(G) = TotalEdges/MaxPossibleEdges = 2m/(n(n - 1))
Figure <ref> shows the time series for the number of active nodes and the network density for different segment graphs from Game of Thrones episode 1.
The density of the graph increases as more characters converse with one another. Characters in sparse graphs are often still connected, but the conversations tend to take place between pairs of characters rather than involving many other characters. We use network density to measure the level of character interaction in each TV episode.
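As a minimal sketch, both quantities can be computed from an aggregated episode graph with NetworkX (the toy graph below is illustrative):

import networkx as nx

def active_nodes(G):
    # Number of characters with at least one conversation edge.
    return sum(1 for n in G if G.degree(n) > 0)

G = nx.Graph()
G.add_edge("A", "B", weight=10.0)
G.add_edge("B", "C", weight=4.0)
print(active_nodes(G))     # 3 speaking characters
print(nx.density(G))       # 2m / (n(n-1))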
§.§.§ Node Strength
A node’s strength is defined as the sum of the strengths, or weights, of the node’s edges. Since the edge weight in our graph indicates the time spent conversing between two characters, the strength of a character is the total amount of time spent conversing with all other characters.
Figure <ref> shows the time series of characters with top 5 highest graph node strength from the Game of Thrones episode 1 graph.
The maximum value of node strengths describes the total conversation time of the character who talked most with other characters during this episode. We use this metric to describe the exposure of the most active character in the episode.
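In NetworkX terms, node strength is the weighted degree; the following sketch computes the summary statistics (maximum and standard deviation) on a toy graph:

import networkx as nx
import numpy as np

G = nx.Graph()
G.add_edge("A", "B", weight=10.0)
G.add_edge("B", "C", weight=4.0)

strengths = dict(G.degree(weight="weight"))   # total conversation seconds per character
values = np.array(list(strengths.values()))
print(values.max(), values.std())             # Strength_max, Strength_std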
§.§.§ Network Efficiency
The efficiency of a graph refers to how efficiently nodes can reach each other. It is also called communication efficiency. The efficiency of a pair of nodes in a graph is inversely proportional to their shortest distances: to find the efficiency of nodes N_i and N_j, we would define it as 1/D_i,j where D_i,j is defined as the shortest distance between nodes N_i and N_j. The global efficiency of a graph is then defined as the average over the pairwise efficiencies.
E(G) = 1/(n(n - 1)) Σ_i≠ j 1/D_i,j
In this study, the TV episode graphs are usually separated into separate subgraphs, often without any connections between these subgraphs. To more accurately represent these subgraphs, we use local efficiency, calculated from the average value of a subgraph’s global efficiency.
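One possible realization, assuming unweighted shortest-path distances (the distance convention is an assumption of this sketch), is to average NetworkX's global efficiency over the connected components:

import networkx as nx

def component_efficiency(G):
    # Average global efficiency over the connected components of G.
    comps = [G.subgraph(c) for c in nx.connected_components(G)]
    vals = [nx.global_efficiency(sg) for sg in comps if sg.number_of_nodes() > 1]
    return sum(vals) / len(vals) if vals else 0.0

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("D", "E")])   # two disconnected clusters
print(component_efficiency(G))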
§.§.§ Network Transitivity
Network transitivity is the overall probability for the network to have interconnected adjacent nodes, revealing the existence of tightly connected communities. It refers to the extent to which the relation connecting two adjacent nodes in the network is transitive. The network transitivity is the fraction of all possible triangles present in G. It is computed by dividing three times the number of triangles in the graph by the total number of "triads" (two edges with a shared vertex):
T(G) = 3 × (# of triangles)/(# of triads)
Network transitivity is widely used to examine the level of clustering in social network analysis. In the content of character networks, high transitivity means that the network contains communities or groups of characters that are closely interacting with each other.
We use network transitivity to measure the extent of the complex relationship between characters, and their relation to the plot.
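A toy NetworkX example (illustrative only):

import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
# One triangle (A, B, C); triads are pairs of edges sharing a vertex.
print(nx.transitivity(G))   # 3 * triangles / triads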
§.§.§ Degree Centrality
In a connected graph, centrality is used to measure the importance of various nodes in a graph. In this study, we use various centrality metrics to measure the conversational interaction patterns of different characters in each TV episode.
Degree centrality is the simplest measure of centrality. The degree of a node is defined as the number of edges or connections that a node has to other nodes. In the context of character networks in TV shows, the degree of a character represents the number of other characters that they interact with in a given episode. Figure 6 illustrates the time series of characters with the top 5 highest degrees in the character network from the first episode of Game of Thrones.
A node with a higher degree than the rest of the nodes may be considered a central or pivotal character, as they have a large number of interactions with other characters. As our episode graph is aggregated from multiple segment graphs, the large number of interactions can be spread over different scenes, or concentrated in a few scenes involving many characters.
In this research, we will use the distribution of node degrees, including the maximum value and standard deviation, to describe the patterns of conversational interactions among the characters in a given episode.
§.§.§ Closeness Centrality and Harmonic Centrality
The closeness centrality of a node in a connected graph, which is determined as the reciprocal of the sum of the shortest distances between the node and all other nodes in the graph, is a measure of centrality in a network. A node with a higher closeness centrality value is closer to all other nodes.
C(u) = (n - 1)/(Σ_v≠ u D_u,v)
However, in our study, the TV episode graph is not necessarily a connected graph, as there can be some characters forming subgroups that do not interact with characters outside their subgroup. Therefore, we use harmonic centrality instead of closeness centrality to allow for disconnected networks in our research. The harmonic centrality of a node u is defined as the sum of reciprocals of the shortest path distances from all other nodes to node u. Two nodes that are unreachable from each other have a distance of infinity.
H(u) = Σ_v≠ u 1/D_v,u
In our study, we use the harmonic centrality distribution (maximum value, standard deviation) to measure the extent to which the plot of the TV episode is concentrated on a single actor or a group of actors.
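A sketch of these statistics with NetworkX, which assigns zero contribution to unreachable pairs, is shown below (toy graph for illustration):

import networkx as nx
import numpy as np

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("D", "E")])   # disconnected subgroups are allowed

h = nx.harmonic_centrality(G)          # sum of 1/distance over reachable nodes
values = np.array(list(h.values()))
print(values.max(), values.std())      # Harmonic_max, Harmonic_std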
§.§.§ Eigenvector Centrality
While degree centrality measures the number of connections a node has, it does not take into account the influence of a node's neighbors on its own importance. To address this, we can use eigenvector centrality, which considers the importance of a node's neighbors in determining the node's overall importance.
A node with a high eigenvector score is connected to many other nodes with high scores, indicating that it is connected to influential characters. In the context of character networks in TV shows, a character with a high eigenvector centrality is likely to be connected to characters who are also considered influential within the network. This can be useful in understanding the importance of a character within the context of the overall plot and character relationships.
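A minimal sketch on a toy weighted graph follows; whether edge weights enter this computation is an assumption of the sketch rather than a statement about our pipeline:

import networkx as nx
import numpy as np

G = nx.Graph()
G.add_edge("A", "B", weight=10.0)
G.add_edge("B", "C", weight=4.0)
G.add_edge("A", "C", weight=2.0)

ev = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
values = np.array(list(ev.values()))
print(values.max(), values.std())      # Eigen_max, Eigen_std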
§ RESULTS
In this study, we calculated network metrics for three popular TV series: Game of Thrones, House of Cards, and Breaking Bad.
§.§ Network Metrics
We studied 22-26 episodes from the first three seasons of each show and calculated various network metrics for each episode. These metrics include density, efficiency, and transitivity, as well as node-level metrics such as degree, harmonic closeness centrality, and eigenvector closeness centrality.
For each metric, we also calculated the maximum and standard deviation values for each episode's character network. The results of these calculations are shown in Appendix Tables <ref>, <ref>, and <ref>, with each row representing a single episode and the first two columns indicating the episode number and review. The remaining columns display the various network metrics for that episode.
§.§ Correlation between network metrics and episode reviews
To determine whether there is a relationship between the network metrics we calculated and the IMDB reviews, we used correlation analysis. Specifically, we used the Spearman correlation method, as it is well-suited for analyzing ordinal data such as TV episode review scores. While the absolute value of a review score (e.g. 8.7) may not provide much insight on its own, it is generally accepted that an episode with a higher review score is of higher quality than one with a lower score. As such, the review scores can be considered ordinal, with higher values indicating better quality. By using the Spearman correlation method, we were able to examine the relationship between the network metrics and the episode reviews and determine if there is a correlation between them.
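For a single metric, the test reduces to the following SciPy sketch; the numbers are placeholders rather than values taken from our tables:

import numpy as np
from scipy.stats import spearmanr

reviews = np.array([9.0, 8.7, 8.6, 8.7, 9.0, 9.1, 9.2, 8.9])   # placeholder IMDb scores
metric = np.array([14, 15, 17, 15, 19, 18, 10, 11])            # placeholder metric values
rho, p_value = spearmanr(metric, reviews)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")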
Table <ref> shows the results of the Spearman correlation analysis for the TV series Game of Thrones, examining the relationship between the network metrics and episode reviews. The analysis found two significant correlations: a negative correlation between the number of active nodes and episode review scores, and a positive correlation between the standard deviation of eigenvector centrality and episode review scores.
Table <ref> shows the results of the Spearman correlation analysis for the TV series House of Cards, examining the relationship between the network metrics and episode reviews. The table indicates that there are six significant correlations between the two variables. However, we note that two of these correlations may be influenced by influential outliers, as visualized in Figure <ref>. After accounting for these outliers, the analysis found four significant correlations: a negative correlation between network efficiency and episode review, a negative correlation between maximum node degree and episode review, a negative correlation between node degree standard deviation and episode review, and a negative correlation between maximum harmonic centrality and episode review.
Table <ref> shows the Spearman correlation between episode network metrics and episode reviews for the TV series Breaking Bad. One significant correlation between the network metrics and episode reviews is a positive correlation between network transitivity and episode review.
§ DISCUSSION
The results of our research suggest that different character network structures may have different impacts on the quality of TV episodes. For example, the negative correlation between the number of active nodes and episode review scores in Game of Thrones may indicate that having too many characters in an episode can be overwhelming for viewers. Likewise, the positive correlation between the standard deviation of eigenvector centrality and episode review scores for this series suggests that focusing on a smaller number of main characters is preferred over a wider focus on many different characters in a single episode.
House of Cards had four correlations. The negative correlation between network efficiency and episode review implies that nodes are not in tight-knit groups - for example, character A might talk to character B who might talk to character C, but they might not talk in a group together. It also found a negative correlation between maximum harmonic centrality and episode review, implying again that characters have isolated conversations with one another rather than engaging in group discussions.
Breaking Bad had a single positive correlation between network transitivity and episode review. This implies that episodes with tightly connected groups of characters tend to be preferred by viewers.
It is important to note that a limitation of our study is that we only have the reviews for the episode as a whole, and not just for the character networks/dynamics. This means that many other factors such as cinematography, script writing, and plot, are all factored into a single review. Additionally, the placement and release of the episode are factors. For example, the finale of a series might be highly anticipated and receive a higher rating, or a guest star might appear in a particular episode and contribute to its score. Because the overall score of an episode represents all of these details together, we do not have the most accurate data to find a correlation between the character network and the metrics themselves.
Our study is also limited to the types of metrics we tested. Though we tested many types, there are still many more network metrics that could potentially find greater correlations with the data.
§ CONCLUSION
This research aims to investigate the potential relationship between character interactions in an episode of a TV series and the review of that episode. We hypothesize that certain features of the TV series may attract viewers and lead to positive reviews if these features are present in the episodes. To capture some of these attractive features, we use character network analysis to analyze the interactions between characters in three well-known TV series.
To do this, we construct character networks representing the interactions between characters in 74 episode graphs from the three TV shows. We then use network metrics to describe the characteristics of the conversations between characters and apply Spearman's rank correlation test to identify the metrics that are statistically significant.
The results of this study, shown in Table <ref>, indicate that there is a statistically significant correlation between character network metrics and TV show reviews. However, the specific network metrics that show significant correlation vary between the three series, with no common significant metric found across all three.
In summary, our research suggests that character networks can have an impact on TV show review scores. By analyzing the interactions between characters in TV series episodes using network metrics, we were able to identify statistically significant correlations between these metrics and review scores. These findings may be useful for TV producers and writers as they consider how to structure their shows and maintain audience engagement. While character networks are not the only factor that determines a show's success, they do play a role in audience enjoyment and should be carefully considered in the production of future seasons.
b1 X. Bost, V. Labatut, S. Gueye and G. Linares, “Narrative smoothing: Dynamic conversational network for the analysis of TV series plots,” in 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pp. 1111-1118.
b2 X. Bost, V. Labatut, S. Gueye and G. Linares, “TV Series - Networks of characters (Version 7),” in figshare. https://doi.org/10.6084/m9.figshare.2199646.v7.
b3 C. -Y. Weng, W. -T. Chu and J. -L. Wu, “Movie Analysis Based on Roles' Social Network,” in 2007 IEEE International Conference on Multimedia and Expo, 2007, pp. 1403-1406.
b4 D. Elson, N. Dames and K. McKeown “Extracting Social Networks from Literary Fiction,” in ACL 2010, Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, July 11-16, 2010, Uppsala, Sweden. pages 138-147.
b5 A. Agarwal, A. Corvalan, J Jensen and O. Rambow “Social network analysis of alice in wonderland,” In Proceedings of the NAACL-HLT 2012 Workshop on Computational Linguistics for Literature, pages 88–96.
b6 A. Ricardo, J. Miro-Julia, and F. Rosselló., “Marvel Universe looks almost like a real social network,” arXiv preprint cond-mat/0202174 (2002).
b7 X. Bost, V. Labatut and G. Linares, “Serial Speakers: a Dataset of TV Series,” in 12th International Conference on Language Resources and Evaluation (LREC 2020),May 2020, Marseille, France p.4256-4264
b8 NetworkX documentation. https://networkx.org
§ APPENDIX
Network metrics for Game of Thrones episodes.
Episode Review Density Efficiency Transitivity Strength_max Strength_std Degree_max Degree_std Harmonic_max Harmonic_std Eigen_max Eigen_std
1 9.001 0.690 0.612 0.403 91.350 21.330 10 2.510 13.833 4.248 0.627 0.157
2 8.701 1.083 0.598 0.371 108.371 27.805 10 2.293 14.667 3.516 0.651 0.169
3 8.601 0.559 0.627 0.391 123.114 26.931 14 2.662 17.500 3.073 0.600 0.136
4 8.701 0.510 0.470 0.332 153.735 32.168 12 2.413 15.500 3.117 0.693 0.140
5 9.001 0.817 0.388 0.304 241.514 47.983 12 2.228 19.283 3.789 0.612 0.146
6 9.101 0.808 0.378 0.293 146.025 37.887 13 2.103 18.067 4.035 0.702 0.149
7 9.201 0.832 0.437 0.340 156.197 34.976 10 1.636 10.500 2.189 0.606 0.152
8 9.001 0.312 0.565 0.418 58.668 16.305 8 1.885 11.667 2.303 0.702 0.135
9 9.601 0.554 0.581 0.455 233.417 39.914 8 1.850 11.500 2.271 0.704 0.137
10 9.501 0.374 0.346 0.308 80.360 19.776 9 1.756 14.000 3.544 0.707 0.140
11 8.701 0.539 0.364 0.345 142.963 31.816 8 1.654 14.500 3.680 0.649 0.129
12 8.501 0.524 0.344 0.293 189.477 38.741 8 1.578 17.200 4.666 0.705 0.129
13 8.801 0.424 0.399 0.336 214.796 32.493 9 1.592 16.667 4.384 0.698 0.127
14 8.701 0.534 0.481 0.369 118.127 28.484 9 1.862 11.000 2.387 0.628 0.136
15 8.701 0.585 0.409 0.331 145.672 29.029 8 1.753 9.500 1.824 0.688 0.133
16 9.001 1.035 0.512 0.396 171.112 40.708 7 1.482 8.500 1.534 0.666 0.157
17 8.901 0.686 0.291 0.330 107.020 30.333 6 1.355 8.500 1.713 0.694 0.148
18 8.701 0.752 0.301 0.304 223.797 41.606 6 1.296 8.000 1.869 0.698 0.143
19 9.701 0.727 0.337 0.358 104.391 27.663 9 2.206 12.783 3.962 0.652 0.168
20 9.401 0.670 0.249 0.323 116.497 28.533 5 1.140 7.250 1.616 0.696 0.145
21 8.701 0.465 0.352 0.291 189.694 34.213 8 1.487 8.500 1.787 0.705 0.132
22 8.501 0.459 0.401 0.367 109.471 31.511 7 1.340 10.417 2.058 0.548 0.125
Network metrics for House of Cards episodes.
Episode Review Density Efficiency Transitivity Strength_max Strength_std Degree_max Degree_std Harmonic_max Harmonic_std Eigen_max Eigen_std
1 8.601 0.785 0.235 0.096 217.296 42.013 17 3.111 23.000 2.684 0.691 0.156
2 8.501 0.662 0.345 0.153 184.978 33.518 19 3.220 25.000 3.065 0.682 0.142
3 8.301 0.720 0.346 0.109 241.030 42.640 22 3.700 25.833 4.243 0.669 0.144
4 8.201 0.756 0.330 0.170 191.937 36.833 17 3.042 22.333 3.866 0.677 0.147
5 8.401 0.548 0.504 0.246 193.091 34.697 20 3.587 26.667 3.591 0.674 0.145
6 8.501 0.712 0.370 0.174 246.106 46.610 19 3.324 25.333 3.036 0.673 0.150
7 8.101 0.726 0.343 0.173 197.469 39.098 18 3.233 25.500 3.061 0.606 0.141
8 7.701 1.995 0.465 0.421 162.628 44.609 8 1.765 8.500 2.157 0.706 0.213
9 8.501 1.726 0.487 0.301 208.798 47.180 12 3.320 17.000 2.465 0.658 0.164
10 8.701 1.655 0.494 0.256 181.880 44.724 14 3.054 17.000 2.170 0.563 0.156
11 9.001 3.141 0.310 0.173 225.517 55.327 8 1.983 11.500 1.604 0.679 0.185
12 8.501 2.497 0.365 0.210 264.034 63.197 9 2.285 13.833 1.893 0.706 0.205
13 8.801 0.917 0.251 0.152 168.177 33.982 12 2.683 18.667 2.331 0.692 0.163
14 9.501 1.309 0.302 0.240 183.422 40.448 11 2.496 16.833 2.254 0.669 0.167
15 8.301 0.707 0.290 0.145 197.214 36.916 20 3.542 25.500 3.046 0.646 0.144
16 8.401 0.568 0.231 0.221 198.008 33.332 15 2.617 24.167 3.208 0.692 0.141
17 9.001 1.015 0.401 0.266 200.016 50.566 11 2.397 17.917 4.145 0.676 0.167
18 8.301 0.856 0.430 0.218 244.365 47.852 16 3.059 23.167 4.651 0.661 0.156
19 8.301 0.858 0.414 0.188 246.606 45.328 14 2.662 22.667 3.072 0.670 0.152
20 8.401 0.865 0.310 0.210 216.038 40.224 14 2.873 22.167 2.982 0.686 0.152
21 8.501 0.708 0.271 0.195 207.177 38.964 14 2.684 22.500 4.098 0.680 0.153
22 8.901 0.863 0.226 0.219 164.527 44.760 9 2.273 19.500 2.697 0.551 0.154
23 8.301 0.660 0.339 0.251 234.251 44.020 14 2.774 20.417 4.202 0.645 0.144
24 8.701 0.962 0.232 0.170 234.172 47.400 12 2.420 19.917 3.765 0.683 0.154
25 8.801 1.034 0.272 0.194 155.699 31.591 10 2.013 18.000 3.441 0.589 0.150
26 9.501 0.791 0.270 0.172 206.005 39.814 14 2.713 23.000 2.862 0.688 0.156
Network metrics for Breaking Bad episodes.
Episode Review Density Efficiency Transitivity Strength_max Strength_std Degree_max Degree_std Harmonic_max Harmonic_std Eigen_max Eigen_std
1 9.001 3.360 0.492 0.325 371.776 94.610 12 2.787 13.500 2.959 0.689 0.203
2 8.601 18.716 0.613 0.364 461.730 183.342 7 1.982 7.000 0.991 0.691 0.298
3 8.701 9.382 0.381 0.452 461.480 127.831 7 2.184 10.000 1.577 0.703 0.231
4 8.201 1.821 0.542 0.365 241.103 53.644 11 2.531 17.000 2.212 0.679 0.171
5 8.301 1.877 0.315 0.271 236.518 66.897 12 2.632 16.167 3.337 0.648 0.179
6 9.301 2.103 0.522 0.330 405.062 90.092 17 3.409 19.333 2.404 0.694 0.184
7 8.801 2.486 0.594 0.284 336.861 81.209 14 2.780 16.833 2.153 0.699 0.196
8 8.601 4.632 0.401 0.403 222.325 80.904 6 1.870 9.667 1.646 0.642 0.217
9 9.301 9.034 0.691 0.440 252.290 82.642 8 2.270 9.500 1.387 0.642 0.242
10 8.301 2.826 0.576 0.257 210.052 67.400 11 3.024 15.000 2.061 0.524 0.163
11 8.201 2.776 0.271 0.122 301.404 84.678 11 2.524 14.833 1.864 0.687 0.197
12 8.301 3.718 0.307 0.263 372.695 92.825 8 2.087 12.167 1.637 0.679 0.191
13 8.901 3.152 0.411 0.273 359.448 91.064 8 2.137 12.000 2.688 0.641 0.183
14 8.701 2.053 0.421 0.302 303.973 75.250 11 2.374 14.500 3.513 0.642 0.168
15 9.201 3.793 0.311 0.355 301.257 93.928 8 2.272 11.000 2.797 0.617 0.204
16 9.101 6.657 0.519 0.421 508.665 166.112 9 2.381 11.000 1.520 0.703 0.247
17 8.501 2.361 0.460 0.287 213.431 67.919 12 2.833 15.000 3.204 0.573 0.185
18 8.901 1.543 0.304 0.250 226.528 56.459 12 2.550 14.583 3.928 0.601 0.170
19 9.201 3.563 0.483 0.358 317.525 84.817 12 2.749 13.000 2.855 0.592 0.198
20 9.201 2.193 0.582 0.342 249.624 63.993 11 2.421 16.000 2.042 0.662 0.192
21 8.501 7.997 0.214 0.250 186.643 60.607 5 1.401 7.500 1.184 0.611 0.191
22 8.701 5.932 0.440 0.476 264.015 71.972 6 1.710 9.833 1.551 0.611 0.198
23 8.401 4.585 0.296 0.333 276.085 73.943 5 1.100 5.000 1.077 0.700 0.236
24 8.201 4.190 0.234 0.220 289.473 70.073 7 1.708 9.000 2.036 0.695 0.206
25 8.601 4.326 0.167 0.188 174.541 59.538 4 1.219 8.350 1.224 0.650 0.211
26 9.301 10.868 0.212 0.333 207.894 68.776 3 0.809 3.000 0.513 0.707 0.281
|
http://arxiv.org/abs/2307.02895v1
|
20230706095902
|
Dynamics of a discrete-time mixed oligopoly Cournot-type model with three time delays
|
[
"Loredana Camelia Culda",
"Eva Kaslik",
"Mihaela Neamtu"
] |
math.DS
|
[
"math.DS"
] |
Dynamics of a discrete-time mixed oligopoly Cournot-type model with three time delays
Loredana Camelia Culda^1 Eva Kaslik^1,* Mihaela Neamţu^1
^1 West University of Timişoara, Bd. V. Pârvan nr. 4, 300223, Timişoara, Romania
^* Corresponding Author: [email protected]
===================================================================================================================================
The paper analyzes the interactions among one public firm and n private firms on the market, in the framework of a discrete-time Cournot game with time delay. The production of the public firm is influenced by previous output levels of private firms. The productions of private companies are influenced by the past productions of the public company, as well as by the previous productions of the other private companies.
The associated nonlinear system admits two equilibrium points: the positive one and the boundary equilibrium. After the stability analysis, we obtained that the boundary equilibrium point is a saddle
point. If there is no delay, for the positive equilibrium point we have determined the stability region. Then, for different particular cases of delays, we found the conditions for which the positive equilibrium is asymptotically stable. The flip and Neimark-Sacker bifurcations are investigated. In addition, numerous numerical examples are performed to reveal the complex dynamic behavior of the system.
§ INTRODUCTION
Game theory is the field of study that focuses on analyzing the interactions between multiple individuals or teams in a game, given certain conditions, in order to determine the optimal strategies for each party. In mathematical economics, oligopoly theory is a topic of interest and the earliest branch of chaotic dynamics, which is based on research on chaos theory and bifurcation theory using different dynamical systems, has found extensive applications. The literature has taken into account a variety of oligopoly models, including those with or without product differentiation and those with one or more products. Time delay models are used to reflect actual conditions when there are delays in the decision-making processes, lead time, information implementation, or execution time <cit.>.
In a discrete or continuous time setting, dynamic duopoly games have been examined in the context of quantity-setting firms. Players with homogenous or heterogeneous characteristics have the following alternatives for their strategies: naive, adaptive, or bounded rational <cit.>. In the field of economics, expectations pertains to the predictions or perspectives that individuals in charge of decision-making have regarding forthcoming prices, sales, incomes, or other relevant factors. According to the adaptive expectations hypothesis, the current expectations are a combination of past expectations.
The practical application of a game like this closely mirrors economic reality, and it is commonly used in oligopolies. In <cit.>, which relates to dynamic Cournot oligopoly games, there are three concurrent firms with bounded rationality, all based on the CES utility function. Recently, a Cournot-Theocharis oligopoly model with a single time delay has been investigated in <cit.>, considering that firms make decisions based on adaptive expectations and assuming that information on competitors is only available after a time lag. In the numerical analysis of the equilibria for the corresponding nonlinear discrete-time mathematical model, complex behavior is found.
The dynamics of a mixed triopoly game, in which a public firm competes against two private firms, are examined in <cit.>. The equilibrium points are identified and their local stability is examined, taking into account both quantity and price competition.
In <cit.> the stability of the Nash equilibrium is examined for a dynamic model with n firms which compete in an isoelastic demand setting with non-unitary elasticity framework. Additionally, it is noted in <cit.> that privately owned firms may run into financial difficulties, which could force those firms to be nationalized, while state-controlled public firms are important to their market competitors. In <cit.>, the interactions of one public firm and n private firms on the market are considered and the analysis of the corresponding discrete-time Cournot game with two time delays is discussed.
Also, in <cit.>, it has been observed that players tend to choose strategies that differ from those found in the Tullock Nash equilibrium. In <cit.>, the authors studied the interactions between fiscal and monetary authorities in a monetary union during a debt stabilization process, assuming that policy authorities do not coordinate and cannot perfectly predict each other's decisions.
The present study aims to advance earlier research by studying the impact of the number of private firms in the market on the stability of the equilibrium when information delays are taken into account. This is motivated by mixed competition as well as the multiple delays in the decision-making process.
To be more explicit, we take into account one public firm and n private firms that are engaged in the production of differentiated products within the context of a dynamic oligopoly game. The public firm decides on its output based on the expected marginal payoff, or the social surplus, while taking into account the historical production levels of the private firms <cit.>. Utilizing reaction functions and previous output from the public firm, the outputs of the private firms are determined.
The major result is the characterization of the stability of the Nash equilibrium with respect to the quantity of private firms, the level of product differentiation, the adjustment parameter, and three time delays.
This paper's structure is outlined in the following. In Section <ref>, the mathematical model is given, and two equilibrium points are identified: the positive equilibrium and the boundary equilibrium.
The local stability analysis for the boundary and the positive equilibrium points is covered in Section <ref>. The theoretical results are exemplified using numerical simulations in Section <ref>, which is then followed by conclusions and a discussion of future research options.
§ MATHEMATICAL MODEL
Let q_0 represent the output of the public firm and q_i, i=1,n, stand for the output of the private firm i. The retail price for the public firm is p_0 and p_i, i=1,n, for the private firm i.
The aim of the representative consumer is to maximize the following function <cit.>:
U(q_0,q_1,...,q_n)-∑_i=0^n p_iq_i,
where <cit.>:
U(q_0,q_1,...,q_n)=a∑_i=0^n q_i-(b/2)(∑_i=0^nq_i^2+δ∑_i=0^n∑_i≠ j q_iq_j),
with a, b real positive numbers and δ∈(0,1) the degree of product differentiation.
The maximization problem of (<ref>) leads to:
p_i=a-bq_i-bδ∑_j=0, j≠ i^nq_j, i=0,n.
The objective of the public firm is to maximize the social surplus, while the purpose of the private firm is to maximize the profit function. The profit function of firm i is given by:
P_i=(p_i-c_i)q_i, i=0,n,
with c_i the marginal cost of firm i, and the social surplus is <cit.>:
SW(q_0,q_1,...,q_n)=a∑_i=0^nq_i-(b/2)(∑_i=0^nq_i^2+δ∑_i=0^n∑_i≠ j q_iq_j)-∑_i=0^np_iq_i+∑_i=0^nP_i.
We consider the same marginal costs for all the private firms: c_1=c_2=...=c_n=c.
The maximization problems lead to:
∂ SW∂ q_0(q_0,q_1,...,q_n)=0,
∂ P_i∂ q_i(q_0,q_1,...,q_n)=0, i=1,n,
or equivalently
a-c_0-bq_0-bδ∑_i=1^nq_i=0.
with a>c_0≥ c, and
p_i-c-bq_i=0, i=1,n.
From (<ref>), (<ref>) and (<ref>) we have:
q_0=a_0/b-δ∑_i=1^nq_i,
q_i=a_1/(2b)-(δ/2)∑_j=0,j≠ i^nq_j, i=1,n,
where a_0=a-c_0>0 and a_1=a-c>0.
As the public firm has bounded rationality and the private firm i, i=1,n, is naive, the dynamical equations for the outputs are given by <cit.>:
q_0(t+1)=q_0(t)+α q_0(t)[a_0-bq_0(t)-bδ∑_i=1^n q_i(t)],
where α is the positive adjustment parameter,
q_j(t+1)=a_1/(2b)-(δ/2)∑_i=0,i≠ j^n q_i(t), j=1,n.
As in <cit.>, we consider that the output of the public firm is influenced by the past output levels of the private firms (at time t-τ_1, τ_1>0). Moreover, as in <cit.>, the productions of private firms are set up according to the past productions (at time t-τ_0, τ_0>0) of the public firm. Furthermore, in the present paper we also adjust the productions of private firms with the past
productions (at time t-τ_2, τ_2>0) of the other private firms.
Therefore, in this paper, we investigate the following nonlinear discrete-time mathematical model with time delays:
q_0(t+1)=q_0(t)+α q_0(t)[a_0-bq_0(t)-bδ∑_i=1^n q_i(t-τ_1)]
q_j(t+1)=a_1/(2b)-(δ/2)q_0(t-τ_0)-(δ/2)∑_i=1,i≠ j^n q_i(t-τ_2) , j=1,n.
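As a minimal simulation sketch of the delayed map above (the constant initial history and the particular parameter values are our own illustrative choices, not taken from the figures discussed later):

import numpy as np

def simulate(a0=2.0, a1=2.5, b=1.0, delta=0.4, alpha=1.0, n=4,
             tau0=2, tau1=2, tau2=2, steps=500, q_init=0.5):
    # Iterate the delayed Cournot map; column 0 is the public firm, columns 1..n are private.
    hist = max(tau0, tau1, tau2) + 1
    q = np.full((steps + hist, n + 1), q_init)          # constant initial history (assumption)
    for t in range(hist - 1, steps + hist - 1):
        q0 = q[t, 0]
        q[t + 1, 0] = q0 + alpha * q0 * (a0 - b * q0 - b * delta * q[t - tau1, 1:].sum())
        s2 = q[t - tau2, 1:].sum()
        for j in range(1, n + 1):
            # the sum over i != j is the total delayed private output minus firm j's own output
            q[t + 1, j] = a1 / (2 * b) - (delta / 2) * q[t - tau0, 0] \
                          - (delta / 2) * (s2 - q[t - tau2, j])
    return q[hist:]

traj = simulate()
print(traj[-1])   # for small alpha this settles near E_+ = (0.9375, 0.664, ..., 0.664)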
The equilibrium points of the discrete dynamical system (<ref>) are:
E_0=(0, q^⋆,q^⋆,...,q^⋆), where q^⋆ =a_1/(b[2+(n-1)δ])
and
E_+=(q_0^⋆,q_1^⋆,q_1^⋆,...,q_1^⋆), where q_0^⋆=([2+(n-1)δ]a_0- n δ a_1)/(b[2+(n-1)δ-nδ^2]) ,
q_1^⋆=(a_1-δ a_0)/(b[2+(n-1)δ-nδ^2]).
Due to the fact that δ∈(0,1), the positivity of the equilibrium E_+ is equivalent to the following assumptions:
(A.1) [2+(n-1)δ]a_0>nδ a_1 ,
(A.2) a_1>δ a_0 .
§ LOCAL STABILITY AND BIFURCATION ANALYSIS
The liniarized system at one of the equilibrium points E=(q_0^e,q_1^e,q_1^e,...,q_1^e)∈{E_0,E_+} is of the form:
y(t+1)=A^ey(t)-B_0 y(t-τ_0)-B_1^ey(t-τ_1)-B_2 y(t-τ_2)
where
y(t)=[
[ q_0(t)-q_0^e q_1(t)-q_1^e … q_n(t)-q_1^e ]]^T
and the matrices A^e,B_0,B_1^e,B_2 are given below:
A^e=[ 1+α(a_0-2bq_0^e-nbδ q_1^e) 0 … 0; 0 0 … 0; ⋮ ⋮ ⋱ ⋮; 0 0 … 0 ] , B_0=[ 0 0 … 0; δ/2 0 … 0; ⋮ ⋮ ⋱ ⋮; δ/2 0 … 0; ]
B_1^e=[ 0 bαδ q_0^e … bαδ q_0^e; 0 0 … 0; ⋮ ⋮ ⋱ ⋮; 0 0 … 0; ] , B_2=[ 0 -δ/2 … -δ/2; -δ/2 0 … -δ/2; ⋮ ⋮ ⋱ ⋮; -δ/2 -δ/2 … 0 ]
The characteristic equation of system (<ref>) can be obtained using the 𝒵-transform method, and is given as follows:
det(A^e-B_0λ^-τ_0-B_1^eλ^-τ_1-B_2λ^-τ_2-λ I)=0,
or equivalently:
(λ-(δ/2)λ^-τ_2)^n-1[(nbα q_0^eδ^2/2)λ^-τ_0-τ_1-(λ-1-α(a_0-2bq_0^e-nbδ q_1^e))(λ+((n-1)δ/2)λ^-τ_2)]=0.
In what follows, we analyze each of the equilibrium points E_0 and E_+.
§.§ The boundary equilibrium E_0
If assumption (A.1) holds, the boundary equilibrium point E_0 is a saddle
point.
At the boundary equilibrium point E_0, as q_0^e=0 and q_1^e=q^⋆, the characteristic equation (<ref>) reduces to:
(λ-(δ/2)λ^-τ_2)^n-1(λ-1-α((2+(n-1)δ)a_0-nδ a_1)/(2+(n-1)δ))(λ+((n-1)δ/2)λ^-τ_2)=0.
We notice that one root of (<ref>) is
λ_1=1+α((2+(n-1)δ)a_0-nδ a_1)/(2+(n-1)δ)>1, due to assumption (A.1).
On the other hand, we can notice that the characteristic equation (<ref>) also admits some roots inside the unit disk, which satisfy:
λ^(τ_2+1)=δ/2<1.
In conclusion, the equilibrium E_0 is a
saddle point of system (<ref>).
§.§ The positive equilibrium E_+
As in this case q_0^e=q_0^⋆ and q_1^e=q_1^⋆,
the characteristic equation (<ref>) becomes
(λ-(δ/2)λ^-τ_2)^n-1[ε_0(ε_1+1)λ^-τ_0-τ_1-(λ+ε_1)(λ+ε_2λ^-τ_2)]=0,
where
ε_0=nδ^2/2>0 , ε_1+1=α([2+(n-1)δ]a_0- n δ a_1)/(2+(n-1)δ-nδ^2)>0 and ε_2=(n-1)δ/2>0.
Some of the roots of (<ref>) are given by
λ^(τ_2+1)=δ/2<1,
and hence, these roots belong to the open unit disk. Therefore, the stability of the equilibrium point E_+ is determined by the roots of the following reduced equation:
ε_0(ε_1+1)λ^-τ_0-τ_1-(λ+ε_1)(λ+ε_2λ^-τ_2)=0.
In the absence of time delays, based on the Schur-Cohn stability conditions, the following result has been obtained in <cit.> regarding the asymptotic stability of the equilibrium point E_+:
If assumptions (A.1) and (A.2) hold, when τ_0=τ_1=τ_2=0, the equilibrium point E_+ is asymptotically stable if and only if (ε_0,ε_1,ε_2) belong to the delay-free stability region defined by the following inequalities:
ε_2<1 and ε_1<(1-ε_2-ε_0)/(1-ε_2+ε_0).
This theorem represents a generalization of the result presented in <cit.>, where the case of two private firms (n=2) has been investigated.
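As a quick numerical sketch, the delay-free stability region can be checked directly from the model parameters; the values below anticipate the numerical examples section (a_0=2, a_1=2.5, δ=0.4, n=4), for which the threshold lies near α ≈ 1.185.

def epsilons(a0, a1, delta, alpha, n):
    # eps_0, eps_1, eps_2 of the reduced characteristic equation (b cancels out).
    eps0 = n * delta ** 2 / 2
    eps1 = alpha * ((2 + (n - 1) * delta) * a0 - n * delta * a1) \
           / (2 + (n - 1) * delta - n * delta ** 2) - 1
    eps2 = (n - 1) * delta / 2
    return eps0, eps1, eps2

def delay_free_stable(a0, a1, delta, alpha, n):
    # Check eps2 < 1 and eps1 < (1 - eps2 - eps0) / (1 - eps2 + eps0).
    e0, e1, e2 = epsilons(a0, a1, delta, alpha, n)
    return e2 < 1 and e1 < (1 - e2 - e0) / (1 - e2 + e0)

print(delay_free_stable(2.0, 2.5, 0.4, 1.18, 4))   # True
print(delay_free_stable(2.0, 2.5, 0.4, 1.19, 4))   # False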
If inequalities (<ref>) hold, it follows that ε_1<1 and:
(1-ε_1)(1-ε_2) >(1-(1-ε_2-ε_0)/(1-ε_2+ε_0))(1-ε_2)
=2ε_0(1-ε_2)/(1-ε_2+ε_0)
>ε_0(ε_1+1).
In what follows, we describe two situations related to the time delays, where inequalities (<ref>) provide sufficient conditions for the asymptotic stability of E_+.
Assume that τ_0≥ 0, τ_1≥ 0 and τ_2=0. If the assumptions (A.1) and (A.2) hold and inequalities (<ref>) are satisfied, the equilibrium point E_+ is asymptotically stable.
Theorem <ref> provides that if assumptions (A.1), (A.2) and inequalities (<ref>) hold, the equilibrium E_+ is asymptotically stable for null time delays. Assuming by contradiction that asymptotic stability of the equilibrium point is lost for certain values of the time delays, based on the continuous dependence of the roots of the characteristic equation (<ref>) on τ_0,τ_1, it follows that there exist critical values (τ_0^*,τ_1^*), such that the equation (<ref>) has some roots λ belonging to the unit circle.
The characteristic equation (<ref>) can be rewritten as:
ε_0(ε_1+1)λ^-τ_0-τ_1=(λ+ε_1)(λ+ε_2).
Assuming that λ=e^iθ, with θ∈[0,π], satisfies the above equation for (τ_0,τ_1)=(τ_0^*,τ_1^*), taking the absolute value of both sides of the equation leads to:
|e^iθ+ε_1||e^iθ+ε_2|=ε_0(ε_1+1),
or equivalently:
[2ε_1 cosθ+ε_1^2+1][2ε_2cosθ+ε_2^2+1]=ε_0^2(ε_1+1)^2.
Taking into account inequalities (<ref>), we deduce that ε_1<1.
On the one hand, if ε_1>0, based on Remark <ref>, the following inequalities hold for the left hand side of equation (<ref>):
[2ε_1cosθ+ε_1^2+1][2ε_2cosθ+ε_2^2+1] ≥[-2ε_1+ε_1^2+1][-2ε_2+ε_2^2+1]
=(1-ε_1)^2(1-ε_2)^2
>ε_0^2(ε_1+1)^2 ,
which contradicts equality (<ref>).
On the other hand, if ε_1≤ 0, denoting by P(cosθ) the left hand side of equation (<ref>), it follows that P is a concave quadratic polynomial, and hence, using similar arguments as in the previous computations, we obtain:
P(cosθ)≥min{P(-1),P(1)}>ε_0^2(ε_1+1)^2,
which again, contradicts equality (<ref>).
Consequently, if the assumptions of the theorem hold, the equilibrium point E_+ is asymptotically stable for any time delays τ_0 and τ_1.
Assume that τ_0+τ_1=τ_2. If the assumptions (A.1) and (A.2) hold and inequalities (<ref>) are satisfied, the equilibrium point E_+ is asymptotically stable.
If τ_0+τ_1=τ_2:=τ, the characteristic equation may be written as:
ε_0(ε_1+1)=(λ+ε_1)(λ^(τ+1)+ε_2).
Let us assume that there exists τ≥ 0 such that this characteristic equation has a root λ=e^iθ, with θ∈(0,π).
Let us consider ρ_1>0, ρ_2>0 and ϕ_1,ϕ_2∈(0,2π) such that:
λ+ε_1=ρ_1 e^iϕ_1
λ^(τ+1)+ε_2=ρ_2 e^iϕ_2
Hence, the characteristic equation now implies:
ε_0(ε_1+1)=ρ_1ρ_2e^i(ϕ_1+ϕ_2).
and therefore:
ρ_1ρ_2=ε_0(ε_1+1)
ϕ_1+ϕ_2=2π
Moreover, substituting λ=e^iθ into (<ref>) and separating real and imaginary parts, we obtain:
cosθ+ε_1=ρ_1cosϕ_1
sinθ=ρ_1sinϕ_1
cos(τ+1)θ+ε_2=ρ_2cosϕ_2
sin(τ+1)θ=ρ_2sinϕ_2
From the second equation, we deduce ϕ_1∈(0,π), and hence, ϕ_2=2π-ϕ_1. Moreover, eliminating θ from the previous system, we get:
ρ_1ρ_2=ε_0(ε_1+1)
ρ_1^2-2ε_1ρ_1cosϕ_1+ε_1^2-1=0
ρ_2^2-2ε_2ρ_2cosϕ_1+ε_2^2-1=0
Solving the last two quadratic equations and keeping in mind that ρ_1>0 and ρ_2>0 we have:
ρ_k=ε_kcosϕ_1+√(ε_k^2cos^2ϕ_1+1-ε_k^2), for k∈{1,2}.
Denoting μ=cosϕ_1∈[-1,1] and replacing in the first equation of the above system, we obtain:
h(μ):=[ ε_1μ+√(ε_1^2μ^2+1-ε_1^2)]·[ ε_2μ+√(ε_2^2μ^2+1-ε_2^2)]=ε_0(ε_1+1).
It is easy to check that the function h is monotonous (strictly increasing if ε_1+ε_2>0 and strictly decreasing otherwise), and hence:
h(μ) ≥min{h(-1),h(1)}=min{(1-ε_1)(1-ε_2), (1+ε_1)(1+ε_2)}>ε_0(ε_1+1),
where Remark <ref> has been employed.
Therefore, we have arrived at a contradiction, and hence, if inequalities (<ref>) hold, the positive equilibrium is asymptotically stable. However, for certain values of the time delays, the exact stability region may be larger than the delay-independent stability region (<ref>).
If either τ_2=0 or τ_0+τ_1=τ_2, Theorems
<ref> and <ref> reveal that time delays may have a stabilizing effect on E_+. In these two cases, the delay-free stability regions provided by inequalities (<ref>) are in fact, delay-independent stability regions of E_+. These regions have been exemplified in Figure <ref>, for a_0=2 and a_1=2.5. We observe that for a larger number n of private firms, smaller values of δ are needed for the stability of E_+, while slightly larger values of α are permissible.
A flip bifurcation takes place in a neighborhood of the equilibrium point E_+ if and only if:
ε_1=(1-ε_2(-1)^τ_2-ε_0(-1)^(τ_0+τ_1))/(1-ε_2(-1)^τ_2+ε_0(-1)^(τ_0+τ_1)).
We distinguish four cases presented below:
(i) If τ_0+τ_1 and τ_2 are even, a flip bifurcation takes place in a neighborhood of the equilibrium point E_+ exactly at the boundary of the delay-free stability region given by inequalities (<ref>), i.e. when
ε_1=(1-ε_2-ε_0)/(1-ε_2+ε_0) .
(ii) If τ_0+τ_1 and τ_2 are odd, a flip bifurcation takes place in a neighborhood of the equilibrium point E_+ if and only if
ε_1=(1+ε_2+ε_0)/(1+ε_2-ε_0) .
(iii) If τ_0+τ_1 is even and τ_2 is odd a flip bifurcation takes place in a neighborhood of the equilibrium point E_+ if and only if
ε_1=(1+ε_2-ε_0)/(1+ε_2+ε_0) .
(iv) If τ_0+τ_1 is odd and τ_2 is even, a flip bifurcation takes place in a neighborhood of the equilibrium point E_+ if and only if
ε_1=(1-ε_2+ε_0)/(1-ε_2-ε_0) .
To study the Neimark-Sacker bifurcation from the equation (<ref>) for λ=e^iθ and τ=τ_0+τ_1, we have that
ε_1=(e^iθ H(θ, τ, τ_2, ε_2)-ε_0)/(ε_0 -H(θ, τ, τ_2, ε_2)) ,
where H(θ, τ, τ_2, ε_2)=e^iτθ(e^iθ+ε_2e^-iτ_2θ) and if we take the real and imaginary part we obtain the following system:
Im(H)=sin(τ+1)θ+ε_2sin(τ-τ_2)θ
Re(H)=cos(τ+1)θ+ε_2cos(τ-τ_2)θ
As ε_1∈ℝ, taking the imaginary part in equation (<ref>) leads to:
ε_0cos(τ+3/2)θ-ε_0ε_2cos(τ-τ_2+1/2)θ=cosθ/2[1+ε_2^2+2ε_2cos(τ_2+1)θ]
and taking the real part in equation (<ref>) gives:
ε_1=2cosθ/2[ε_0cos(τ+3/2)θ+ε_0ε_2cos(τ-τ_2+1/2)θ]-cosθ (1+ε_2^2+2ε_2cos(τ_2+1)θ)-ε_2^2ε_0^2+ε_2^2+2ε_2cos(τ_2+1)θ-2ε_0cos(τ+1)θ-2ε_0ε_2cos(τ-τ_2)θ+1
In the particular case τ_2=τ, we obtain:
ε_1=2cosθ/2[ε_0cos(τ+3/2)θ+ε_0ε_2cos1/2θ]-cosθ (1+ε_2^2+2ε_2cos(τ+1)θ)-ε_2^2ε_0^2+ε_2^2+2ε_2cos(τ+1)θ-2ε_0cos(τ+1)θ-2ε_0ε_2+1
and if τ_2=0, we obtain the equation:
ε_1=2cosθ/2[ε_0cos(τ+3/2)θ+ε_0ε_2cos(τ+1/2)θ]-cosθ (1+ε_2^2+2ε_2cos(τ+1)θ)-ε_2^2ε_0^2+ε_2^2+2ε_2cosθ-2ε_0cos(τ+1)θ-2ε_0ε_2cosτθ+1
§.§ Stability analysis in the absence of the public firm
In the absence of the public firm, q_0(t)=0, the nonlinear discrete-time mathematical model with delay (<ref>) reduces to:
q_j(t+1)=a_1/(2b)-(δ/2)∑_i=1,i≠ j^n q_i(t-τ_2) , j=1,n.
where the equilibrium points are E_0^r=(q_0^⋆, q_0^⋆, ..., q_0^⋆) and E_+^r=(q_1^⋆, q_1^⋆, ..., q_1^⋆).
The characteristic equation is given as follows:
(λ-(δ/2)λ^-τ_2)^n-1(λ+((n-1)δ/2)λ^-τ_2)=0.
§ NUMERICAL EXAMPLES
To showcase our theoretical results, we examine a scenario with 4 private firms and 1 public firm, with the following fixed parameters: a_0=2, a_1=2.5, b=1, and δ=0.4. Under these parameter values, the positive equilibrium point is calculated to be:
E_+=(0.9375,0.664,0.664,0.664,0.664).
By referencing inequalities (<ref>), we conclude that the positive equilibrium E_+ is asymptotically stable, regardless of the chosen values of the time delays τ_0, τ_1, and τ_2, provided that α<α^⋆=1.185. From the observations in Remark <ref>, it can be deduced that in the event where τ_0 + τ_1 and τ_2 are both even, a flip bifurcation will occur in a vicinity of the positive equilibrium at the critical value of the parameter α, denoted by α^⋆. This is consistent with the bifurcation diagrams shown in Figures <ref> and <ref> displayed with respect to the parameter α for the special cases τ_0=τ_1=2,τ_2=10 and τ_0=2, τ_1=4,τ_2=8 respectively.
In the first case, the flip bifurcation is followed by a period-doubling bifurcation at approximately α = 1.55. Conversely, in Figure <ref>, the flip bifurcation is followed by a period-doubling bifurcation at around α = 1.38 and a Neimark-Sacker bifurcation of the period-4 point at approximately α = 1.51.
In contrast with the previous two examples, the bifurcation diagrams for the cases τ_0=5, τ_1=3, τ_2=3 and τ_0=3, τ_1=5, τ_2=5, displayed in Figures <ref> and <ref>, show that in these cases, the stability of the positive equilibrium E_+ is lost due to a Neimark-Sacker bifurcation. When τ_0=5, τ_1=3, τ_2=3 (see Figure <ref>), a Neimark-Sacker bifurcation takes place at α≃ 1.43, and a stable limit cycle is formed. However, for α>1.6, we observe the occurrence of a chaotic attractor. On the other hand, when τ_0=3, τ_1=5, τ_2=5 (see Figure <ref>), we can only observe a Neimark-Sacker bifurcation that occurs for α≃ 1.28 and the resulting stable limit cycles persist for α>1.28 (the largest Lyapunov exponent remains constantly null).
As a final example, as indicated by Remark <ref>, if τ_0+τ_1 is even and τ_2 is odd, the positive equilibrium E_+ loses its stability at α=α^⋆. This is demonstrated in the bifurcation diagram from Figure <ref> for the case of τ_0=9, τ_1=7, and τ_2=5.
The flip bifurcation at α≃ 1.49 is followed by a Neimark-Sacker bifurcation of the period-2 point for α≃ 1.65. Again, for sufficiently large values of the parameter α, chaos arises, emphasized by the positive values of the largest Lyapunov exponent.
The phase portraits displayed in Figures <ref>, <ref>, <ref> and <ref> are consistent with the bifurcation diagrams, which illustrate the various dynamic regimes ranging from period doubling for small α to the appearance of chaos when α is sufficiently large.
§ CONCLUSIONS
The dynamics of an oligopoly game with product differentiation, in which n private firms and a state-owned public firm coexist, have been examined in the current work. Two equilibrium points for the associated discrete-time mathematical model with three time delays have been established, and the local stability has been investigated.
The positive equilibrium E_+ is asymptotically stable when there is no delay and certain conditions are fulfilled. Additionally, we have identified the necessary conditions that ensure E_+ is asymptotically stable, irrespective of time delays.
We have demonstrated that in certain cases, the positive equilibrium point E_+ may be stabilized by the time delays. We have seen that as there are more private firms, the stability of the positive equilibrium requires smaller values of the degree of product differentiation, which is connected with slightly greater values of the adjustment parameter. When the time delays are large enough, numerical simulations show complicated dynamic behavior as well as the presence of chaos. Our findings emphasize the impact of different sets of time delays on the system's dynamics.
Our results generalize several findings from <cit.>, and they can be extended in the following ways: obtaining a thorough understanding of the Neimark-Sacker bifurcations occurring in the neighborhood of E_+; comprehending the potential paths leading to chaotic behavior in terms of the quantity of private firms and the time delays; and analyzing a mathematical model resembling this one in which the network of n private firms does not have all-to-all connection.
|
http://arxiv.org/abs/2307.00601v1
|
20230702155824
|
Effective impurity behavior emergent from non-Hermitian proximity effect
|
[
"Deguang Wu",
"Jiasong Chen",
"Wei Su",
"Rui Wang",
"Baigeng Wang",
"D. Y. Xing"
] |
cond-mat.str-el
|
[
"cond-mat.str-el"
] |
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
College of Physics and Electronic Engineering, Center for Computational Sciences, Sichuan Normal University, Chengdu 610068, China
Beijing Computational Science Research Center, Beijing 100084, China
[email protected]
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
Collaborative Innovation Center for Advanced Microstructures, Nanjing 210093, China
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
Collaborative Innovation Center for Advanced Microstructures, Nanjing 210093, China
National Laboratory of Solid State Microstructures and Department of Physics, Nanjing University, Nanjing 210093, China
Collaborative Innovation Center for Advanced Microstructures, Nanjing 210093, China
Effective impurity behavior emergent from non-Hermitian proximity effect
D. Y. Xing
August 1, 2023
========================================================================
Abstract
Non-Hermitian boundaries commonly take place in many open quantum systems locally coupled to a surrounding environment. Here, we reveal a type of non-Hermitian effect induced by non-Hermitian boundaries, the non-Hermitian proximity effect (NHPE), which describes the penetration of non-Hermiticity from the boundary into the bulk. For gapped quantum systems, the NHPE generates in-gap states with imaginary eigenenergies, termed “imaginary in-gap states". The imaginary in-gap states are localized at the system boundary and decay into the bulk, analogous to the behaviors of the conventional impurity states. However, in contrast to impurity states, the imaginary in-gap states exhibit distinct dynamical behaviors under time-evolution. Moreover, they are physically manifested as corner modes under open boundaries, as a combined result of the non-Hermitian skin effect (NHSE) and NHPE. These results not only uncover implicit similarities between quantum systems with non-Hermitian boundaries and impurity physics, but also point to intriguing non-Hermitian phenomena broadly relevant to open quantum systems.
Introduction
Quantum systems that interact with an environment are ubiquitous in physics. The study of the interaction between such systems and baths is of fundamental interest as it leads to various novel quantum effects <cit.> and applications <cit.>. In open quantum systems, although the system and the bath as a whole is Hermitian, the dynamics of the partial system alone can be described by an effective non-Hermitian model <cit.>. This has recently aroused enormous interest in uniform non-Hermitian quantum systems, where exotic physics have been found, including novel topological phases <cit.>, skin effects <cit.>, enriched classifications <cit.>, and unusual critical phenomena <cit.>.
It is however important to note that non-Hermiticity in realistic open quantum systems can be non-uniform, and in many cases, is only present at the system boundary. We consider systems interacting with their surrounding environments via short-range couplings across the boundaries. The general Hamiltonian reads as H=H_S+H_E+V_SE, where H_S describes the closed quantum system, H_E captures the environment consisting of a continuum of scattering wavefunctions, and V_SE describes the coupling between them, which is short-ranged and restricted to the region around the boundary. The renormalization of quantum system from the environment via V_SE is non-negligible <cit.>, which formally leads to the effective Hamiltonian as
H_eff=H_S+∑_E V_SE(ω^+-H_E)^-1V^†_SE,
where ω^+=ω+i0^+ and the second term includes the sum over all the scattering channels in the environment. This term arising from system-bath coupling is clearly non-Hermitian. Importantly, since V_SE is restricted in real space around the boundary, the non-Hermitian terms take place only at the boundary. The non-Hermitian boundaries implicit in Eq. (<ref>) are common in open quantum systems. However, their effects have not received enough attention in recent studies. Could new phenomena emerge from the non-Hermitian boundaries, and how would the non-Hermitian boundaries affect the quantum systems and their dynamics? These key questions are yet to be addressed.
In this work, we reveal an effect induced by non-Hermitian boundaries, the non-Hermitian proximity effect (NHPE). We show that the non-Hermiticity of the boundary can penetrate into the nearby quantum systems with a finite penetration depth, akin to the proximity effect of superconductors <cit.>. For gapped quantum systems, the NHPE induces “in-gap states" with imaginary eigenenergies, i.e., imaginary in-gap states. The imaginary in-gap states display peaks of the local density of states (LDOS) clearly distinct from those of the bulk states. In addition, the imaginary in-gap states decay into the bulk in a way phenomenologically similar to that of conventional impurity problems <cit.>, where the localized in-gap states decay into the bath <cit.>. Despite the similarity, the imaginary in-gap states also exhibit several distinct properties. First, the imaginary in-gap states are generally manifested as corner modes under open boundary conditions (OBCs), due to the combined effect of the NHPE and the non-Hermitian skin effect (NHSE). Second, the NHPE leads to unusual dynamical behaviors of the quantum system. In particular, in the cases where the imaginary part of the imaginary in-gap state eigenenergy is positive, the probability distribution of the imaginary in-gap state keeps increasing due to gain from the environment. As a result, under time-evolution, all wave-packets will evolve into the imaginary in-gap states, displayed as corner modes under OBCs.
Our work reveals that impurity-like behaviors with unusual dynamics can emerge from quantum models with non-Hermitian boundaries. This points to a new direction that intertwines impurity physics and non-Hermitian effects, which could commonly take place in open quantum systems.
Results
Imaginary in-gap states.
We exemplify our study by starting with the Qi-Wu-Zhang (QWZ) model <cit.> describing 2D Chern insulators on a square lattice (Fig. <ref>a), i.e.,
H_0 =∑_x,y{[c^†_x+1,y(iσ_x-σ_z)/2 c_x,y +c^†_x,y+1(iσ_y-σ_z)/2 c_x,y]+c^†_x,y mσ_z c_x,y}+h.c.,
where c(x,y)=[c_a,x,y,c_b,x,y]^T is the spinor with a, b sublattice. We further consider the non-Hermitian terms along the boundary y=1:
H_γ=i∑_xc^†_x,1γσ_xc_x,1.
Eq. (<ref>) describes a non-Hermitian boundary as indicated by the red sites in Fig. <ref>a, which can be generated by coupling H_0 to an environment at y=1.
We first assume the cylinder geometry, i.e., the periodic boundary condition along x and OBC along y with y∈[1,L_y]. The real and imaginary parts of the energy spectrum are numerically obtained and shown in Figs. <ref>b and c, respectively. Two chiral edge states of the Chern insulator are observed in the real spectrum, connecting the conduction and valence band. These two edge states are respectively localized at the upper (y=1) and lower (y=L_y) boundary, as shown by Fig. <ref>d. It is also shown in Fig. <ref>c that the upper chiral mode displays a nonzero imaginary spectrum while the lower mode remains real. This is expected since the non-Hermitian term γ is only applied on the upper boundary.
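A minimal numerical sketch of this cylinder-geometry calculation is given below: it assembles H(k_x) for L_y layers from the blocks defined above, adds the boundary term iγσ_x on the y=1 layer, and diagonalizes the result. The values of m, γ, and L_y are illustrative assumptions rather than the parameters used for the figures.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Ty = (1j * sy - sz) / 2                      # inter-layer hopping block c^dag_{y+1} T_y c_y

def cylinder_hamiltonian(kx, m, gamma, Ly):
    """H(kx) for Ly layers, open along y, with i*gamma*sigma_x on the y=1 layer."""
    eps = np.sin(kx) * sx + (m - np.cos(kx)) * sz        # intra-layer Bloch block
    H = np.kron(np.eye(Ly), eps)
    H[0:2, 0:2] += 1j * gamma * sx                        # non-Hermitian boundary (y = 1)
    for y in range(Ly - 1):
        H[2*(y+1):2*(y+2), 2*y:2*(y+1)] += Ty             # hopping y -> y+1
        H[2*y:2*(y+1), 2*(y+1):2*(y+2)] += Ty.conj().T    # Hermitian-conjugate partner
    return H

m, gamma, Ly = 1.0, 0.5, 40                               # illustrative values
for kx in np.linspace(-np.pi, np.pi, 101):
    E = np.linalg.eigvals(cylinder_hamiltonian(kx, m, gamma, Ly))
    # eigenvalues with sizeable |Im E| inside the bulk gap signal the in-gap states
```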
In addition to the chiral edge states, we also observe two channels of in-gap modes for -0.6≲ k_x ≲0.6, as marked by the red solid curves in Fig. <ref>b. These in-gap modes do not connect the conduction and valence band, suggesting that they have a non-topological physical origin. Both channels describe edge states localized at the upper boundary, as evidenced by Fig. <ref>d. Moreover, as shown by Fig. <ref>c, each mode with fixed k_x (marked by a red dot) in the two edge states exhibits a nonzero imaginary eigenenergy, and is thus termed an imaginary in-gap state. We note that the dispersion of the imaginary in-gap states extends into the bulk states, displaying “tail" states, as marked by the red dashed curves in Fig. <ref>b.
Non-Hermitian proximity effect. To reveal the origin of imaginary in-gap states, we first perform a real-space renormalization group (RG) analysis (see Supplementary Note 1). We decompose the total system H=H_0+H_γ into a series of coupled 1D horizontal layers, i.e., H=∑_k_x[∑^L_y_l=1H_l,k_x+∑^L_y-1_l=1H_l,l+1,k_x],
where H_l,k_x is the Hamiltonian of the l-th layer, and H_l,l+1,k_x depicts the coupling between the l-th and (l+1)-th layer. The non-Hermitian boundary, i.e., Eq. (<ref>), is included in H_1,k_x, which reads as, H_1,k_x=(sin k_x+iγ)σ_x+(m-cos k_x)σ_z. Then, we integrate out the l=1 layer and obtain a renormalized Hamiltonian for the rest of the system, whose top layer (l=2) now becomes non-Hermitian. Then, as indicated by Fig. <ref>a, by iteratively repeating the above procedure, the effective Hamiltonian for the (l+1)-th layer can be obtained after the l-th RG step as,
H^eff_l+1,k_x=sin k_xσ_x+(m-cos k_x)σ_z+f_l(σ_0+σ_x),
where f_l is the complex self-energy with k_x-dependence. In Fig. <ref>b, we plot the calculated Im[f_l], as a function of the RG step l. As shown, Im[f_l] decays with l for all k_x, and saturates to the fixed point Im[f_l]=0 for large l. This shows that the non-Hermitian terms on the boundary can effectively penetrate into the bulk with a penetration depth. Due to its analogy with the proximity effect of superconductivity where Cooper pairs leak into the adjacent system, we term it the NHPE.
To better visualize the NHPE, we adopt OBCs along both x and y directions, and calculate the spatial distribution of the eigenstates of H. The typical spatial distribution of the bulk states is shown by Fig. <ref>c. It is clear that the bulk states remain as Bloch states and are barely affected by the non-Hermitian effects. This is expected from Fig. <ref>b, since the non-Hermitian terms are negligible for bulk states far away from the upper boundary. Then, we focus on the imaginary in-gap states and pick up their corresponding eigenstates under OBCs. The typical spatial distribution of imaginary in-gap states are shown in Figs. <ref>d and e. Due to the NHSE, which is significant near y=1, the imaginary in-gap states are found to localize either at the right or left boundary, depending on the sign of γ (Figs. <ref>d and e respectively). Moreover, they are also localized at the upper boundary due to the NHPE, with a short localization length along y up to a few layers. Therefore, the combination of NHSE and NHPE generally drives the imaginary in-gap states into corner modes under OBCs. In addition, we also plot in Fig. <ref>f the distribution of the “tail states" (the dashed curve in Fig. <ref>b) deep in the bulk. Compared to the imaginary in-gap states, these states are found to exhibit longer localization length along y because of their smaller imaginary part of eigenenergies.
Analogy between imaginary in-gap states and impurity states.
So far, we have shown that the imaginary in-gap states manifest themselves as edge modes and corner modes under the cylinder geometry and OBCs, respectively. For both cases, the imaginary in-gap states are localized at the upper boundary and decay along y, as a result of the NHPE induced by the non-Hermitian boundary.
To further reveal the physical nature of imaginary in-gap states, we now demonstrate their underlying similarity with the conventional impurity states in gapped systems. Under the cylinder geometry, k_x is a good quantum number. The total system can then be written as H=∑_k_xH_k_x, and
H_k_x =∑_y [c^†_k_x,y+1T_yc_k_x,y+h.c.]
+∑_y ϵ_k_xc^†_k_x,y c_k_x,y+ic^†_k_x,1γσ_xc_k_x,1,
where c_k_x,y=[c_a,k_x,y,c_b,k_x,y]^T, T_y=(iσ_y-σ_z)/2 and ϵ_k_x=[sink_xσ_x+(m-cosk_x)σ_z]. For fixed k_x, H_k_x describes a 1D vertical chain model with a non-Hermitian term on its first site, as indicated by Fig. <ref>a. It is clear that the localization and decay behaviors of an imaginary in-gap state are fully captured by Eq. (<ref>).
We now calculate the LDOS from Eq. (<ref>). The density of states (DOS) can be obtained from the imaginary part of the retarded Green's function (GF), i.e., ρ(ϵ)=-1/πImTr[∑_n|ψ^R_n⟩⟨ψ^L_n|/(ϵ+i0^+-E_n)], where E_n is the n-th complex eigenvalue. Here, compared to the Hermitian cases, the support of the DOS is extended to the complex plane <cit.>, i.e., ϵ=ϵ_r+iϵ_i. Besides, both the left and right eigenstates have been used to form the complete basis <cit.> that expands the GF. Then, the LDOS at the l-th site of the chain can be derived as ρ_l(ϵ_r,ϵ_i)=(1/N)∑_nδ(ϵ_r-Re[E_n])δ(ϵ_i-Im[E_n])|⟨ l|ψ^R_n⟩⟨ψ^L_n|l⟩|.
We show in Figs. <ref>a and b the calculated LDOS, focusing on the imaginary in-gap state region (-0.6≲ k_x≲0.6). At the boundary site y=1 (Fig. <ref>a) of the 1D vertical chain, besides the topological edge mode from the upper chiral edge state, we observe two significant LDOS peaks located at energies with ϵ_i<0. These peaks come from the two imaginary in-gap states localized at y=1. Moving away from y=1, the imaginary in-gap state peaks quickly decay. Meanwhile, the LDOS from the bulk states emerges, which is located on the real energy axis. As shown by Fig. <ref>b, for y≫1, both the peaks from the imaginary in-gap states and the topological edge state disappear, leaving only the bulk states. The LDOS projected onto the real energy axis is also shown in Figs. <ref>c and d for clarity.
To clearly show the similarity between imaginary in-gap states and impurity states, we consider a non-interacting pseudospin Anderson impurity model coupled to the QWZ model, i.e., H_imp=H_0+H_f, where the impurity is described by
H_f=∑_σϵ_ff^†_σf_σ+V∑_𝐤,σ(f^†_σc_𝐤,σ+h.c.).
f_σ is the annihilation operator of the impurity state with pseudospin (sublattice) σ, ϵ_f denotes the impurity energy level (independent of σ), and V its hybridization with the bath electrons. Although the impurity considered in Eq. (<ref>) is located in the bulk of the 2D Chern insulator as shown by Fig. <ref>e, the low-energy impurity state is accurately determined by a 1D Wilson open chain following the numerical renormalization group mapping scheme (see Supplementary Note 4), as schematically shown by Fig. <ref>e.
The Wilson open chain shares a similar structure with the 1D vertical chain model in Fig. <ref>a. Both of them describe a boundary site (y=1) coupled to bath degrees of freedom (y>1) via nearest-neighbor coupling. It should be noted that the y>1 sites on the Wilson chain represent the bath states in discretized energy windows. A site closer to the impurity (y=1) represents states in a higher energy window, i.e., at a shorter length scale measured from the impurity.
We then calculate the LDOS on different sites of the Wilson chain, as shown in Figs. <ref>f and g. For y being close to the impurity, two in-gap states emerge, due to the coupling of the impurity to the conduction and valence band electrons. With tuning y away from the impurity, the LDOS from the impurity states decreases and that from the bulk states increases. For large y, only the LDOS of the low-energy bulk states are left, which are located around the gap energy, as shown by Fig. <ref>g. The LDOS shown in Figs. <ref>f and g is qualitatively in analogy with the projected LDOS in Figs. <ref>c and d, implying the similarity between the imaginary in-gap states and the impurity physics.
Distinct dynamical behaviors of imaginary in-gap states.
Despite the similarity, the imaginary in-gap states also exhibit some distinct features. First, since the LDOS peaks of imaginary in-gap states are located in the complex energy plane with ϵ_i≠0, they can persist even if the real parts of their eigenenergies are immersed in the bulk continuum (see Supplementary Note 2). This explains the existence of the “tail states" observed in Fig. <ref>b.
Second, the imaginary in-gap states exhibit unusual dynamical behaviors absent in the Hermitian systems, as a result of their imaginary eigenenergies. Under time-evolution, the amplitude of an imaginary in-gap state either grows or decays, depending on the sign of γ (see Supplementary Note 3). This is a reflection of the non-Hermitian nature of the imaginary in-gap states, in contrast with the conventional impurity states.
For a Gaussian wave-packet input at the bottom layer of the system, the time-evolution of the wave-packet is calculated under OBCs and shown in Figs. <ref>a-c (for γ<0). The wave-packet first diffuses into the bulk (Fig. <ref>b) and then gradually evolves into the imaginary in-gap state eigenstate localized at the top layer. Due to the NHSE, it finally turns into the corner state as shown in Fig. <ref>c. We also start from a Gaussian wave-packet placed at any generic site at the top layer. As shown by Fig. <ref>d, the wave-packet also evolves into the imaginary in-gap state located at the corner. Hence, we observe that, for γ<0 where the amplitude of imaginary in-gap states grows, a generic Gaussian wave-packet will always evolve into the corner-localized imaginary in-gap state (see Supplementary Note 3 for the γ>0 case). Such a dynamical feature is a clear distinction between imaginary in-gap states and conventional in-gap states.
Discussion
This work reveals an impurity-like non-Hermitian phenomenon induced by non-Hermitian boundaries. Although the imaginary in-gap states constitute an edge mode, each individual imaginary in-gap state can be further explored to simulate impurity physics with novel non-Hermitian properties <cit.>. For example, by properly introducing a Hubbard interaction U on the boundary, the corresponding 1D vertical chain model in Eq. (<ref>) would bear similarities with the Wilson chain mapped from a finite-U Anderson model. Hence, Kondo-like behaviors could emerge but enriched by new features arising from the non-Hermiticity.
Although a non-Hermitian boundary with alternating gain and loss in Eq. (<ref>) is studied as an example, as we show in Methods, the NHPE and the corresponding imaginary in-gap states remain intact for gain-only or loss-only boundaries as well. In addition, we also investigate a more general 2D insulator model with non-Hermitian boundaries. As shown in Methods, although the NHSE is absent in this model, the NHPE still persists, leading to imaginary in-gap states that are manifested by a localized edge mode shown in Fig. <ref> (rather than the corner mode in Fig. <ref>d). The model-independence indicates that the NHPE should be a more general non-Hermitian effect than NHSE.
The theoretical model studied here could be realized in different experimental platforms. For example, the Chern insulator model H can be realized by magnetically doped topological insulators <cit.>. Moreover, the non-Hermitian boundary terms can be realized on the basis of reservoir engineering. The loss or gain on the boundary can be achieved by using a nonlocal coupling to auxiliary degrees of freedom which undergo local loss or gain <cit.>, as discussed in detail in Methods. Moreover, the 2D topological insulating phases can also be realized by cold atoms in optical lattices or photons in coupled cavities <cit.>. In particular, in arrays of coupled micro-ring cavities, the photon gain and loss for each cavity can be controlled independently <cit.>. These provide promising platforms to further investigate the predicted NHPE, the imaginary in-gap states, as well as their dynamical behaviors.
Methods
Real-space renormalization group analysis of non-Hermitian proximity effect.
For the open boundary perpendicular to the y-direction, the total system of a 2D lattice can be decomposed into a series of horizontal layers (labelled by l) with inter-layer coupling, and its partition function is then cast into 𝒵=∫∏_l𝒟ψ̅_l𝒟ψ_le^-(S_l+S_l,l+1), where
S_l =-∑_iω_n∑_k_x∈ BZψ̅_k_x,iω_n,l[iω_n-H_l(k_x)]ψ_k_x,iω_n,l,
where ψ_k_x,iω_n,l=[c_a,k_x,iw_n,c_b,k_x,iw_n]^T is the Grassmann field. H_l(k_x) is the Hamiltonian of the l-th horizental chain. The action for the inter-layer coupling can be written as
S_l,l+1=-∑_iω_n∑_k_x∈ BZ(ψ̅_k_x,iω_n,lT_yψ_k_x,iω_n,l+1+h.c),
where T_y is the matrix describing the inter-layer hopping, which is assumed to be l-independent. Since the action is bilinear in terms of Grassmann fields, we can integrate out the first layer and obtain an renormalized effective action for the second layer as
S_2=-∑_iω_n∑_k_x∈ BZψ̅_k_x,iω_n,2G^-1_2(iω_n,k_x)ψ_k_x,iω_n,2,
where G^-1_2(ω,k_x)=ω^+-H_2(k_x)-T_y^†(ω^+-H_1(k_x))^-1T_y
is the renormalized retarded Green's function of the second layer with ω^+=ω+i0^+. To extract the low-energy effective Hamiltonian, the low-frequency approximation can be made in G^-1_l(ω,k_x),
which well preserves the low-energy physics as long as the layer integrated out remains gapped (which is indeed the case for the QWZ model studied here, where each layer describes a 1D Su-Schrieffer-Heeger (SSH) model). Then, the effective Hamiltonian of the second layer is read off as
H_2^eff(k_x)=H_2(k_x)-T_y^†[H_1(k_x)]^-1T_y.
By treating the renormalized second layer as the first layer on top of the remaining system, then the above procedure can be performed iteratively, leading to the effective action for the renormalized third, fourth... layer. After l RG steps, an iterative relation between the effective Hamiltonian of the l-th and that of the (l+1)-th layer can be obtained as
H_l+1^eff(k_x)=H_l(k_x)-T_y^†[H_l^eff(k_x)]^-1T_y.
We now apply the real-space RG transformation to the QWZ model with a non-Hermitian boundary. The first-layer Hamiltonian with non-Hermitian terms is given by
H_1(k_x)=(sin k_x+iγ)σ_x+(m-cos k_x)σ_z,
which is a non-Hermitian version of the SSH model and exhibits non-Hermitian skin effect <cit.>.
We then derive the effective Hamiltonian for the renormalized l-th layer under the real-space RG transformation. For any 2×2 matrix A, it can be expanded into a linear combination, A=aσ_0+bσ_x+cσ_y+dσ_z, where σ_x,y,z and σ_0 are the Pauli matrices and identity matrix, respectively. Then, we have
T_y^†A^-1T_y=[(a+b)/(2 Det[A])](σ_0+σ_x),
with Det[A]=a^2-(b^2+c^2+d^2). Using the Eqs. (<ref>), (<ref>) and (<ref>), we have
H_2^eff(k_x)=sink_xσ_x+(m-cosk_x) σ_z+f_1(σ_0+σ_x),
where f_1=(sink_x+iγ)/[2(2-2cosk_x+2iγsink_x-γ^2)] is the self-energy due to the renormalization of the first layer. Accordingly, we can express the effective Hamiltonian of the (l+1)-th layer in the same form as Eq. (<ref>), i.e., Eq. (<ref>). Substituting it into the Eqs. (<ref>) and (<ref>), an iterative relation between f_l+1 and f_l can be obtained
f_l+1=(sink_x+2f_l)/[2(2sink_xf_l-2mcosk_x+m^2+1)].
The fixed points, if they exist, can be determined by requiring f_l+1=f_l=f_c for l→∞. This leads to the following equation
4sink_xf_c^2-2m(2cosk_x-m)f_c-sink_x=0.
Since the discriminant Δ=4m^2(2cosk_x-m)^2+16sin^2k_x⩾0, the two roots f_c1 and f_c2 of Eq. (<ref>) are given by
f_c1/c2=[2m(2cosk_x-m)±√(Δ)]/(8sink_x),
which implies that the self-energy f_l is real at the fixed points. This explains why the imaginary part of f_l eventually vanishes for large l, as shown in Fig. <ref>b. In addition, we can prove that f_l eventually flows to only one of the fixed points, f_c1. More subtle details can be found in the Supplementary Note 1.
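The scalar flow can be iterated directly. The sketch below starts from f_1, applies the recursion above, and compares the result with the fixed point f_c1, illustrating that Im[f_l]→0; the values of k_x, m, and γ are illustrative.

```python
import numpy as np

def rg_flow(kx, m, gamma, steps=60):
    """Iterate f_{l+1} = (sin kx + 2 f_l) / (2(2 sin kx f_l - 2m cos kx + m^2 + 1)) from f_1."""
    f = (np.sin(kx) + 1j * gamma) / (2 * (2 - 2 * np.cos(kx) + 2j * gamma * np.sin(kx) - gamma**2))
    flow = [f]
    for _ in range(steps):
        f = (np.sin(kx) + 2 * f) / (2 * (2 * np.sin(kx) * f - 2 * m * np.cos(kx) + m**2 + 1))
        flow.append(f)
    return np.array(flow)

kx, m, gamma = 0.3, 1.0, 0.5                  # illustrative values
flow = rg_flow(kx, m, gamma)
disc = 4 * m**2 * (2 * np.cos(kx) - m)**2 + 16 * np.sin(kx)**2
f_c1 = (2 * m * (2 * np.cos(kx) - m) + np.sqrt(disc)) / (8 * np.sin(kx))
print(abs(flow[-1].imag))                     # Im[f_l] -> 0: the non-Hermiticity dies out
print(abs(flow[-1] - f_c1))                   # the flow approaches the real fixed point f_c1
```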
General existence of NHPE in gapped states with non-Hermitian boundaries.
Although the non-Hermitian effects are derived from Eqs. (<ref>) and (<ref>), which describe a Chern insulator with a boundary that has alternating gain and loss (on a and b sites), the NHPE found here is in fact general. To demonstrate its generality, we first consider the non-Hermitian boundaries with only gain or only loss. Hence, we replace Eq. (<ref>) by
H_γ=i∑_xc^†_x,1γσ_0c_x,1,
which describes the gain and loss on all the sites (y=1) for γ>0 and γ<0, respectively. We then calculate the real and imaginary part of energy spectrum of the total system
H=H_0+H_γ under cylinder geometry, which are respectively shown in Figs. <ref>a and b. As we see from Figs. <ref>a and b, two imaginary in-gap states still emerge when only loss is present, similar to the results for the boundaries with alternating gain and loss. By comparing Figs. <ref>a and b with Figs. <ref>b and c, we find that the only slight difference is that the imaginary part of the eigenenergies of the upper chiral edge mode now becomes negative. This does not affect the main conclusions on the imaginary in-gap states and the NHPE.
The general existence of NHPE can be understood in the RG sense. The RG analysis discussed above is general, and can be applied to gapped quantum systems with a generic non-Hermitian boundary. For instance, we consider Eq. (<ref>) added to the QWZ model and perform the real-space RG analysis. Similar to the RG analysis for the case with alternating loss and gain, the bulk Hamiltonian can be decomposed into different layers coupled via the hopping matrix T_y. The key fact here is that the iterative relation between the effective Hamiltonian of the l-th and that of the (l+1)-th layer, i.e., Eq. (<ref>), does not change as long as T_y remains the same. Thus, the effective Hamiltonian for the (l+1)-th layer can still be written as Eq. (<ref>), which leads to the same result as Eq. (<ref>). This indicates that different non-Hermitian boundary terms only lead to a different initial value f_1, and the RG flow of f_l remains qualitatively the same, independent of f_1. Consequently, the same fixed point with Im[f_l]=0 will always be reached for large l. This means that the NHPE is general and does not depend on the form of the non-Hermitian boundary.
Since the RG analysis shows that the non-Hermitian effect induced by the boundary would always decay and vanish at large distances from the boundary, the effects from different non-Hermitian boundaries would become independent of each other for systems larger than the decay length. Therefore, similar non-Hermitian phenomena are expected if non-Hermitian terms are considered on both the upper and lower boundaries.
To further support the RG results, we also calculate the typical spatial distribution of imaginary in-gap state under OBCs. As shown in the Fig. <ref>c, corner states still emerge for the gain-only or loss-only boundaries, as a result of the combined effect of the NHPE and NHSE, indicating the general existence of NHPE.
We now show that the NHPE is not only independent of the specific forms of the boundary, but is also independent of the specific models of the insulating bulk. Instead of the QWZ model, we now consider a more general 2D two-band model with a bulk gap m described by the following Hamiltonian:
H_0(k_x,k_y)=∑_i=x,y(cosk_iσ_x+sink_iσ_y)+mσ_z,
along with the non-Hermitian boundary term H_γ=i∑_xc^†_x,1γσ_xc_x,1, i.e., the Eq. (<ref>). Then we calculate the energy spectrum under cylinder geometry. Figs. <ref>d and e show the real and imaginary part of the energy spectrum, which clearly indicate the emergence of the imaginary in-gap states, similar to those for the QWZ model.
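For completeness, the same cylinder-spectrum check can be repeated for this model, with intra-layer block cos k_xσ_x+sin k_xσ_y+mσ_z and inter-layer hopping σ^+; a short sketch with illustrative parameter values follows.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = (sx + 1j * sy) / 2                       # sigma^+, the inter-layer hopping T_y here

def spectrum(kx, m=1.0, gamma=0.5, Ly=40):    # illustrative parameter values
    eps = np.cos(kx) * sx + np.sin(kx) * sy + m * sz       # intra-layer Bloch block
    H = np.kron(np.eye(Ly), eps)
    H[0:2, 0:2] += 1j * gamma * sx                          # non-Hermitian boundary (y = 1)
    for y in range(Ly - 1):
        H[2*(y+1):2*(y+2), 2*y:2*(y+1)] += sp
        H[2*y:2*(y+1), 2*(y+1):2*(y+2)] += sp.conj().T
    return np.linalg.eigvals(H)               # complex spectrum; in-gap states carry Im E != 0
```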
As discussed above, the general existence of NHPE can be proved by real-space RG calculations. We therefore decompose the total system H=H_0+H_γ (where H_0 is given by Eq. (<ref>) above) into a series of coupled 1D horizontal layers. The Hamiltonian of the l=1 layer reads as H_1,k_x=(cosk_x+iγ)σ_x+sink_xσ_y+mσ_z and the inter-layer hopping matrix is T_y=σ^+. Then, we can derive the renormalized Hamiltonian for the (l+1)-th layer by iteratively integrating out the l-th layer, leading to:
H_l+1,k_x^eff=cosk_xσ_x+sink_xσ_y+mσ_z+f_l(σ_0-σ_z),
where complex self-energy f_l encodes the non-Hermitian effect. Similarly, it can be proved that f_l always saturates to the same fixed point with Im[f_l]=0 for large l. This clearly suggests the boundary-induced non-Hermitian effect decays and finally vanishes deep in the bulk for large l. Since Eq. (<ref>) is the minimal model of 2D insulators without any specific requirements, the NHPE found here is expected to be a general result for gapped quantum systems with a non-Hermitian boundary. This is also consistent with the emergence of imaginary in-gap states shown in Figs. <ref>d and e.
In particular, we mention that the NHSE is model-dependent and it does not occur in the above 2D insulator model. However, the NHPE remains intact. Correspondingly, the distribution of imaginary in-gap state calculated under open boundaries no longer shows up as a corner mode, but is manifested by an edge mode evenly distributed along the upper boundary (Fig. <ref>f). This indicates that the NHPE could be a more general non-Hermitian effect than the NHSE.
Local density of states of the non-Hermitian vertical chain model.
In the Hermitian systems, the DOS can be calculated from the imaginary part of the retarded Green's function ρ(ϵ)=-1/πIm Tr [∑_n |ψ_n⟩⟨ψ_n|/(ϵ+iη-E_n)], where η=0^+, H|ψ_n⟩=E_n|ψ_n⟩, n=1,2,…,N. For the LDOS at the l-th site, we have
ρ_l(ϵ)
=-1/πIm[∑_n⟨ l|ψ_n⟩⟨ψ_n|l⟩/(ϵ+iη-E_n)]
=1/π∑_nη⟨ l|ψ_n⟩⟨ψ_n|l⟩/[(ϵ-E_n)^2+η^2]
=∑_nδ(ϵ-E_n)|⟨ l|ψ_n⟩|^2.
The last step in the above equation exploits the fact that the Dirac delta function is the limit of the Lorentzian function with η going to zero, and the final result goes back to the definition of the DOS. However, Eq. (<ref>) no longer holds for non-Hermitian Hamiltonians, since E_n is now complex. Thus, the calculation of the LDOS has to be generalized to non-Hermitian systems <cit.>. For a non-Hermitian Hamiltonian, the left or right eigenstates alone do not satisfy the orthogonality condition; both the left and right
eigenstates should be used which satisfy the bi-orthogonal relation <cit.>, i.e.,
⟨ψ_m^L|ψ_n^R⟩=δ_mn,
where H|ψ_n^R⟩=E_n|ψ_n^R⟩, H^†|ψ_n^L⟩=E_n^∗|ψ_n^L⟩, n=1,2,…,N. Incorporating it into Eq. (<ref>), the LDOS at the l-th site of the 1D vertical chain model can be expressed as
ρ_l(ϵ_r,ϵ_i)= 1/N∑_nδ(ϵ_r-Re[E_n])δ(ϵ_i-Im[E_n])
×|⟨ l|ψ^R_n⟩⟨ψ^L_n|l⟩|,
where ρ_l(ϵ_r,ϵ_i) is defined on the complex energy plane, with ϵ_r (ϵ_i) representing the real (imaginary) part of the complex energy ϵ. In our numerical calculations, since the 1D vertical chain model has sublattice degrees of freedom, N=2L_y, where L_y is the 1D chain length, and the factor ⟨ l |ψ_n^R⟩⟨ψ_n^L| l ⟩ includes the sum of the (2l-1,2l-1) and (2l,2l) matrix elements of the 2L_y×2L_y matrix. Besides, in the plot of the LDOS, the δ-peaks are broadened by a finite width controlled by a factor b. This is achieved by treating the Dirac-δ function as a Lorentzian form δ(ω-ω_n)→(1/2π) b/[(ω-ω_n)^2+b^2].
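A compact sketch of this biorthogonal LDOS for a generic non-Hermitian matrix H is given below. It uses normalized Lorentzians for the broadened delta functions (the overall normalization of the broadening is a convention choice), and the orbital index l labels a single component, so the two sublattice components of a chain site can be summed afterwards; the broadening b is an illustrative value.

```python
import numpy as np
from scipy.linalg import eig

def ldos(H, l, eps_r, eps_i, b=0.05):
    """Biorthogonal LDOS at orbital index l and complex energy eps_r + i*eps_i."""
    w, vl, vr = eig(H, left=True, right=True)             # H vr = w vr,  H^dag vl = w* vl
    norm = np.einsum('in,in->n', vl.conj(), vr)           # <psi^L_n | psi^R_n>
    weight = np.abs(vr[l, :] * vl[l, :].conj() / norm)    # |<l|psi^R_n><psi^L_n|l>|
    lor_r = (b / np.pi) / ((eps_r - w.real)**2 + b**2)    # broadened delta in Re(E)
    lor_i = (b / np.pi) / ((eps_i - w.imag)**2 + b**2)    # broadened delta in Im(E)
    return np.sum(weight * lor_r * lor_i) / len(w)
```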
Time evolution of Gaussian wave-packet. We set the initial state as a 2D Gaussian wave-packet
ψ(t=0)=[1/(4πσ^2)^1/2]exp[-(x-x_0)^2/(4σ^2)-(y-y_0)^2/(4σ^2)](1,1)^T,
where (x_0,y_0) represents the center of wave-packet. According to the Schrödinger equation i∂_t|ψ(t)⟩=H|ψ(t)⟩, the
time-evolution operator of the non-Hermitian system is expressed as
U(t)=e^-iHt=∑_n e^-iE_nt|ψ^R_n⟩⟨ψ^L_n|,
which shows that those modes with Im[E_n]<0 will vanish due to the exponentially decaying factor, whereas those with Im[E_n]>0 will dominate in the long-time limit. For the model of Chern insulator with a non-Hermitian boundary in our work, since the term H_γ contains an imaginary unit factor, the total Hamiltonian satisfies H^*(γ)=H_0-H_γ=H_0+H_-γ=H(-γ). So the sign of γ determines the sign of Im[E_n], which in turn determines whether the mode amplitude grows or decays.
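A short sketch of this spectral-decomposition time evolution, for a generic non-Hermitian H and an arbitrary initial state psi0, reads:

```python
import numpy as np
from scipy.linalg import eig

def evolve(H, psi0, t):
    """|psi(t)> = sum_n e^{-i E_n t} |psi^R_n><psi^L_n|psi0> with biorthogonal normalization."""
    w, vl, vr = eig(H, left=True, right=True)
    norm = np.einsum('in,in->n', vl.conj(), vr)   # <psi^L_n | psi^R_n>
    coeff = (vl.conj().T @ psi0) / norm           # biorthogonally normalized expansion coefficients
    return vr @ (np.exp(-1j * w * t) * coeff)
# modes with Im(E_n) > 0 dominate at long times, while modes with Im(E_n) < 0 decay
```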
Realization of the non-Hermitian boundaries via coupling to environments.
In open quantum systems, the coupling between the system and the environment can be described by Lindblad master equation, i.e.,
dρ/dt=-i[H,ρ]+∑_μ(2L_μρ L_μ^†-{L_μ^† L_μ,ρ}),
where ρ is the density matrix, L_μ's are the Lindblad dissipators describing quantum jumps due to coupling to the environment. The short-time evolution is described by the Schrödinger evolution under the effective non-Hermitian Hamiltonian H_eff=H-i∑_μL_μ^† L_μ with dρ/dt=-i(H_effρ-ρ H_eff^†) <cit.>. Considering the single particle loss and gain with the loss and gain dissipators:
L_μ^l =√(γ_l)(c_μ a+c_μ b),
L_μ^g =√(γ_g)(c_μ a^†+c_μ b^†).
Eq. (<ref>) leads to the effective Hamiltonian, which reads in momentum space as
H_eff=H+i(γ_g-γ_l)(σ_0+σ_x).
The second term in Eq. (<ref>) has a form that produces the non-Hermitian boundary studied in Eq. (<ref>) and Eq. (<ref>). The loss and gain dissipators in Eq. (<ref>) can be realized in reservoir engineering by using nonlocal couplings to auxiliary degrees of freedom which undergo local loss or gain <cit.>.
Data availability
The data generated and analyzed during this study are available from the corresponding author upon reasonable request.
Code availability
All code used to generate the plots within this paper are available from the corresponding author upon reasonable request.
Acknowledgments
This work was supported by National Natural Science Foundation of China (Grant No. 12274206, No.12034014 and No.11904245), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302800), National Key R&D Program of China (Grant No. 2022YFA1403601), and the Xiaomi foundation.
Author contributions
All authors performed the calculations, discussed the results and prepared the manuscript.
Competing interests
The authors declare no competing financial interests.
References
[Breuer et al.(2002)Breuer, Petruccione
et al.]breuer2002theory
authorBreuer, H. P.,
authorPetruccione, F.
et al. titleThe theory of open quantum
systems (publisherOxford University Press on Demand,
year2002).
[Landi et al.(2022)]landi2022nonequilibrium
authorLandi, G. T.,
authorPoletti, D. and
authorSchaller, G.
titleNonequilibrium boundary-driven quantum systems: Models, methods, and properties.
journalRev. Mod. Phys. volume94,
pages045006 (year2022).
[Carmichael(2009)]carmichael2009open
authorCarmichael, H.
titleAn open systems approach to quantum optics
(publisherSpringer Science & Business Media,
year2009).
[Rotter(2009)]rotter2009non
authorRotter, I.
titleA non-Hermitian Hamilton operator and the physics of open quantum systems.
journalJ. Phys. A: Math. Theor. volume42, pages153001
(year2009).
[Bender and Boettcher(1998)]bender1998real
authorBender, C. M. and
authorBoettcher, S.
titleReal spectra in non-Hermitian Hamiltonians having PT symmetry.
journalPhys. Rev. Lett. volume80,
pages5243 (year1998).
[Esaki et al.(2011)Esaki, Sato, Hasebe,
and Kohmoto]esaki2011edge
authorEsaki, K.,
authorSato, M.,
authorHasebe, K. and
authorKohmoto, M.
titleEdge states and topological phases in non-Hermitian systems.
journalPhys. Rev. B volume84,
pages205128 (year2011).
[Lee(2016)]lee2016anomalous
authorLee, T. E.
titleAnomalous edge state in a non-Hermitian lattice.
journalPhys. Rev. Lett. volume116,
pages133903 (year2016).
[Yao et al.(2018)Yao, Song, and
Wang]yao2018non
authorYao, S.,
authorSong, F. and
authorWang, Z.
titleNon-hermitian chern bands.
journalPhys. Rev. Lett. volume121,
pages136802 (year2018).
[Kunst et al.(2018)Kunst, Edvardsson,
Budich, and Bergholtz]kunst2018biorthogonal
authorKunst, F. K.,
authorEdvardsson, E.,
authorBudich, J. C.
and authorBergholtz, E. J.
titleBiorthogonal bulk-boundary correspondence in non-Hermitian systems.
journalPhys. Rev. Lett.
volume121, pages026808
(year2018).
[Kawabata et al.(2018)Kawabata, Shiozaki,
and Ueda]kawabata2018anomalous
authorKawabata, K.,
authorShiozaki, K. and
authorUeda, M.
titleAnomalous helical edge states in a non-Hermitian Chern insulator.
journalPhys. Rev. B volume98,
pages165148 (year2018).
[Gong et al.(2018)Gong, Ashida, Kawabata,
Takasan, Higashikawa, and Ueda]gong2018topological
authorGong, Z.,
authorAshida, Y.,
authorKawabata, K.,
authorTakasan, K.,
authorHigashikawa, S.
and authorUeda, M.
titleTopological phases of non-Hermitian systems.
journalPhys. Rev. X volume8,
pages031079 (year2018).
[Lee et al.(2019a)Lee, Ahn,
Zhou, and Vishwanath]lee2019topological
authorLee, J. Y.,
authorAhn, J.,
authorZhou, H. and
authorVishwanath, A.
titleTopological correspondence between Hermitian and non-Hermitian systems: anomalous dynamics.
journalPhys. Rev. Lett. volume123,
pages206404 (year2019a).
[Wang et al.(2019a)Wang,
Ruan, and Zhang]wang2019non
authorWang, H.,
authorRuan, J. and
authorZhang H.
titleNon-Hermitian nodal-line semimetals with an anomalous bulk-boundary correspondence.
journalPhys. Rev. B volume99,
pages075130 (year2019a).
[Kawabata et al.(2019)Kawabata, Shiozaki,
Ueda, and Sato]kawabata2019symmetry
authorKawabata, K.,
authorShiozaki, K.,
authorUeda, M. and
authorSato, M.
titleSymmetry and topology in non-Hermitian physics.
journalPhys. Rev. X volume9,
pages041015 (year2019).
[Luo and Zhang(2019)]luo2019higher
authorLuo, X. W. and
authorZhang, C.
titleHigher-order topological corner states induced by gain and loss.
journalPhys. Rev. Lett. volume123,
pages073601 (year2019).
[Zhang et al.(2020)Zhang, Yang, and
Fang]zhang2020correspondence
authorZhang, K.,
authorYang, Z. and
authorFang, C.
titleCorrespondence between winding numbers and skin modes in non-Hermitian systems.
journalPhys. Rev. Lett. volume125,
pages126402 (year2020).
[Roccati(2023)]roccati2023hermitian
authorRoccati, F.
et al.
titleHermitian and non-Hermitian topology from photon-mediated interactions.
journalarXiv:2303.00762.
[Yao and Wang(2018)]yao2018edge
authorYao, S. and
authorWang, Z.
titleEdge states and topological invariants of non-Hermitian systems.
journalPhys. Rev. Lett. volume121,
pages086803 (year2018).
[Lee and Thomale(2019)]lee2019anatomy
authorLee, C. H. and
authorThomale, R.
titleAnatomy of skin modes and topology in non-Hermitian systems.
journalPhys. Rev. B volume99,
pages201103 (year2019).
[Lee et al.(2019b)Lee, Li,
and Gong]lee2019hybrid
authorLee, C. H.,
authorLi, L. and
authorGong, J.
titleHybrid higher-order skin-topological modes in nonreciprocal systems.
journalPhys. Rev. Lett. volume123,
pages016805 (year2019b).
[Longhi(2019)]longhi2019probing
authorLonghi, S.
titleProbing non-Hermitian skin effect and non-Bloch phase transitions.
journalPhys. Rev. Research volume1,
pages023013 (year2019).
[Okuma et al.(2020)Okuma, Kawabata,
Shiozaki, and Sato]okuma2020topological
authorOkuma, N.,
authorKawabata, K.,
authorShiozaki, K. and
authorSato, M.
titleTopological origin of non-Hermitian skin effects.
journalPhys. Rev. Lett. volume124,
pages086801 (year2020).
[Yi and Yang(2020)]yi2020non
authorYi, Y. and
authorYang, Z.
titleNon-Hermitian skin modes induced by on-site dissipations and chiral tunneling effect.
journalPhys. Rev. Lett. volume125,
pages186802 (year2020).
[Kawabata et al.(2020)Kawabata, Sato, and
Shiozaki]kawabata2020higher
authorKawabata, K.,
authorSato, M. and
authorShiozaki, K.
titleHigher-order non-Hermitian skin effect.
journalPhys. Rev. B volume102,
pages205118 (year2020).
[Okuma and Sato(2021)]okuma2021non
authorOkuma, N. and
authorSato, M.
titleNon-Hermitian skin effects in hermitian correlated or disordered systems: Quantities sensitive or insensitive to boundary effects and pseudo-quantum-number.
journalPhys. Rev. Lett. volume126,
pages176601 (year2021).
[Sun et al.(2021)Sun, Zhu, and
Hughes]sun2021geometric
authorSun, X. Q.,
authorZhu, P. and
authorHughes, T. L.
titleGeometric response and disclination-induced skin effects in non-Hermitian systems.
journalPhys. Rev. Lett. volume127,
pages066401 (year2021).
[Roccati et al.(2021)]roccati2021non
authorRoccati, F.
titleNon-Hermitian skin effect as an impurity problem.
journalPhys. Rev. A volume104,
pages022215 (year2021).
[Li et al.(2022)Li, Liang, Wang, Lu, and
Liu]li2022gain
authorLi, Y.,
authorLiang, C.,
authorWang, C.,
authorLu, C. and
authorLiu, Y. C.
titleGain-loss-induced hybrid skin-topological effect.
journalPhys. Rev. Lett. volume128,
pages223903 (year2022).
[Zhang et al.(2022)]zhang2022review
authorZhang, X.
et al.
titleA review on non-Hermitian skin effect.
journalAdvances in Physics: X volume7,
pages2109431 (year2022).
[Shen et al.(2018)Shen, Zhen, and
Fu]shen2018topological
authorShen, H.,
authorZhen, B. and
authorFu, L.
titleTopological band theory for non-Hermitian Hamiltonians.
journalPhys. Rev. Lett. volume120,
pages146402 (year2018).
[Zhou et al.(2018)Zhou, Peng, Yoon, Hsu,
Nelson, Fu, Joannopoulos, Soljačić, and
Zhen]zhou2018observation
authorZhou, H.,
authorPeng, C.,
authorYoon, Y.,
authorHsu, C. W.,
authorNelson, K. A.,
authorFu, L.,
authorJoannopoulos, J. D.,
authorSoljačić, M.
and authorZhen, B.
titleObservation of bulk Fermi arc and polarization half charge from paired exceptional points.
journalScience volume359,
pages1009 (year2018).
[Yin et al.(2018)Yin, Jiang, Li, Lü,
and Chen]yin2018geometrical
authorYin, C.,
authorJiang, H.,
authorLi, L.,
authorLü, R. and
authorChen, S.
titleGeometrical meaning of winding number and its characterization of topological phases in one-dimensional chiral non-Hermitian systems.
journalPhys. Rev. A volume97,
pages052115 (year2018).
[Yokomizo and Murakami(2019)]yokomizo2019non
authorYokomizo, K. and
authorMurakami, S.
titleNon-bloch band theory of non-hermitian systems.
journalPhys. Rev. Lett. volume123,
pages066404 (year2019).
[Song et al.(2019a)Song,
Yao, and Wang]song2019non
authorSong, F.,
authorYao, S. and
authorWang, Z.
titleNon-Hermitian topological invariants in real space.
journalPhys. Rev. Lett. volume123,
pages246801 (year2019a).
[Deng and Yi(2019)]deng2019non
authorDeng, T.-S. and
authorYi, W.
titleNon-Bloch topological invariants in a non-Hermitian domain wall system.
journalPhys. Rev. B volume100,
pages035102 (year2019).
[K. Kawabata and Sato(2020)]kawabata2020non
authorK. Kawabata, N. O.
and authorSato, M.
titleNon-Bloch band theory of non-Hermitian Hamiltonians in the symplectic class.
journalPhys. Rev. B volume101,
pages195147 (year2020).
[Yang et al.(2020)Yang, Zhang, Fang, and
Hu]yang2020non
authorYang, Z.,
authorZhang, K.,
authorFang, C. and
authorHu, J.
titleNon-Hermitian bulk-boundary correspondence and auxiliary generalized Brillouin zone theory.
journalPhys. Rev. Lett. volume125,
pages226402 (year2020).
[Kawabata et al.(2021)Kawabata, Shiozaki,
and Ryu]kawabata2021topological
authorKawabata, K.,
authorShiozaki, K. and
authorRyu, S.
titleTopological field theory of non-hermitian systems.
journalPhys. Rev. Lett. volume126,
pages216405 (year2021).
[Xue et al.(2021)Xue, Li, Hu, Song, and
Wang]xue2021simple
authorXue, W. T.,
authorLi, M. R.,
authorHu, Y. M.,
authorSong, F. and
authorWang, Z.
titleSimple formulas of directional amplification from non-bloch band theory.
journalPhys. Rev. B volume103,
pagesL241408 (year2021).
[Wu et al.(2022)Wu, Xie, Zhou, and
An]wu2022connections
authorWu, D.,
authorXie, J.,
authorZhou, Y. and
authorAn, J.
titleConnections between the open-boundary spectrum and the generalized Brillouin zone in non-Hermitian systems.
journalPhys. Rev. B volume105,
pages045422 (year2022).
[Rui et al.(2022)Rui, Zheng, Wang, and
Wang]rui2022non
authorRui, W. B.,
authorZheng, Z.,
authorWang, C. and
authorWang Z. D.
titleNon-Hermitian spatial symmetries and their stabilized normal and exceptional topological semimetals.
journalPhys. Rev. Lett. volume128,
pages226401 (year2022).
[Yoshida et al.(2018)Yoshida, Peters, and
Kawakami]yoshida2018non
authorYoshida, T.,
authorPeters, R. and
authorKawakami, N.
titleNon-Hermitian perspective of the band structure in heavy-fermion systems.
journalPhys. Rev. B volume98,
pages035141 (year2018).
[Song et al.(2019b)Song,
Yao, and Wang]song2019non1
authorSong, F.,
authorYao, S. and
authorWang, Z.
titleNon-Hermitian skin effect and chiral damping in open quantum systems.
journalPhys. Rev. Lett. volume123,
pages170401 (year2019b).
[Zhou et al.(2020)Zhou, Wang, and
Wang]zhou2020renormalization
authorZhou, B.,
authorWang, R. and
authorWang, B.
titleRenormalization group approach to non-Hermitian topological quantum criticality.
journalPhys. Rev. B volume102,
pages205116 (year2020).
[Meissner(1960)]meissner1960superconductivity
authorMeissner, H.
titleSuperconductivity of contacts with interposed barriers.
journalPhys. Rev. volume117,
pages672 (year1960).
[Liu et al.(2011)Liu, Wang, Xie, and
Yu]liu2011abelian
authorLiu, X.,
authorWang, Z.,
authorXie, X. C. and
authorYu, Y.
titleAbelian and non-Abelian anyons in integer quantum anomalous Hall effect and topological phase transitions via superconducting proximity effect.
journalPhys. Rev. B volume83,
pages125105 (year2011).
[Xu et al.(2014)Xu, Liu, Wang, Ge, Liu,
Yang, Chen, Liu, Xu, Gao et al.]xu2014artificial
authorXu, J. P.
et al.
titleArtificial topological superconductor by the proximity effect.
journalPhys. Rev. Lett. volume112,
pages217001 (year2014).
[Zhu et al.(2016)Zhu, Zhang, and
Zhang]zhu2016proximity
authorZhu, G. Y.,
authorZhang, F. C. and
authorZhang G. M.
titleProximity-induced superconductivity in monolayer CuO_2 on cuprate substrates.
journalPhys. Rev. B volume94,
pages174501 (year2016).
[Yu(1965)]yu1965bound
authorYu, L.
titleBound state in superconductors with paramagnetic impurities.
journalActa. Phys. Sin. volume21,
pages75 (year1965).
[Shiba(1968)]shiba1968classical
authorShiba, H.
titleClassical spins in superconductors.
journalProg. Theor. Phys. volume40,
pages435 (year1968).
[Rusinov(1969)]rusinov1969superconductivity
authorRusinov, A. I.
titleSuperconductivity near a paramagnetic impurity.
journalJETP Lett. volume9,
pages85 (year1969).
[Wang et al.(1996)Wang, Qin, Lu, Yu, and
Su]wang1996impurity
authorWang, W.,
authorQin, S.,
authorLu, Z. Y.,
authorYu, L. and
authorSu, Z.
titleImpurity energy level in the Haldane gap.
journalPhys. Rev. B volume53,
pages40 (year1996).
[Hyman and Yang(1997)]hyman1997impurity
authorHyman, R. A. and
authorYang, K.
titleImpurity driven phase transition in the antiferromagnetic spin-1 chain.
journalPhys. Rev. Lett. volume78,
pages1783 (year1997).
[Balatsky et al.(2006)Balatsky, Vekhter,
and Zhu]balatsky2006impurity
authorBalatsky, A. V.,
authorVekhter, I. and
authorZhu, J. X.
titleImpurity-induced states in conventional and unconventional superconductors.
journalRev. Mod. Phys. volume78,
pages373 (year2006).
[Yin et al.(2015)Yin, Wu, Wang, Ye, Gong,
Hou, Shan, Li, Liang, Wu et al.]yin2015observation
authorYin, J. X.
et al.
titleObservation of a robust zero-energy bound state in iron-based superconductor Fe(Te,Se).
journalNat. Phys. volume11,
pages543 (year2015).
[Zheng et al.(2015)Zheng, Deng, Qiu,
Zhong, Yang, and Wang]zheng2015interplay
authorZheng, S. H.
et al.
titleInterplay of quantum impurities and topological surface modes.
journalPhys. Lett. A volume379,
pages2890 (year2015).
[Sun et al.(2016)Sun, Zhang, Hu, Li,
Wang, Ma, Xu, Gao, Guan, Li et al.]sun2016majorana
authorSun, H. H.
et al.
titleMajorana zero mode detected with spin selective Andreev reflection in the vortex of a topological superconductor.
journalPhys. Rev. Lett. volume116,
pages257003 (year2016).
[Qi et al.(2006)Qi, Wu, and
Zhang]qi2006topological
authorQi, X. L.,
authorWu, Y. S. and
authorZhang, S. C.
titleTopological quantization of the spin Hall effect in two-dimensional paramagnetic semiconductors.
journalPhys. Rev. B volume74,
pages085308 (year2006).
[Brouwer et al.(1997)Brouwer, Silvestrov,
and Beenakker]brouwer1997theory
authorBrouwer, P.,
authorSilvestrov, P. and
authorBeenakker, C.
titleTheory of directed localization in one dimension.
journalPhys. Rev. B volume56,
pagesR4333 (year1997).
[Mudry et al.(1998)Mudry, Brouwer,
Halperin, Gurarie, and Zee]mudry1998density
authorMudry, C.,
authorBrouwer, P.,
authorHalperin, B.,
authorGurarie, V. and
authorZee, A.
titleDensity of states in the non-Hermitian Lloyd model.
journalPhys. Rev. B volume58,
pages13539 (year1998).
[Brody(2013)]Brody_2014
authorBrody, D. C.
titleBiorthogonal quantum mechanics.
journalJ. Phys. A: Math. Theor. volume47,
pages035305 (year2013).
[Sukhachov and Balatsky(2020)]sukhachov2020non
authorSukhachov, P. O.
and authorBalatsky, A. V.
titleNon-Hermitian impurities in Dirac systems.
journalPhys. Rev. Research
volume2, pages013325 (year2020).
[Li et al.(2021)Li, Lee, and
Gong]li2021impurity
authorLi, L.,
authorLee, C. H. and
authorGong, J.
titleImpurity induced scale-free localization.
journalCommun. Phys. volume4,
pages1 (year2021).
[Rakagawa(2018)]nakagawa2018non
authorNakagawa, M.,
authorKawakami, N. and
authorUeda, M.
titleNon-Hermitian Kondo effect in ultracold alkaline-earth atoms.
journalPhys. Rev. Lett. volume121,
pages203001 (year2018).
[Yu et al.(2018)]yu2010quantized
authorYu, R.
et al.
titleQuantized anomalous Hall effect in magnetic topological insulators.
journalScience volume329,
pages61 (year2010).
[Reiter et al.(2012)]reiter2012effective
authorReiter, F. and
authorSørensen, A. S.
titleEffective operator formalism for open quantum systems.
journalPhys. Rev. A volume85,
pages032111 (year2012).
[Zhao et al.(2018)]zhao2018topological
authorZhao, H.
et al.
titleTopological hybrid silicon microlasers.
journalNat. Commun. volume9,
pages981 (year2018).
[Mittal et al.(2019)]mittal2019photonic
authorMittal, S.
et al.
titlePhotonic quadrupole topological phases.
journalNature Photonics volume13,
pages692 (year2019).
|
http://arxiv.org/abs/2307.00789v1
|
20230703071118
|
Utilizing wearable technology to characterize and facilitate occupant collaborations in flexible workspaces
|
[
"Kristi Maisha",
"Mario Frei",
"Matias Quintana",
"Yun Xuan Chua",
"Rishee Jain",
"Clayton Miller"
] |
cs.HC
|
[
"cs.HC"
] |
Preprint accepted at CISBAT 2023 - The Built Environment in Transition, Hybrid International Conference, EPFL, Lausanne, Switzerland, 13-15 September 2023
^1 College of Design and Engineering, National University of Singapore (NUS), Singapore
^2 Urban Informatics Lab, Stanford University, USA
^3 Future Cities Laboratory Global, Singapore-ETH Centre, Singapore
^*[email protected]
Hybrid working strategies have become, and will continue to be, the norm for many offices.
This raises two considerations: newly unoccupied spaces needlessly consume energy, and the occupied spaces need to be effectively used to facilitate meaningful interactions and create a positive, sustainable work culture.
This work aims to determine when spontaneous, collaborative interactions occur within the building and the environmental factors that facilitate such interactions.
This study uses smartwatch-based micro-surveys using the Cozie platform to identify the occurrence of and spatially place interactions while categorizing them as a collaboration or distraction.
This method uniquely circumvents pitfalls associated with surveying and qualitative data collection: occupant behaviors are identified in real-time in a non-intrusive manner, and survey data is corroborated with quantitative sensor data.
A proof-of-concept study was deployed with nine hybrid-working participants providing 100 micro-survey cluster responses over approximately two weeks.
The results show the spontaneous interactions occurring in hybrid mode are split evenly among the categories of collaboration, wanted socialization, and distraction and primarily occur with coworkers at one's desk.
From these data, we can establish various correlations between the occurrence of positive spontaneous interactions and different factors, such as the time of day and the locations in the building.
This framework and first deployment provide the foundation for future large-scale data collection experiments and human interaction modeling.
§ INTRODUCTION
Human-to-human interactions play an essential role in defining our society.
Within the office, collaboration and interactions among employees are key determining factors for productivity and success <cit.>.
Specifically, unplanned, spontaneous interactions can provide opportunities for unexpected collaboration and discovery.
Analyses of research laboratories in the US found that during tasks that involved creative problem-solving, discussions with coworkers were essential, and someone's successful performance could be correlated to the breadth of and easy access to their network <cit.>.
These benefits, however, are accompanied by the possibility of interaction being a distraction to one's workflow.
The assessment of interactions is often accompanied by an analysis of spatial planning: an open plan may allow for more collaboration but also introduce noise and a lack of privacy, or the opposite for closed cubicles.
The COVID-19 pandemic has altered the paradigm of human interactions within workspaces.
Price Waterhouse Cooper's 2021 US Remote Work survey found that only 21% of employers felt that employees needed to be in the office full time, and 55% of employees wanted to work remotely at least three days per week, but 87% of employees still felt that the office space was essential for collaboration <cit.>.
Space utilization and teamwork among employees are significant concerns for companies.
A recent study shows that when comparing interactions conducted remotely rather than in person, a switch to remote work often leads to a smaller collaboration network, indicating less knowledge transfer <cit.>.
In-person and synchronous communication (such as video calls) is better for more complex information, reaching a common understanding, and community building, but when remote, most people switch to asynchronous communication (messages and emails).
This study also indicated that a shift to remote work could lead to reduced productivity and innovation, leading to the following questions:
Given the hybrid work dynamics in a post-COVID-19 world, how can we identify quality interactions among employees?
How can existing building stock be accurately optimized to support quality occupant interactions?
§.§ Identifying interactions in the workplace
The occurrence of occupant interactions has previously been studied and predicted using sensor data.
The activity state of the plug-in devices at one's desk can be used to infer whether someone is at their workstation <cit.>.
When more than one occupant is away from their desk, interactions can occur, creating a model that reflects potential employee interactions.
Another method uses environmental data, such as noise levels, CO2 levels, or temperature, to predict the occurrence of interactions <cit.>.
However, the occurrence of an interaction in these studies does not indicate whether the interaction was meaningful.
Optimizing our buildings to best support quality interactions and best advise people's habits and behaviors requires a more nuanced and subjective understanding of how each individual perceives the impact of their interactions.
In other words, humans must become the sensor and provide data on their opinions and feelings.
The smartwatch application Cozie (https://cozie-apple.app/) was developed as a data collection platform that uses micro-ecological momentary assessments (micro-EMAs) to collect right-here-right-now user perceptions of their environment <cit.>.
In the context of occupant interactions, this means a near-immediate recall of recent interactions and their associated value.
Cozie has also been effectively used to assess occupant experiences with thermal comfort, movement, infectious disease risk, privacy, noise distractions, and to nudge occupants to make decisions <cit.>.
This work presents and pilots a method to use the Cozie app and surveying to establish a pattern of interactions within office spaces.
This method uniquely provides the ability to identify and differentiate interactions based on the value they provide to a person's work and personal life.
This data allows for correlations between collaborative, spontaneous interactions and different factors, such as the time of day and the locations in the building.
In addition, we understand which general topics are most prevalent (work, personal life, etc.) as well as the perceived value of these interactions from the occupants themselves.
§ METHODOLOGY
Four data collection methods were developed for this study: an onboarding survey at the start of the study, micro-EMA surveys, an end-of-day survey, and an exit interview tailored to each participant's individual survey responses.
The onboarding survey aims to gain a baseline understanding of each participant's schedule and work patterns as well as their feelings regarding the value of in-person work.
Figure <ref> shows the micro-EMA survey developed to assess whether a spontaneous interaction occurs and the circumstances around that interaction.
The questions within the red dashed box gather data on the subjective details of the interaction, while the other questions identify the objective characteristics.
These subjective questions aim to determine whether the interaction positively impacts the participant's work performance and are being compared to identify the most effective language.
The final screen of the micro-survey prompts the user to complete the flow again for any other spontaneous interactions that occurred after the previous submission.
The set of responses given at one time is referred to as a micro-survey cluster.
The end-of-day survey, designed to take less than five minutes, validates the interaction count found through the micro-EMA surveys and asks the participant to reflect on the interactions' overall value and general topics.
The three question sets described above were reviewed by external researchers in the field.
Finally, in the exit interview, participants are asked to provide their thought processes as they complete the surveys and explain their feelings on the accuracy of the questions.
This step included discussions of the participant's definition of subjective terms, such as valuable and quality, and discussions of their confidence in interaction recall when answering the micro-EMA and end-of-day survey.
§.§ Experimental deployment
This pilot study aims to assess the efficacy of the developed question flow and surveys and determine the relevance of the collected data.
Nine participants were selected to wear an Apple smartwatch for approximately two weeks and submit at least 100 micro-survey clusters.
All participants had flexible, hybrid work schedules with a workspace in the same Singaporean office building, though potentially in different spaces.
Reminders to complete a micro-survey cluster were given every hour on their iPhone and Apple Watch from 9:00 AM to 7:00 PM.
At 7:00 PM, participants were also reminded to complete the end-of-day survey.
Beyond the participant responses, the experiment collected data on heart rate, step count, and noise levels from the Apple Watch's built-in sensors.
§ RESULTS AND DISCUSSION
The micro-survey was generally effective in contextualizing interactions, particularly for the objective questions.
Figure <ref>a displays the response breakdown for the objective questions.
About half of the micro-survey responses were given when a spontaneous interaction had occurred.
Out of these, the most common response for with whom and where the interactions occurred was with coworkers at the participant's desk.
This indicates that a significant pool of the responses is relevant to an occupant's work experience in the office.
Participants felt confident that the recall period of one hour was appropriate for remembering interactions easily.
Additionally, they felt that most aspects of the interaction and its impact on them were covered with the survey questions.
Figure <ref>b shows the results of the subjective micro-survey questions (those outlined with a red dashed line in Figure <ref>) that were designed to address how the interaction affected the participant.
These results demonstrate that a significant portion of collaborations are deemed valuable, and a significant portion of distractions are not, as expected, but the wanted socialization category is more evenly split.
Of the 90 Wanted socialization interactions, 51% were said not to impact the participant's focus, while the remaining 49% partially or fully impacted the participant's focus.
These near-even splits indicate that the question of categorizing the interaction is not redundant and provides a unique dimension of the interaction's impact.
The results also show that whether the participant initiated the interaction or not has very little correlation to whether or not the interaction was valuable or if the interaction impacted their focus.
It was expected that if an interaction was deemed valuable, the participant would not consider it to have an impact on their focus.
Out of the 211 valuable interactions, 41% had no impact on the participants' focus, while the remaining 59% at least partially impacted their focus, meaning that there is actually not a strong correlation between value and impact on focus.
For the end-of-day survey (results not visualized), participants felt varying levels of confidence in answering the questions.
For some, recalling interaction counts for the entire day was difficult.
The split of topics was usually divided somewhat evenly between work and personal life, with a slightly higher percentage of work topics.
One participant indicated that they defaulted to 60-70% of interactions being work-related and the remaining being personal life-related unless something significantly different occurred during the day.
The participants did, however, indicate that the end-of-day survey reflected the overall interaction patterns of the day, which the participants felt was personally interesting.
A few of the participants also indicated that their perception of the interactions when completing the end-of-day survey differed from their impression at the moment when completing the micro-survey.
Data and code from this deployment are found in the following open GitHub repository: <https://github.com/buds-lab/occupant-interactions-workspaces>
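To illustrate how such correlations can be extracted, the following is a minimal, hypothetical Python sketch of tabulating micro-survey responses by category, hour of day, and building location; the file name and column names ("category", "location", "timestamp") are assumptions and may differ from the repository's actual schema.

# Hypothetical analysis sketch; column and file names are assumptions.
import pandas as pd

df = pd.read_csv("micro_survey_responses.csv", parse_dates=["timestamp"])

# Share of interaction categories (collaboration / wanted socialization / distraction)
category_share = df["category"].value_counts(normalize=True)

# Counts of reported spontaneous interactions by hour of day and by location in the building
by_hour = df.groupby(df["timestamp"].dt.hour)["category"].count()
by_location = df.groupby("location")["category"].count()

print(category_share, by_hour, by_location, sep="\n")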
§.§ Suggestions for methodological improvement for future studies
In the onboarding session, providing a more in-depth understanding of what each question is asking for could be valuable.
While the goal is to obtain information on the experience of the participant, participants were not always confident about their selections, which could lead to more frustration during the experimental process.
In the exit interview, participants were asked to consider an interaction that they had experienced and describe their thought process at each question.
Including this exercise in the onboarding would be valuable, where the participant can demonstrate how they plan to approach each question.
This would allow the researcher to determine any glaring interpretation issues as well as give the participant confidence in their understanding, making the survey process smoother and less cognitively intensive.
A similar walkthrough with the end-of-day survey could also eliminate some of the confusion and provide the participants with more substantial reasoning behind each question.
For the questions themselves, multiple participants indicated an interest in a question asking about the duration of the interaction.
This could be a better indicator of the impact on focus or value depending on the interaction categorization while also providing a benchmark for what extent of conversation is considered an interaction.
Additionally, a phrasing change should be considered for the Home option in the location question, to better reflect the intended meaning of leisure time in one's own home, and for Household member, to separate out one's partner, who is currently usually categorized under Other by the participants.
Beyond the pain point of terminology, some participants expressed that the working hours required by the study were a bit bothersome because their own workdays started later or ended earlier.
Changing the parameters of each participant's work hours could also reduce potential frustrations.
Another suggestion is related to the fact that terms such as valuable, focus, and quality are defined differently from person to person.
While it is indicated above that some level of clarification and defining would be beneficial for the participant, it is also necessary to balance the need for an authentic appraisal of the interaction's impact, which could be lost with over-prescription.
It is also important to note that the definitions of such terms vary depending on the participant's mood or current schedule.
For instance, to the question, Did this interaction impact your focus?, one participant never answered yes despite marking the interaction as a distraction because the participant felt that they were currently not in a state of intense focus.
Other participants indicated that whether an interaction was valuable could vary depending on the mood that they were in that day.
In a similar vein, when categorizing the interactions, some participants considered almost all non-work-related interactions to be distractions.
In contrast, other participants felt that virtually no interaction was purely a distraction and rarely selected it.
Figure <ref>c demonstrates these participant differences.
Participants #1, 2, and 6 rarely, if ever, selected distraction, as they stated that some conversation with colleagues is beneficial, and they had a higher threshold for what might be distracting.
In contrast, Participants #5, 7, and 8 have relatively fewer spontaneous collaborations and tend to categorize non-work-related interactions as distractions more readily.
These different profiles lend credence to the inclusion of multiple answer choices (i.e., distraction and wanted socialization) and multiple layers of subjective questions.
§ CONCLUSIONS
This study details developing and testing a method for identifying and assessing interactions during the workday, particularly within the physical office space.
A question flow was established and implemented into the Cozie app to collect information on where and with whom an interaction is occurring, as well as the impact of the interaction on the occupant's personal and work life.
An onboarding survey, end-of-day surveys, and exit interviews supported this question flow.
The method was tested with nine participants working within the same office building and was shown to capture the quantity and impact of interactions successfully.
It also highlighted the differences in how people interpret that impact, which should be incorporated into the experimental design of more significant future deployments.
In future work, the method can be aided with other sensor-based data, such as noise levels or workspace occupancy, to assess better when interactions are occurring.
§ ACKNOWLEDGEMENTS
This research has been supported by the following Singapore Ministry of Education (MOE) Tier 1 Grants: A-0008305-01-00, A-0008301-01-00, and A-8000139-01-00. Kristi Maisha gratefully acknowledges financial support for this research by the Fulbright U.S. Student Program, which is sponsored by the U.S. Department of State.
§ REFERENCES
|
http://arxiv.org/abs/2307.02773v1
|
20230706044947
|
SeLiNet: Sentiment enriched Lightweight Network for Emotion Recognition in Images
|
[
"Tuneer Khargonkar",
"Shwetank Choudhary",
"Sumit Kumar",
"Barath Raj KR"
] |
cs.CV
|
[
"cs.CV",
"cs.HC"
] |
SeLiNet: Sentiment enriched Lightweight Network for Emotion Recognition in Images
Tuneer Khargonkar1,
Shwetank Choudhary2,
Sumit Kumar3,
Barath Raj KR4
Samsung R&D Institute, Bangalore, India
Email: {1t.khargonkar, 2sj.choudhary, 3sumit.kr, 4barathraj.kr}@samsung.com
August 1, 2023
===================================================================================================================================================================================================
In this paper, we propose a sentiment-enriched lightweight network SeLiNet and an end-to-end on-device pipeline for contextual emotion recognition in images. SeLiNet model consists of body feature extractor, image aesthetics feature extractor, and multitask learning-based fusion network which jointly estimates discrete emotion and human sentiments tasks. On the EMOTIC dataset, the proposed approach achieves an Average Precision (AP) score of 27.17 in comparison to the baseline AP score of 27.38 while reducing the model size by >85%. In addition, we report an on-device AP score of 26.42 with reduction in model size by >93% when compared to the baseline.
§ INTRODUCTION
Understanding the emotional states of the people in the images is an emerging research area in the domain of computer vision. Ability to correctly perceive emotions can help improve human-computer interactions. In the case of smartphones, several use cases can be built such as queries based on-device image search, dynamically uncluttering the notification panel based on user emotions, etc. Further, it has other advanced applications like modeling robot behavior as per the perceived emotion of the user.
Conventionally, researchers have used facial expressions <cit.> based features to process human emotions. Recently, scientific studies have established that perception of emotions is also influenced by context <cit.> such as background scene <cit.>, body posture <cit.>, image composition <cit.>, gait analysis <cit.> etc. Several previous works have achieved better performance by considering these contexts.
Previous studies show that image aesthetics assessment <cit.> is a crucial cue to understand the emotions evoked by the images. Aesthetics response towards images may depend upon many components such as composition, colorfulness, spatial organization, emphasis, motion, depth, or presence of humans <cit.>. Traditional works have used low-level handcrafted visual features <cit.> to understand the aesthetics and related image emotions. Recent works based on deep learning <cit.> extract mid and global-level features such as composition, semantics, and emphasis <cit.> to classify image emotions. These works try to understand human emotions evoked by the pictures and are able to achieve improved results by considering the aesthetics properties of the images. Understanding the composition and semantics of the images can help capture the high-level contextual properties like the object’s spatial organization, the relationship between various local level features, etc., and thus can also be beneficial to the task of recognizing the emotional states of people in the images. Image aesthetics assessment has an impact on human sentiment also. It may be either positive, negative, or neutral. For example, images that convey a pleasant mood are generally rated high on the aesthetics scale, and vice-versa. Such images are also known to elicit positive emotions. Taking inspiration from these discussed studies, we explore image aesthetics assessment-based features along with body features to understand the sentiment and emotional states of the people in the images.
Several studies show that privacy is one of the leading concerns among people <cit.>. With the rise in ownership of smartphones <cit.>, this concern is particularly high among smartphone users <cit.>. To this end, we present an end-to-end on-device novel pipeline consisting of the sentiment-enriched lightweight model called SeLiNet for human emotion understanding from the image.
Our main contributions can be summarized as:
* We propose a sentiment-enriched lightweight model SeLiNet and an end-to-end pipeline for on-device emotion recognition in images, which achieves an average precision (AP) score comparable to the baseline system with a significant reduction in model size (85% ↓) and inference time (78% ↓).
* We demonstrate that the image aesthetics features contribute in improving the overall performance of the task of emotion recognition in images.
* We conceptualize the problem as multitask learning-based and make predictions for discrete emotions and related sentiments. We then use sentiment knowledge in the post-processing to enhance emotion predictions and show improved results.
§ RELATED WORK
Emotion recognition is a well-studied task in the vision field. Traditional works have used hand-crafted features <cit.> for the emotion recognition task. Deep learning networks have taken into account facial expressions <cit.>, gait analysis <cit.> and body posture <cit.> etc. based features to predict emotions. <cit.> proposed facial expressions based on compound emotion detection such as ’happily disgusted’ or ’angrily surprised’ and thus provide deeper insights about expressed emotions. While most of these works have modeled emotion detection tasks as the categorical problem, some have tried to use the valence, arousal, and dominance (VAD) model <cit.> based on continuous emotional space.
Recently, several works have also demonstrated the importance of contextual cues in interpreting emotional states. <cit.> presents two-stream encoding networks capturing facial and contextual features, followed by a fusion network to predict context-aware emotion recognition. <cit.> also proposes similar two-stream architecture in which one stream captures body features and the other stream captures scene context from the image. A fusion network consisting of both body and scene context features is jointly used to predict discrete emotion and VAD scores. They also create and publish the EMOTIC (from EMOTions In Context) Dataset and provide a CNN-based baseline system on the same dataset. <cit.> proposes context aware multi-modality-based network to predict emotion. They use several context-based modalities such as the face, gait analysis, semantic context, and depth maps to model socio-dynamic interactions among agents.
In this work, we get inspiration from <cit.> to design our lightweight network and baseline this work for the comparison of our proposed approach on the EMOTIC dataset. <cit.> uses resnet50 as a feature extractor for both body and scene context. In contrast, we use the pretrained mobilenetV3_large model for extracting body features and a lightweight composition-based aesthetics feature extractor (ReLIC <cit.>) to keep the model size small. This baseline <cit.> predicts emotion and VAD scores simultaneously. We, instead make sentiment and emotion predictions together and use sentiment predictions in the post-processing module to improve emotion prediction performance. <cit.> reports an AP score of 35.48 on the EMOTIC dataset. However, they have used several deep neural networks to derive multiple modalities-based contexts, making the overall architecture complex for training and inferencing. Architecture is also computationally intensive, with an overall model size of >500MB (includes face detection model size), making it unsuitable for low-resource devices such as smartphones.
We have used the aesthetics feature extractor as one of the branches for our SeLiNet model. Several previous works have shown that there is an explicit connection between image aesthetics and image emotion. <cit.> have used emotion-assisted image aesthetics identification using multitask learning. They demonstrate that there is a link between image emotion and aesthetics, and that image emotion features can aid in aesthetics assessment tasks and vice versa. <cit.> use image semantics, image aesthetics, and other visual features to effectively classify the emotion types.
§ DATASET
We train our proposed approach and report its performance on the EMOTIC <cit.> dataset. EMOTIC is a benchmark dataset for the context-aware emotion recognition task. It is created by taking images from the MSCOCO dataset <cit.>, the Ade20k dataset <cit.>, and images downloaded from the Google search engine, which makes the overall dataset very diverse and increases its complexity. The dataset contains roughly 23,000 images and around 33,000 annotated instances of emotions. The dataset provides bounding box information of the target person in each image, and each bounding box has a multi-label annotation with 26 possible emotion categories. The dataset also has annotations for the continuous emotion indexes Valence, Arousal, and Dominance (VAD). Emotions are quantified on these three indexes with scales ranging from 0 to 10. In our work, we have only considered discrete emotions as ground truth and instead added sentiment prediction as an additional task. Since the EMOTIC dataset does not provide sentiment labeling, we use a study by <cit.> to label the sentiment of each image based on the ground truth emotion. <cit.> shows each emotion can be categorized into positive, negative, and neutral sentiments. So, we label the possible sentiments for each image in the dataset. We use a multi-label strategy to label sentiment because each image can have multiple emotion labels and these labels can fall under more than one sentiment. Table <ref> shows a few examples of ground truth emotion labels and the corresponding one-hot encoding for the sentiment label. We report all our evaluation results on the test set of the EMOTIC dataset.
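As an illustration of the labeling procedure, the sketch below derives a multi-hot sentiment vector from a set of EMOTIC emotion labels; the partial emotion-to-sentiment assignment shown here is only an example and not the full mapping of <cit.>.

# Illustrative sketch; the emotion-to-sentiment assignment below is a partial example.
POSITIVE = {"Happiness", "Excitement", "Pleasure", "Affection"}
NEGATIVE = {"Anger", "Aversion", "Annoyance", "Sadness", "Fear"}
NEUTRAL = {"Engagement", "Anticipation", "Surprise"}

def sentiment_one_hot(emotion_labels):
    """Return a [positive, negative, neutral] multi-hot vector for one image."""
    vec = [0, 0, 0]
    for e in emotion_labels:
        if e in POSITIVE:
            vec[0] = 1
        elif e in NEGATIVE:
            vec[1] = 1
        elif e in NEUTRAL:
            vec[2] = 1
    return vec

print(sentiment_one_hot(["Happiness", "Engagement"]))  # -> [1, 0, 1]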
§ THE PROPOSED METHOD
This section details the motivation behind the problem and our proposed method.
§.§ Motivation and Problem Solving
Although multi-modality-based information <cit.> improves the performance of emotion task, the inclusion of these additional information makes the overall model architecture complex <cit.> and expensive in computation and memory and thus making it unfit for resource constraints systems such as mobile phones. Also, very few works focus on lightweight architectures suitable for the on-device system. Therefore, in this work, we attempted to develop a lightweight model for on-device inferencing. For our method, we derive the idea to employ image aesthetics for the emotion recognition task based on studies discussed in Section <ref>.
We model the problem in this paper as multitask learning, which predicts both emotions and sentiments. The main idea behind predicting sentiments as an auxiliary task is: 1) To provide an additional loss factor to the emotion task during training in case of incorrect sentiment prediction; 2) To use the sentiment score in post-processing to further enhance the main task (emotion task) performance. It is also possible to infer only the sentiments of an image using the proposed multitask as standalone predictions.
§.§ The Pipeline
§.§.§ Pre-processing
The EMOTIC training dataset is highly imbalanced, with certain classes such as engagement, happiness, excitement, etc. occurring more frequently than classes like anger, and aversion. We use standard data augmentation techniques for images such as HorizontalFlip, RandomBrightnessContrast, Posterize, HueSaturationValue, etc. to address the same.
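A possible Albumentations pipeline corresponding to the transforms listed above is sketched below; the probabilities and parameters are assumptions, since the text does not specify them.

# Sketch of the image augmentation pipeline; probabilities are assumptions.
import albumentations as A

train_transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.3),
    A.Posterize(p=0.2),
    A.HueSaturationValue(p=0.3),
])

# augmented_image = train_transform(image=image)["image"]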
§.§.§ SeLiNet Model
Figure <ref> shows our SeLiNet architecture and end-to-end pipeline. The proposed SeLiNet model consists of a body module, aesthetics module, and fusion module which are discussed below in detail.
Body feature extractor: This branch focuses on extracting facial and body features from the input image. Extracting these features is important because they provide crucial information about the emotional state of the person in the image. The branch is based on mobilenetV3_large, trained on the ImageNet dataset with the person class. We freeze the weights up to the second-to-last layer and take its output, a feature map of size (960*7*7), which is then fed to a self-attention network. The attention layer outputs an attentive vector of size 960, which is followed by a dense layer of 512 units whose output is then passed to the fusion model for further processing.
Aesthetics feature extractor: The aesthetics feature extractor uses the pretrained ReLIC architecture <cit.> as a backbone to extract image aesthetics features. The ReLIC architecture <cit.> is based on several convolutional neural networks and tries to learn both local and global features. Local features are used to understand image composition, whereas global features contribute toward overall image properties such as texture. We freeze the weights up to the second-to-last layer, take its output as a feature map of size (1280*7*7), and apply self-attention to get attentive feature vectors. The aesthetics branch outputs a 1280-size vector, which is followed by a dense layer of 512 units. The output of the dense layer is then fed to the fusion model.
Fusion model: The fusion layer concatenates the outputs of the body and aesthetics feature extractors to get a 1024-size fused vector. This concatenation layer is then followed by two dense layers of 512 and 256 units. The last 256-unit dense layer is followed by two task-specific dense layers of 128 units each, whose outputs are fed to the emotion and sentiment classification layers, respectively, for the predictions. We perform detailed hyperparameter tuning to choose the layers of the fusion model.
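A schematic PyTorch sketch of the fusion network is given below; the backbones and self-attention blocks are abstracted away as precomputed 960-d body and 1280-d aesthetics feature vectors, the layer sizes follow the text, and the remaining details (activations, absence of dropout) are assumptions.

# Schematic sketch of the fusion head; activations and other details are assumptions.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, n_emotions=26, n_sentiments=3):
        super().__init__()
        self.body_fc = nn.Sequential(nn.Linear(960, 512), nn.ReLU())
        self.aes_fc = nn.Sequential(nn.Linear(1280, 512), nn.ReLU())
        self.shared = nn.Sequential(
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        self.emotion_head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                          nn.Linear(128, n_emotions))
        self.sentiment_head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                            nn.Linear(128, n_sentiments))

    def forward(self, body_feat, aes_feat):
        # Concatenate the two 512-d branch outputs into a 1024-d fused vector
        z = torch.cat([self.body_fc(body_feat), self.aes_fc(aes_feat)], dim=1)
        z = self.shared(z)
        return self.emotion_head(z), self.sentiment_head(z)

# emotions, sentiments = FusionHead()(torch.randn(4, 960), torch.randn(4, 1280))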
§.§.§ Post-processing
We use a boosting algorithm to modify the confidence score of the emotion prediction based on the sentiment output. We consider the top 5 confidence scores by the emotion task E_i as E = E_1, E_2, E_3, E_4,E_5 and predictions by sentiment task S_j as S = S_1, S_2, S_3. Then the boosting equation is as follows.
E_i = sigmoid(E_i + S_j) , where E_i ∈ S_j
Our boosting factor S_j provides a relative boost to all emotions in E_i that are predicted correctly in accordance with sentiment output.
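The boosting step can be sketched as follows; the emotion-to-sentiment index map passed to the function is an assumption made for illustration.

# Sketch of the sentiment-based boosting of the top-5 emotion scores.
import torch

def boost(emotion_scores, sentiment_scores, emo_to_sent, top_k=5):
    """emotion_scores: (26,), sentiment_scores: (3,), emo_to_sent: LongTensor (26,)."""
    boosted = emotion_scores.clone()
    top_idx = torch.topk(emotion_scores, top_k).indices
    # Shift each top emotion score by the score of its sentiment, then re-apply the sigmoid
    boosted[top_idx] = torch.sigmoid(
        emotion_scores[top_idx] + sentiment_scores[emo_to_sent[top_idx]]
    )
    return boosted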
§ RESULTS AND EXPERIMENTS
This section describes the implementation details, comparison with previous works, and ablation study for our proposed approach.
§.§ Implementation details
We use the PyTorch framework for experimentation and model development. All training and testing are carried out on an Nvidia GPU GeForce GTX 1080 Ti 11178 MB card. The aesthetics branch takes the complete image of size 224 * 224 * 3 as input. The body branch, on the other hand, requires a 128 * 128 * 3 input image which is a cropped portion of the original image containing the whole body. We set the batch size to 26 and use stochastic gradient descent(SGD) optimizer in the training. The learning rate is initialized to 0.001 with a decay rate of 0.1. The model is trained for 25 epochs and is saved based on the best validation AP score.
Loss Function : Since our problem statement is a multi-class multi-label on the EMOTIC dataset, we experimented with standard binary cross entropy(BCE) loss and L2 loss (suggested by <cit.>). We observe that L2 loss gives better results than BCE.
loss = ∑_i (Y_i^actual − Y_i^pred)^2 · W_i,
where W_i are dynamic per-batch weights defined as
W_i = 1 / (number of true emotions present), if emotion i is true,
W_i = 0.0001, otherwise.
The combined loss of the emotion and sentiment tasks is referred to as the total loss. Based on the experiments, we set λ to 0.8, which gives better results.
L_total = λ L_emotion + (1- λ) L_sentiment
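A sketch of the dynamically weighted L2 loss and the combined objective is given below; tensor shapes and the per-sample weighting details are assumptions consistent with the formulas above.

# Sketch of the weighted L2 loss and the total multitask loss (lambda = 0.8).
import torch

def weighted_l2(y_true, y_pred):
    # y_true, y_pred: (batch, n_classes), with multi-hot ground truth in y_true
    n_true = y_true.sum(dim=1, keepdim=True).clamp(min=1.0)
    w = torch.where(y_true > 0, 1.0 / n_true, torch.full_like(y_true, 1e-4))
    return ((y_true - y_pred) ** 2 * w).sum()

def total_loss(emo_true, emo_pred, sent_true, sent_pred, lam=0.8):
    return lam * weighted_l2(emo_true, emo_pred) + (1 - lam) * weighted_l2(sent_true, sent_pred)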
§.§ Comparison with previous works
As shown in Table <ref>, <cit.> reports an AP score of 27.38 on the emotion recognition task using a CNN-based baseline system. We try to reproduce their work using the code available on Github [https://github.com/Tandon-A/emotic]. Using the same configuration discussed in the work, we get an AP score of 25.38 with a model size of nearly 191MB and an inference time of 16.2 ms on GPU. In contrast, using our approach, we achieve an AP score of 27.17 with a model size of only 28.03 MB, a reduction of 85.32% when compared to the baseline. Our approach is faster by nearly 78% compared to the baseline. In Table <ref>, we also provide a comparison of our proposed model with other works. Although <cit.> shows better performance, it involves more than three modalities as input to the model making the overall system complex in computation and training and cost-intensive in terms of memory and inference time. In comparison, our work provides for the lightweight model with only two modalities as input and gives comparable performance to the baseline.
For our sentiment sub-task on the EMOTIC dataset, we achieve an AP score of 93.53, 73.82, and 19.13 for Positive, Negative, and Neutral sentiment respectively. Positive and Negative sentiments report better performance compared to Neutral. It is due to the small number of emotions categorized in the neutral sentiment leading to a lower training sample.
§.§ On-device Performance
Our end-to-end on-device pipeline is evaluated on a Samsung S21 smartphone (Android SDK 30, 12 GB RAM, 256 GB ROM, octa-core Exynos 2100 chip). The SeLiNet model, quantized using the PyTorch framework, reports an on-device AP score of 26.42 on the same test set of the EMOTIC dataset with a model size of 11.34 MB and an on-device inference time of 65 ms. Although the AP score drops marginally by nearly 2.76% due to quantization, the size of the quantized model is reduced by more than 52%, which is a huge gain. Also, in comparison to the baseline system <cit.>, we achieve a comparable AP score with the quantized model while reducing the model size by nearly 93%.
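The exact quantization recipe is not specified in the text; the following is a hedged sketch of PyTorch post-training dynamic quantization of the linear layers, shown on a toy module rather than the actual SeLiNet model.

# Hedged sketch of post-training dynamic quantization; the toy module is a stand-in.
import torch

model = torch.nn.Sequential(torch.nn.Linear(1024, 512), torch.nn.ReLU(), torch.nn.Linear(512, 26))
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)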
§.§ Ablation Study
Table <ref> describes the ablation study regarding the different configurations of the architecture. The addition of the aesthetics branch results in a nearly 12% increase in AP score. Our proposed multi-task learning has improved the AP score by at least 3%. Thus, it demonstrates that all of the components are required to achieve the best results.
§.§ Error Analysis
We observe that the performance of our model slightly degrades for categories such as embarrassment, surprise, yearning, etc. compared to <cit.>. Possible reasons are: 1) the number of original training images is insufficient to learn diverse features; 2) owing to the complex nature of these emotions, additional cues may be required. <cit.> demonstrates that taking into account multiple modalities leads to better predictions.
The difficult nature of the emotion recognition task and the requirement of multiple contexts are supported by the study conducted by <cit.> where if only the facial context is considered out of three discussed contexts, then the AP score drops from 35.48 to 24.06. The baseline by <cit.> has used Resnet50 for both body and scene context feature extraction. But, when we replace Resnet50 with ResNet18, the AP score falls to 17.23, indicating that shallower models are insufficient for the task. Considering the above details, SeLiNet performs fairly well despite being lightweight.
§.§ Conclusion
We present a lightweight model SeLiNet and an end-to-end pipeline to predict the emotional states of people in images for on-device inferencing. Our proposed approach achieves an AP score of 27.17, which is comparable to the baseline system, with an 85% smaller memory footprint and much faster inference time. We also show that aesthetics assessment of the images can provide helpful information for understanding image emotion. Using multitask learning, we further improve our model results. In future work, we would like to capture additional contextual information, such as object detection and deeper semantic analysis, and study its impact on image emotion recognition tasks.
1 Joel Aronoff. How we recognize angry and happy emotion in people, places, and
things. Cross-cultural Research - CROSS-CULT RES, 40:83–105, 2006.
2 Osten Axelsson. Towards a psychology of photography: Dimensions underlying aesthetic appeal of photographs. Perceptual and motor skills, 105:411–34, 2007.
3L. F. Barrett. How emotions are made: The secret life of the brain. Houghton Mifflin Harcourt, 2017.
4 Lisa Barrett, Batja Mesquita, and Maria Gendron. Context in emotion perception.
Current Directions in Psychological Science, 20:286–290, 2011.
5 Qiuyu Chen, Wei Zhang, Ning Zhou, Peng Lei, Yi Xu, Yu Zheng, and Jianping Fan. Adaptive fractional dilated convolution network for image aesthetics assessment. 2020.
6 Yunlong Chen, Yuanyuan Pu, Zhengpeng Zhao, Dan Xu, Man, and Wenhua Qian. Image aesthetic assessment based on emotion-assisted multi-task learning network. Association for Computing Machinery, 2021.
7 Shichuan Du and Aleix Martinez. Compound facial expressions of emotion: From basic research to clinical applications. Dialogues in Clinical Neuroscience, 17:443–455, 2015.
8 Wadzani Gadzama, Bitrus Joseph, and Ngubdo Aduwamai. Global smartphone ownership, internet usage and their impacts on humans. 2019.
9 Magrizef Gasah and Aslina Baharum. A conceptual framework for emotional connection towards e-learning mobile application design for children. Journal of Software and Systems Development, 2018:1–17, 2018.
10 Alan Hanjalic. Hanjalic, a.: Extracting moods from pictures and sounds: towards truly personalized tv. ieee signal processing magazine 23, 90-100. Signal Processing Magazine, IEEE, 23:90 – 100, 2006.
11 Dhiraj Joshi, Ritendra Datta, Elena Fedorovskaya, Quang-Tuan Luong, J.Z. Wang, Jia Li, and Jiebo Luo. Aesthetics and emotions in images. Signal Processing Magazine, IEEE, 28:94 – 115, 2011.
12 Ronak Kosti, Jose M. Alvarez, Adria Recasens, and Àgata Lapedriza. Context based emotion recognition using emotic dataset. 2020.
13 Jiyoung Lee, Seungryong Kim, Sunok Kim, Jungin Park, and Kwanghoon Sohn. Context-aware emotion recognition networks. pages 10142–10151, 2019.
14 Zisheng Li, Jun-ichi Imai, and Masahide Kaneko. Facial-component-based bag of words and phog descriptor for facial expression recognition. The Journal of The Institute of Image Information and Television Engineers, 64:1353 – 1358, 2009.
15 Takahiko Masuda, Phoebe Ellsworth, Batja Mesquita, Janxin Leu, Shigehito Tanida, and Ellen Veerdonk. Placing the face in context: Cultural differences in the perception of facial emotion. Journal of Personality and Social Psychology, 94:365–381, 2008.
16 Trisha Mittal, Pooja Guhan, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, and Dinesh Manocha. Emoticon: Context-aware multimodal emotion recognition using frege’s principle. pages 14222–14231, 2020.
17 Costanza Navarretta. Individuality in Communicative Bodily Behaviours, pages 417–423. 2012.
18 Mihalis A. Nicolaou, Hatice Gunes, and Maja Pantic. Continuous prediction of spontaneous affect from multiple cues and modalities in valence-arousal space. T. Affective Computing, 2:92–105, 2011.
19 M Pantic and Léon Rothkrantz. Expert system for automatic analysis of facial expression. Image and Vision Computing, 18:881–905, 2000.
20 Gabriele Peters. Aesthetic primitives of images for visualization. pages 316–325, 2007.
21 Tianrong Rao, Min Xu, and Dezhi Xu. Learning multi-level deep representations for image emotion classification. 2016.
22 Dillip Kumar Rath and Ajit Kumar. Information privacy concern at individual, group, organization, and societal level - a literature review. 2021.
23 Ruthger Righart and Beatrice Gelder. Rapid influence of emotional scenes on encoding of facial expressions: An erp study. Social cognitive and affective neuroscience, 3:270–8, 2008.
24 Noora Sami and Mohamed E. The impact of privacy concerns and perceived vulnerability
to risks on users privacy protection behaviors on sns: A structural equation model.
International Journal of Advanced Computer Science and Applications, 7, 2016.
25 Caifeng Shan, Shaogang Gong, and Peter Mcowan. Facial expression recognition based on local binary patterns: A comprehensive study. Image and Vision Computing, 27: 803–816, 2009.
26 Janice Sipior, Burke Ward, and Linda Volonino. Privacy concerns associated with smartphone use. Journal of Internet Commerce, 13:177–193, 2014.
27 Mohammad Soleymani, Sadjad Esfeden, Yun Fu, and Maja Pantic. Analysis of eeg signals and facial expressions for continuous emotion detection. IEEE Transactions on Affective Computing, 7:1–1, 2015.
28 Hossein Talebi and Peyman Milanfar. Nima: Neural image assessment. IEEE Transactions on Image Processing, PP, 2017.
29 Matina Tsavli, Pavlos Efraimidis, Vasilios Katos, and Lilian Mitrou. Reengineering the user: Privacy concerns about personal data on smartphones. Information and Computer Security, 23:394–405, 2015.
30 Luc Van Gool and Beatrice Gelder. Recognizing emotions expressed by body pose: A biologically inspired neural model. Neural networks : the official journal of the International Neural Network Society, 21:1238–46, 2008.
31 Weining Wang and Qianhua He. A survey on emotional semantic image retrieval. pages 117–120, 2008.
32 Munan Xu, Jia-Xing Zhong, Yurui Ren, Shan Liu, and Ge Li. Context-aware attention network for predicting image aesthetic subjectivity. pages 798–806. ACM, 2020.
33 Shihao Xu, Jing Fang, Xiping Hu, Edith Ngai, Yi Guo, Victor Leung, Jun Cheng, and Bin Hu. Emotion recognition from gait analyses: Current research and future directions. 2020.
34 Minghui Zhang, Yumeng Liang, and Huadong Ma. Context-aware affective graph reasoning for emotion recognition. pages 151–156, 2019.
35 Lin Zhao, Meimei Shang, Fei Gao, Rongsheng Li, Fei Huang, and Jun Yu. Representation learning of image composition for aesthetic prediction. Computer Vision and Image Understanding, 199:103024, 2020.
36 Sicheng Zhao, Yue Gao, Xiaolei Jiang, Hongxun Yao, Tat-Seng Chua, and Xiaoshuai Sun. Exploring principles-of-art features for image emotion recognition. MM 2014 - Proceedings of the 2014 ACM Conference on Multimedia, pages 47–56, 2014.
37 Lin Zhong, Qingshan Liu, Peng Yang, Bo Liu, and Dimitris Metaxas. Learning multiscale active facial patches for expression analysis. volume 45, pages 2562–2569, 2012.
38 T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, et al., "Microsoft coco:Common objects in context", ECCV, pp. 740-755, 2014.
39 B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso and A. Torralba, "Semantic understanding of scenes through the ade20k dataset", arXiv preprint arXiv:1608.05442, 2016
|
http://arxiv.org/abs/2307.00427v1
|
20230701211747
|
Primal-Dual Gradient Methods for Searching Network Equilibria in Combined Models with Nested Choice Structure and Capacity Constraints
|
[
"Meruza Kubentayeva",
"Demyan Yarmoshik",
"Mikhail Persiianov",
"Alexey Kroshnin",
"Ekaterina Kotliarova",
"Nazarii Tupitsa",
"Dmitry Pasechnyuk",
"Alexander Gasnikov",
"Vladimir Shvetsov",
"Leonid Baryshev",
"Alexey Shurupov"
] |
math.OC
|
[
"math.OC"
] |
[1, *] Meruza Kubentayeva
[1] Demyan Yarmoshik
[1, 2] Mikhail Persiianov
[2, 3] Alexey Kroshnin
[1] Ekaterina Kotliarova
[1] Nazarii Tupitsa
[1] Dmitry Pasechnyuk
[1, 2, 3] Alexander Gasnikov
[1, 4] Vladimir Shvetsov
[4] Leonid Baryshev
[4] Alexey Shurupov
[1] Moscow Institute of Physics and Technology
[2] Institute for Information Transmission Problems RAS
[3] Higher School of Economics
[4] Russian University of Transport
[*] Corresponding author: [email protected]
Primal-Dual Gradient Methods for Searching Network Equilibria in Combined Models with Nested Choice Structure and Capacity Constraints
======================================================================================================================================
We consider a network equilibrium model (i.e. a combined model), which was proposed as an alternative to the classic four-step approach for travel forecasting in transportation networks. This model can be formulated as a convex minimization program. We extend the combined model to the case of the stable dynamics (SD) model in the traffic assignment stage, which imposes strict capacity constraints in the network. We propose a way to solve corresponding dual optimization problems with accelerated gradient methods and give theoretical guarantees of their convergence. We conducted numerical experiments with considered optimization methods on Moscow and Berlin networks.
Keywords: forecasting, combined model, trip distribution, traffic assignment, capacity constraints, gradient method
§ INTRODUCTION
One of the most popular approaches to travel forecasting in transportation networks is the four-step procedure <cit.>: sequential run of trip generation, trip distribution, modal split, and traffic assignment stages. However, this approach has a number of limitations, e.g. there is no convergence guarantee <cit.>.
To overcome this issue, there were proposed network equilibrium models (NE / combined models) which can be formulated as an optimization or, more generally, a variational inequality problem <cit.>. In particular, <cit.> reduced the problem of searching equilibrium in the case of one transport mode to a convex optimization problem, combining trip distribution and route assignment models. <cit.> made an extension to the multi-modal case, where destination and mode are chosen simultaneously with the same value of a calibration parameter. The first mathematical formulation of a network equilibrium model with hierarchical destination and mode choices was proposed by <cit.> — the approach was presented for modelling nested choice structure of trips using several modes (e.g. park'n ride trips). <cit.> formulated a nested combined model where mode choice is conditioned by destination choice and demonstrated its application for the Stockholm region. The recent works <cit.>, <cit.>, and <cit.> proposed the extensions of the combined models for the cases of modeling trip frequency, remote park-and-ride, and tourism demand, respectively.
Finding a solution in trip distribution and traffic assignment problems — whether they are considered separately in the four-step approach or combined into one network equilibrium problem — relies on numerical methods for convex optimization.
E.g., a classic choice for the traffic assignment problem (which is the most computationally expensive part) is the Frank–Wolfe algorithm <cit.>, and for the trip distribution problem it is the Sinkhorn algorithm <cit.>. A class of path-based algortihms can be an alternative to the link-based Frank–Wolfe algorithm for solving traffic assignment problem: <cit.>, <cit.>, <cit.>.
A popular choice for solving an optimization problem in the above-mentioned combined models is a partial linearization algorithm of <cit.> and its modifications for multi-modal and multi-user cases <cit.>.
Recently, in <cit.>, improvements of these algorithms were presented.
Also, in subsequent years there have been developed a lot of new optimization methods, in particular, accelerated gradient methods <cit.>, which can be applied to the described problems.
Another direction of research on travel modelling in recent years is related to capacitated transportation networks, which allow to overcome some limitations of the standard Beckmann traffic assignment model (<cit.>).
To the best of our knowledge, there are no works considering the application of accelerated gradient methods to combined models.
In this paper, we consider:
* an entropy-based trip distribution model with hierarchical choice structure <cit.>;
* the Beckmann traffic assignment model with inelastic demand <cit.>;
* the stable dynamics traffic assignment model, where resulting flow distribution satisfy the network’s capacity constraints <cit.>;
* an NE model combining all the models mentioned above.
In the last case, we consider the nested combined model proposed in <cit.>, where transit and road networks are independent, and the transit network has constant travel costs. We extend it to the case of the stable dynamics model for traffic assignment.
We employ accelerated primal-dual gradient methods to solve corresponding optimization problems and compare their performances to the classic Sinkhorn, Frank–Wolfe, and generalised Evans algorithms. Also, we provide theoretical guarantees for their convergence rate.
The main contributions of the paper are the following:
* We propose a way to solve the dual problem of the nested combined model of <cit.> with a universal accelerated gradient method USTM <cit.>;
* We extend the nested combined model of <cit.> to the case of capacitated networks: namely, we propose a way to solve the dual problem for searching equilibrium in combined trip distribution model with the nested choice structure and the stable dynamics traffic assignment model;
* We provide theoretical upper bounds on the complexity of searching network equilibrium by the USTM algorithm.
* We conducted numerical experiments comparing different algorithms on Moscow and Berlin transportation networks.
The paper is organized as follows. In Section <ref>, we give a general problem statement for a combined trip distribution, modal split, and traffic assignment model. In Section <ref>, we describe the primal-dual accelerated method to solve the NE problem and provide its convergence analysis.
In Sections <ref> and <ref>, we describe optimization algorithms that we consider for separate traffic assignment and trip distribution models. Section <ref> presents numerical experiments conducted on Moscow and Berlin transportation networks.
§ PROBLEM STATEMENT
We start with the description of the Beckmann and the stable dynamics models for searching the road network user equilibrium. Similarly to <cit.>, we assume the road and the transit networks are independent, and there is no congestion effects in the transit network (its travel costs are constant and defined as the costs of the shortest routes). Then, in Subsection <ref>, we describe the trip distribution model with a hierarchical choice structure of destination and travel mode (by car, public transport, or on foot). And finally, in <ref>, we consider the combined trip distribution-modal split-assignment problem and formulate its dual problem.
§.§ Route Assignment Models
Let the urban road network be represented by a directed graph 𝒢 = ( 𝒱, ℰ ), where vertices 𝒱 correspond to intersections or centroids <cit.> and edges ℰ correspond to roads, respectively.
Suppose we are given the travel demands: namely, let d_ij(veh/h) be a trip rate from origin i to destination j. We denote by P_ij the set of all simple paths from i to j. Respectively, P = ⋃_(i,j) ∈ OD P_ij is the set of all possible routes for all origin-destination pairs OD.
Agents traveling from node i to node j are distributed among paths from P_ij, i.e. for any p ∈ P_ij there is a flow x_p ∈ ℝ_+ along the path p, and ∑_p ∈ P_ij x_p = d_ij.
Flows from origin nodes to destination nodes create the traffic in the entire network 𝒢, which can be represented by an element of
X = X(d) = {x ∈ ℝ_+^|P| : ∑_p ∈ P_ij x_p = d_ij, (i, j) ∈ OD }.
Note that the dimension of X can be extremely large: e.g. for n × n Manhattan network log |P| = Ω(n).
To describe a state of the network we do not need to know an entire vector x, but only flows on arcs:
f_e(x) = ∑_p ∈ Pδ_e p x_p for e ∈ℰ,
where δ_e p = 1{e ∈ p}. Let us introduce a matrix Θ such that Θ_e, p = δ_e p for e ∈ℰ, p ∈ P, so in vector notation we have f = Θ x. To describe an equilibrium we use both path- and link-based notations (x, t) or (f, t).
Beckmann model <cit.>. One of the key ideas behind the Beckmann model is that the cost (e.g. travel time, gas expenses, etc.) of passing a link e is the same for all agents and depends solely on the flow f_e along it. In what follows, we denote this cost for a given flow f_e by t_e = τ_e(f_e). In practice the BPR functions are usually employed <cit.>:
τ_e(f_e) = t̅_e (1 + ρ ( f_e / f̅_e )^(1/μ)),
where t̅_e are free flow times, and f̅_e are road capacities of a given network's link e. We take these functions with parameters ρ = 0.15 and μ = 0.25.
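For concreteness, a small numpy sketch of the BPR link cost τ_e and of its integral (denoted σ_e below) is given here; t_bar and f_bar are per-edge arrays of free flow times and capacities.

# Sketch of the BPR cost and its integral; vectorized over all edges.
import numpy as np

rho, mu = 0.15, 0.25

def tau(f, t_bar, f_bar):
    return t_bar * (1.0 + rho * (f / f_bar) ** (1.0 / mu))

def sigma(f, t_bar, f_bar):
    # integral of tau from 0 to f
    return t_bar * f + t_bar * rho * f_bar / (1.0 + 1.0 / mu) * (f / f_bar) ** (1.0 + 1.0 / mu)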
Another essential point is a behavioral assumption on agents called the first Wardrop's principle: we suppose that each of them knows the state of the whole network and chooses a path p minimizing the total cost
T_p(t) = ∑_e ∈ p t_e.
The cost functions are supposed to be continuous, non-decreasing, and non-negative. Then (x^*, t^*), where t^* = (t_e^*)_e ∈ℰ, is an equilibrium state, i.e. it satisfies conditions
t_e^* = τ_e(f_e^*), where f^* = Θ x^*,
x^*_p_ij > 0 ⟹ T_p_ij(t^*) = T_ij(t^*) = min_p ∈ P_ij T_p(t^*),
if and only if x^* is a minimum of the potential function:
Ψ(x) = ∑_e ∈ ℰ ∫_0^f_e τ_e(z) dz (each integral is denoted σ_e(f_e)) → min_{f = Θx, x ∈ X}
⟺ Ψ(f) = ∑_e ∈ ℰ σ_e(f_e) → min_{f = Θx: x ∈ X},   (B)
and t_e^* = τ_e(f_e^*) <cit.>.
Another way to find an equilibrium numerically is by solving a dual problem. We can construct it according to Theorem 4 from <cit.>, the solution of which is t^*:
Q(t) = ∑_(i,j) ∈ OD d_ij T_ij(t) − ∑_e ∈ ℰ σ_e^*(t_e) → max_{t ≥ t̅},   (DualB)
where the subtracted sum is denoted h(t) and
σ_e^*(t_e) = sup_{f_e ≥ 0} { t_e f_e − σ_e(f_e) } = f̅_e ( (t_e − t̅_e)/(t̅_e ρ) )^μ (t_e − t̅_e)/(1 + μ)
is the conjugate function of σ_e(f_e), e ∈ ℰ.
When we search for the solution to this problem numerically, at every step of the applied method we can reconstruct the primal variable f from the current dual variable t: f ∈ ∂∑_(i,j) ∈ OD d_ij T_ij(t). Then we can use the duality gap — which is always nonnegative — to estimate the method's accuracy:
Δ(f, t) = Ψ(f) - Q(t).
It vanishes only at the equilibrium (f^*, t^*).
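A hedged numpy/scipy sketch of this reconstruction (an all-or-nothing assignment of the demands to current shortest paths) and of the resulting duality gap for the Beckmann model is given below; it assumes a simple directed graph without parallel edges, a dual-feasible t ≥ t̅, and the BPR expressions for σ_e and σ_e^*.

# Sketch: primal reconstruction from dual times t and the Beckmann duality gap.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def duality_gap(t, edges, n_nodes, demands, t_bar, f_bar):
    """edges: list of (u, v) with costs t[e]; demands: dict {(i, j): d_ij}."""
    rho, mu = 0.15, 0.25
    rows, cols = zip(*edges)
    graph = csr_matrix((t, (rows, cols)), shape=(n_nodes, n_nodes))
    edge_index = {(u, v): e for e, (u, v) in enumerate(edges)}

    origins = sorted({i for i, _ in demands})
    dist, pred = dijkstra(graph, indices=origins, return_predecessors=True)

    f = np.zeros(len(edges))
    demand_cost = 0.0                          # sum of d_ij * T_ij(t)
    for (i, j), d in demands.items():
        oi = origins.index(i)
        demand_cost += d * dist[oi, j]
        v = j
        while v != i:                          # walk the shortest-path tree back to the origin
            u = pred[oi, v]
            f[edge_index[(u, v)]] += d
            v = u

    sigma = t_bar * f + t_bar * rho * f_bar / (1 + 1 / mu) * (f / f_bar) ** (1 + 1 / mu)
    sigma_star = f_bar * ((t - t_bar) / (t_bar * rho)) ** mu * (t - t_bar) / (1 + mu)
    return sigma.sum() - (demand_cost - sigma_star.sum())   # Psi(f) - Q(t) >= 0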
Stable dynamics model.
<cit.> proposed an alternative model called the stable dynamics model, which takes an intermediate place between static and dynamic network assignment models. Namely, its equilibrium can be interpreted as the stationary regime of some dynamic process. Its key assumption is that we no longer introduce a complex dependence of the travel cost on the flow (as in the standard static models), but only pose capacity constraints, i.e. the flow value on each link imposes the feasible set of travel times
τ_e(f_e) =
t̅_e, if 0 ≤ f_e < f̅_e,
[t̅_e, ∞], if f_e = f̅_e,
+∞, if f_e > f̅_e.
Unlike in the Beckmann model, there is no one-to-one correspondence between equilibrium travel times and flows on the links of the network. There are examples in <cit.> illustrating the difference. Also, one can find in <cit.> a detailed comparison of equilibria in these two models conducted for large and small networks.
Hence, an equilibrium state (x^*, t^*) of the stable dynamics model satisfies the next conditions:
t_e^* ∈τ_e(f_e^*),
x^*_p_ij > 0 ⟹ T_p_ij(t^*) = T_ij(t^*).
The above formula can be reformulated in terms of an optimization problem. The pair (f^*, t^*) is an equilibrium if and only if it is a solution of the saddle-point problem
∑_e ∈ ℰ [ t_e f_e − (t_e − t̅_e) f̅_e ] → min_{f = Θx: x ∈ X} max_{t_e ≥ t̅_e},   (SaddleSD)
where its primal problem is
Ψ(f) = ∑_e ∈ ℰ f_e t̅_e → min_{f = Θx: x ∈ X, f_e ≤ f̅_e},   (SD)
and its dual problem is
Q(t) = ∑_(i,j) ∈ OD d_ij T_ij(t) − ⟨t − t̅, f̅⟩ → max_{t_e ≥ t̅_e},   (DualSD)
where the term ⟨t − t̅, f̅⟩ is denoted h(t).
In contrast with the Beckmann model, the equilibrium state in the stable dynamics model is defined by the pair (f^*, t^*) (in particular, it differs from the system optimum (f^*, t̅) of the model only in the time values).
In both cases the dual problem has form
Q(t) = ∑_(i,j) ∈ OD d_ij T_ij(t) - h(t) ⟶max_t ≥t̅.
The optimization problem is convex, non-smooth and composite.
§.§ Trip Distribution with Modal Split (D-MS)
Let us further assume that there are several trip purposes (demand layers), travel modes (transportation modes), and agents types. We use the logit model with calibration parameters α_am, β_am corresponding to choices of travel mode m by agents of the type a.
Further, we consider the case when α_am values are the same for all travel modes of agent type a:
α_am ≡ α_a.
Necessity of this condition will be explained below.
For example, if we want to make travelling by car (travel mode m_1) unavailable for non-car-owners a_1, we can set β_a_1 m_1 = ∞ to get zero trips d^a_1 m_1 = 0. Thus, for every agent type a we can implicitly set its group (nest) of available travel modes.
To define destination choice model, we use the entropy-based trip distribution model of <cit.>. For every trip purpose r (e.g., home-work, home-other) we define calibration parameter γ_r. This parameter defines the sensitivity of agents with the trip purpose r to trip length.
According to <cit.>, <cit.>, we consider the following problem:
∑_i,j,r,a,m d_ij^ram T_ij^m + ∑_i,j,r,a (1/γ_r) d_ij^ra ln d_ij^ra + ∑_i,j,r,a,m (1/α_a) d_ij^ram ( ln(d_ij^ram/d_ij^ra) + β_am ) → min_{d ∈ Π'(l, w)},   (P1)
where the last two sums are denoted H(d) and
Π'(l, w) = { d ≥ 0: ∑_j,m d_ij^ram = l_i^ra, ∑_i,a,m d_ij^ram = w_j^r },
d_ij^ram is the number of trips from zone i to j by travel mode m made by agents of type a with trip purpose r, and d_ij^ra = ∑_m d_ij^ram; l_i^ra is the number of trips produced in zone i by agents of type a with trip purpose r; w_j^r is the number of trips attracted to zone j for the trip purpose r.
This is the combined trip distribution-modal split (D-MS) problem, where the choice structure is nested: travel mode choice is conditioned by destination choice (<cit.>). If γ_r and α_a are equal, then (<ref>) reduces to the problem that also corresponds to D-MS model with simultaneous choices of destination and travel mode with the same calibration parameters <cit.>.
For fixed values d_ij^ra, it is straightforward to check that the optimal d_ij^ram satisfy the following relation:
d_ij^ram / d_ij^ra = exp(−α_a T_ij^m − β_am) / ∑_m' exp(−α_a T_ij^m' − β_am'),
where T_ij^am = T_ij^m + β_am/α_a, i.e., the modal split corresponds to the logit model. Moreover,
min_{d_ij^ram ≥ 0: ∑_m d_ij^ram = d_ij^ra} ( ∑_i,j,r,a,m d_ij^ram T_ij^am + ∑_i,j,r,a,m (1/α_a) d_ij^ram ln(d_ij^ram/d_ij^ra) )
= ∑_i,j,r,a d_ij^ra ( −(1/α_a) ln ∑_m exp(−α_a T_ij^am) ),
where T_ij^a = −(1/α_a) ln ∑_m exp(−α_a T_ij^am) is the composite travel cost for agents of type a.
Substituting the optimal d_ij^ram given above, we reduce the problem (<ref>) to
E(d, T) = ∑_i,j,r,a d_ij^ra T_ij^a + ∑_i,j,r,a (1/γ_r) d_ij^ra ln d_ij^ra → min_{d ∈ Π(l, w)},   (E)
where
Π(l,w) = { d ≥ 0: ∑_j d_ij^ra = l_i^ra, ∑_i,a d_ij^ra = w_j^r },
and T_ij^a = −(1/α_a) ln ∑_m exp(−α_a T_ij^m − β_am).
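For illustration, the logit modal split and the composite (log-sum-exp) cost for a fixed origin-destination pair and agent type can be computed as in the following sketch.

# Sketch of the composite cost and logit modal split for one (i, j, a) triple.
import numpy as np
from scipy.special import logsumexp

def composite_cost_and_split(T_modes, alpha_a, beta_am):
    """T_modes, beta_am: arrays over travel modes for a fixed (i, j, a)."""
    u = -alpha_a * T_modes - beta_am            # mode utilities
    split = np.exp(u - logsumexp(u))            # P(m | i, j, a), numerically stable softmax
    T_a = -logsumexp(u) / alpha_a               # composite travel cost T_ij^a
    return T_a, split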
Let us derive its dual problem. In our problem statement, the system of constraints Π'(l, w) is consistent: ∑_i,a l_i^ra = ∑_j w_j^r. Therefore <cit.>, we can introduce the tautological constraint
∑_i,j,a d_ij^ra = ∑_i,a l_i^ra = ∑_j w_j^r = N_r.
We will utilize the tautological constraint (<ref>) to obtain a dual function with a bounded subgradient norm.
min_{d ∈ Π(l, w)} [ ∑_i,j,r,a d_ij^ra T_ij^a + ∑_i,j,r,a (1/γ_r) d_ij^ra ln d_ij^ra ]
= min_{d ≥ 0: ∑_i,j,a d_ij^ra = N_r} max_{λ ≥ 0} ∑_r [ (1/γ_r) ∑_i,j,a d_ij^ra ln d_ij^ra + ∑_i,j,a d_ij^ra T_ij^a
+ ∑_i,a λ^l_rai ( ∑_j d_ij^ra − l_i^ra ) + ∑_j λ^w_rj ( ∑_i,a d_ij^ra − w^r_j ) ]
= max_{λ ≥ 0} ∑_r min_{d ≥ 0: ∑_i,j,a d_ij^ra = N_r} [ (1/γ_r) ∑_i,j,a d_ij^ra ln d_ij^ra + ∑_i,j,a d_ij^ra T_ij^a
+ ∑_i,a λ^l_rai ( ∑_j d_ij^ra − l_i^ra ) + ∑_j λ^w_rj ( ∑_i,a d_ij^ra − w^r_j ) ]
= −min_{λ ≥ 0} φ(λ^l, λ^w, T),
where by φ(λ^l, λ^w, T) we denoted the negative of the dual function.
Let y be the dual variable for the tautological constraint ∑_i,j,a d^ra_ij = N_r.
Then, taking the gradient with respect to d, we get one of the optimality conditions for the inner minimization problem:
(1/γ_r)(ln d_ij^ra + 1) + T_ij^a + λ^l_rai + λ^w_rj + y = 0,
therefore
d_ij^ra = exp(−1 − γ_r (T_ij^a + λ^l_rai + λ^w_rj + y)).
By choosing y such that d satisfies ∑_i,j,a d_ij^ra = N_r, we obtain
d_ij^ra(λ^l_r, λ^w_r, T) = N_r exp(−γ_r (T_ij^a + λ^l_rai + λ^w_rj)) / ∑_i,j,a exp(−γ_r (T_ij^a + λ^l_rai + λ^w_rj)).
Substituting this into (<ref>) yields
φ(λ^l, λ^w, T) = ∑_r [ (N_r/γ_r) ln( (1/N_r) ∑_i,j,a exp(−γ_r (T_ij^a + λ^l_rai + λ^w_rj)) ) + ∑_i,a λ^l_rai l_i^ra + ∑_j λ^w_rj w^r_j ].
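A sketch of evaluating d(λ) and φ for a single trip purpose r is given below; for brevity a single agent type is assumed, so the arrays are indexed only by origins and destinations.

# Sketch of the trip-distribution dual function and the corresponding d(lambda).
import numpy as np
from scipy.special import logsumexp

def phi_r(lam_l, lam_w, T, l, w, gamma_r, N_r):
    """lam_l: (n_origins,), lam_w: (n_dest,), T: (n_origins, n_dest) composite costs."""
    z = -gamma_r * (T + lam_l[:, None] + lam_w[None, :])
    log_Z = logsumexp(z)
    d = N_r * np.exp(z - log_Z)                              # d_ij^r(lambda)
    value = N_r / gamma_r * (log_Z - np.log(N_r)) + lam_l @ l + lam_w @ w
    return value, d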
§.§ Combined distribution–modal split–assignment problem (D-MS-A)
Now, we combine the road, the transit, and the pedestrian networks into one multi-modal network, which we denote again by 𝒢 = ( 𝒱, ℰ ).
Slightly abusing notations, in the same way as in Subsection <ref> we can define the set of path flows X(d) corresponding to an interzonal trip matrix d ∈Π'(l, w), and link flows f_e^ram, f_e^m = ∑_r,a f_e^ram, and f_e = ∑_m f_e^m.
According to <cit.>, the combined distribution–modal split–assignment problem can be formulated as follows:
P_3(f,d) = Ψ(f) + H(d) → min_{f = Θx, x ∈ X(d), d ∈ Π'(l,w)},   (P3)
where
Ψ(f) = ∑_e ∈ ℰ ( σ_e(f_e) + ∑_m c_e^m f_e^m ).
Similarly to Subsection <ref> we obtain from (<ref>) the saddle-point problem
S_3'(d, t) = ∑_i,j,r,a,m d_ij^ram T_ij^m(t) − h(t) + H(d) → min_{d ∈ Π'(l, w)} max_{t ≥ t̅},   (S3')
where T_ij^m(t) is the minimal cost of the path from i ∈ O to j ∈ D with the link costs t_e + c_e^m.
According to Subsection <ref>, the above problem reduces to
S_3(d,t) = ∑_i,j,r,a d_ij^ra T_ij^a(t) + ∑_i,j,r,a1/γ_r d_ij^ra ln d_ij^ra - h(t) = E(d, T(t)) - h(t) →min_d ∈Π(l,w)max_t ≥t̅,    (S3)
where T_ij^a(t) = - 1/α_aln(∑_m exp(- α_a T_ij^m(t) - β_am) ).
Respectively, the dual problem is
D_3(λ, t) = -φ(λ^l, λ^w, T(t)) - h(t)
⟶max_t ≥t̅, λ^l, λ^w.    (D3)
Thus, there are several ways to formulate an optimization problem. In this paper, we consider the following particular formulation of the problem and further provide convergence analysis of the accelerated gradient method application to it:
D'_3 (t) = - φ_3(t) - h(t)
⟶max_t ≥t̅,    (D3')
where
φ_3(t) = min_λφ(λ, T(t)) = - min_d ∈Π(l,w) E(d, T(t)).
§ DUAL APPROACH FOR SOLVING THE COMBINED MODEL
In Subsection <ref>, we describe the universal method of similar triangles (USTM) for solving the dual problem (<ref>) of the described combined model, and in Subsection <ref> we provide its convergence analysis.
§.§ Dual method for NE problem
A popular approach for searching equilibrium in combined models is the partial linearization algorithm of <cit.> and its modifications for multi-modal and multi-user cases <cit.>. The approach is further developed in <cit.> by incorporating better line-search procedures. Note that the algorithm can be viewed as partly dual, because it is formulated in terms external to the primal problem: it includes cost matrices T_ij, which are the dual variables of the saddle-point problem (<ref>) (or (<ref>)) or the dual problem (<ref>). But still, the algorithm is essentially primal, since it optimizes (<ref>) by flow and trip distribution pair.
Here we propose an alternative approach based on solving the dual problem (<ref>) with the universal method of similar triangles (USTM), and afterwards we prove the convergence rates for it.
Algorithm <ref> provides the pseudocode of USTM with an inexact oracle and the euclidean prox-structure.
Here we used the following notations:
ϕ_0(t) = 1/2‖t - t^0‖_2^2,
ϕ_k+1(t) = ϕ_k(t) + α_k+1[Φ̃(y^k+1) + ⟨∇̃Φ(y^k+1), t - y^k+1⟩ + h(t) ].
Note that we did not specify the stopping criterion as it can be different for different models <cit.>.
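As an illustration only, the following sketch shows one possible realisation of the USTM iteration with the Euclidean prox-structure, specialised to the Stable Dynamics composite term h(t) = ⟨f̅, t - t̅⟩ on the set {t ≥ t̅}, so that the minimisation of ϕ_k+1 has a closed form. The oracle interface and all names are our assumptions (a sketch of the scheme, not the implementation used in the experiments).

```python
import numpy as np

def ustm_sd(oracle, t_bar, f_bar, L0=1.0, eps=1e-3, n_iters=100):
    """Minimise Phi(t) + <f_bar, t - t_bar> over {t >= t_bar} with USTM (Euclidean prox).
    oracle(t) must return (Phi_tilde(t), grad_tilde(t)), an inexact first-order oracle."""
    t = u = t_bar.copy()
    A, L = 0.0, L0
    grad_sum = np.zeros_like(t_bar)              # sum_j alpha_j * grad Phi(y^j)
    for _ in range(n_iters):
        L = L / 2.0
        while True:
            alpha = (1.0 + np.sqrt(1.0 + 4.0 * L * A)) / (2.0 * L)  # root of L*a^2 = A + a
            A_new = A + alpha
            y = (alpha * u + A * t) / A_new
            phi_y, g_y = oracle(y)
            # closed-form minimiser of phi_{k+1} over {t >= t_bar}, using t^0 = t_bar
            u_new = np.maximum(t_bar, t_bar - (grad_sum + alpha * g_y) - A_new * f_bar)
            t_new = (alpha * u_new + A * t) / A_new
            phi_t, _ = oracle(t_new)
            rhs = phi_y + g_y @ (t_new - y) + 0.5 * L * np.sum((t_new - y) ** 2) \
                  + eps * alpha / (2.0 * A_new)
            if phi_t <= rhs:                     # inner (inexact) stopping criterion
                break
            L *= 2.0
        A, u, t = A_new, u_new, t_new
        grad_sum = grad_sum + alpha * g_y
    return t
```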
To find a network equilibrium in the D-MS-A model, we apply USTM to minimize the composite objective -D'_3(t) = φ_3(t) + h(t) in (<ref>); thus we set Φ(t) := φ_3(t) in Algorithm <ref>.
Recall that
φ_3(t) = - min_d ∈Π(l, w) E(d, T(t)) .
Note that at each iteration we need to compute φ_3 and ∇φ_3 with travel costs y^k+1, t^k+1.
Since this itself is done by an iterative procedure, we cannot expect to find the exact solution of the subproblem.
Instead, we use the following inexact oracle for φ_3:
φ̃_3(t) = φ_3,δ(t) = - E(d̃(t), T(t)), ∇̃φ_3(t) = ∇_δφ_3(t) = - ∇_t E(d̃(t), T(t)),
where δ≥ 0 and d̃(t) = d_δ(t) ∈Π(l, w) (see Subsection <ref>) is a δ-solution of min_d E(d, T(t)), i.e.
E(d_δ(t), T(t)) ≤min_d ∈Π(l, w) E(d, T(t)) + δ.
Recall that E(d, T(t)) is concave w.r.t. t, and its superdifferential ∂_t E(d, T(t)) is given by
∂_t E(d, T(t)) = ∑_i,j,r,a d_ij^ra∂ T_ij^a(t) .
Further, d_ij^ra∂ T^a_ij(t) = ∑_m d_ij^ram∂ T_ij^m(t), and
∂ T_ij^m(t) = ∂min_a_p^m ∈ P_ij^m⟨ t, a_pm⟩
= Conv{a_p^m ∈ P_ij^m : ⟨ t, a_pm⟩ = T_ij^m(t)} ,
where a_p^m ∈{0,1}^|ℰ| is a binary vector encoding a path p for the travel mode m. Note that several shortest paths may exist.
Finally, we get that
∂_t E(d, T(t)) = ∑_i,j,r,a,m d_ij^ramConv{a_p^m ∈ P_ij^m : ⟨ t, a_pm⟩ = T_ij^m(t)},
and any supergradient ∇_t E(d, T(t)) is a vector of link flows by shortest paths, corresponding to the trip distribution d.
Since we solve the dual problem (<ref>), we need a way to recover an approximate solution (d, f) of the primal problem (<ref>).
For any t ≥ 0, given d̃_ij^ra(t) and T_ij^m(t), define d'(t) ∈Π'(l, w) by formula (<ref>). Then we reconstruct a full correspondence matrix after K iterations of Algorithm <ref> as
d̂^K = 1/A_K∑_k=1^K α_k d'(y^k) ∈Π'(l, w).
Corresponding link flows can be recovered as <cit.>
[f̂^K]_e^m = 1/A_K∑_k=1^K α_k [f^k]_e^m .
where f^k are link flows by shortest paths for times y^k and correspondence matrix d'(y^k), such that
∑_m [f^k]_e^m = - [∇̃φ_3(y^k)]_e, e ∈ℰ.
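A minimal sketch of this primal recovery, assuming the per-iteration correspondences d'(y^k) and shortest-path link flows f^k have been stored (names are illustrative):

```python
import numpy as np

def recover_primal(alphas, d_list, f_list):
    """Weighted averages d_hat^K and f_hat^K from the USTM iterates.
    alphas: list of alpha_k; d_list: correspondence arrays d'(y^k);
    f_list: link-flow arrays f^k (flows on shortest paths at y^k)."""
    A_K = float(np.sum(alphas))
    d_hat = sum(a * d for a, d in zip(alphas, d_list)) / A_K
    f_hat = sum(a * f for a, f in zip(alphas, f_list)) / A_K
    return d_hat, f_hat
```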
§.§ Convergence Analysis
Below we derive some properties of the problem and then use them to apply the USTM convergence theorem, which gives us the convergence rate of our dual algorithm for searching equilibria in the combined model.
The next lemma is a straightforward counterpart of formula (5) in <cit.>, following from (<ref>).
For any d, d' ∈Π(l, w), t, t' ≥ 0, and supergradients ∇_t E(d, T(t)), ∇_t E(d', T(t')) it holds that ‖∇_t E(d, T(t)) - ∇_t E(d', T(t'))‖_2 ≤ M = √(2 H) N, where H ≤ |𝒱|-1 is the maximum simple path length in the network, and N = ∑_r N_r is the total number of active agents.
Typically (e.g. for a Manhattan network) H = O(√(|𝒱|)).
Recall the following standard result concerning inexact oracles.
For any t, t' ≥ 0 it holds that
φ̃_3(t') + δ≥φ_3(t')
≥φ̃_3(t) + ⟨∇̃φ_3(t), t' - t ⟩.
Since E(d, T(t)) is concave w.r.t. t,
E(d̃(t), T(t')) ≤ E(d̃(t), T(t)) + ⟨∇_t E(d̃(t), T(t)), t' - t ⟩
= - φ̃_3(t) - ⟨∇̃φ_3(t), t' - t ⟩.
Thus,
φ_3(t') ≥ - E(d̃(t), T(t'))
≥φ̃_3(t) + ⟨∇̃φ_3(t), t' - t ⟩.
The claim follows.
The following bound is the main tool to prove the convergence rate. It is an analogue of Lemma 2 in <cit.> adapted to the case of an inexact oracle.
Assume at the k-th iteration of Algorithm <ref> we call an inexact oracle (Φ̃, ∇̃Φ) satisfying
Φ̃(t') + δ_k ≥Φ(t') ≥Φ̃(t) + ⟨∇̃Φ(t), t' - t ⟩ ∀ t, t' ∈dom h,
with δ_k = εα_k+1/(4 A_k+1). Then for any k > 0
Φ(t^k) + h(t^k) ≤1/A_kϕ_k(u^k) + 3ε/4≤1/A_kϕ_k(t) - 1/(2 A_k)‖t - u^k‖_2^2 + 3ε/4 ∀ t ∈dom h.
Moreover,
Φ(t^k) + h(t^k) + 1/(2 A_k)‖t^* - u^k‖_2^2 ≤Φ(t^*) + h(t^*) + 1/(2 A_k)‖t^* - t^0‖_2^2,
where t^* = argmin_t {Φ(t) + h(t)}.
We are going to prove by induction that
A_k (Φ(t^k) + h(t^k)) ≤ϕ_k(u^k) + A_k · 3ε/4.
Note that since ϕ_k is 1-strongly convex,
ϕ_k(u^k) ≤ϕ_k(t) - 1/2‖t - u^k‖_2^2 ∀ t ∈dom h.
The inner stopping criterion yields that
Φ̃(t^k+1) ≤Φ̃(y^k+1) + ⟨∇̃Φ(y^k+1), t^k+1 - y^k+1⟩ + L_k+1/2‖t^k+1 - y^k+1‖_2^2 + εα_k+1/(2 A_k+1)
= Φ̃(y^k+1) + α_k+1/A_k+1[⟨∇̃Φ(y^k+1), u^k+1 - u^k ⟩ + ε/2] + 1/(2 A_k+1)‖u^k+1 - u^k‖_2^2.
By the assumptions of the lemma,
Φ̃(y^k+1) ≤Φ(t^k) - ⟨∇̃Φ(y^k+1), t^k - y^k+1⟩
= Φ(t^k) + α_k+1/A_k+1⟨∇̃Φ(y^k+1), u^k - t^k⟩ ,
thus
A_k+1Φ̃(t^k+1) ≤ A_k+1Φ̃(y^k+1) + α_k+1[⟨∇̃Φ(y^k+1), u^k+1 - u^k ⟩ + ε/2] + 1/2‖u^k+1 - u^k‖_2^2
≤ A_k Φ(t^k) + α_k+1[Φ̃(y^k+1) + ⟨∇̃Φ(y^k+1), u^k+1 - u^k + A_k/A_k+1 (u^k - t^k)⟩ + ε/2]
+ 1/2‖u^k+1 - u^k‖_2^2
= A_k Φ(t^k) + α_k+1[Φ̃(y^k+1) + ⟨∇̃Φ(y^k+1), u^k+1 - y^k+1⟩ + ε/2] + 1/2‖u^k+1 - u^k‖_2^2 .
Now, using the convexity of h and the definition of δ_k we obtain that
A_k+1 [Φ(t^k+1) + h(t^k+1)]
≤ A_k+1[Φ̃(t^k+1) + δ_k] + A_k h(t^k) + α_k+1 h(u^k+1)
= A_k [Φ(t^k) + h(t^k)] + α_k+1[Φ̃(y^k+1) + ⟨∇̃Φ(y^k+1), u^k+1 - y^k+1⟩ + h(u^k+1) + 3ε/4]
+ 1/2‖u^k+1 - u^k‖_2^2
≤ϕ_k(u^k+1) + α_k+1[Φ̃(y^k+1) + ⟨∇̃Φ(y^k+1), u^k+1 - y^k+1⟩ + h(u^k+1)] + A_k+1· 3ε/4
= ϕ_k+1(u^k+1) + A_k+1· 3ε/4 .
The last claim of the lemma follows from the inequality
Φ(t^*) ≥Φ̃(t) + ⟨∇̃Φ(t), t^* - t ⟩ ,
which implies
1/A_kϕ_k(t^*) ≤Φ(t^*) + h(t^*) + 1/(2 A_k)‖t^* - t^0‖_2^2.
Now we are ready to prove the main result of this section: a primal-dual convergence rate for USTM in the combined model.
The complexity analysis in the next theorem is similar to Theorems 3 and 4 in <cit.>, where USTM was applied to the route assignment problem.
Assume t^0 = t̅, L_0 ≤M^2/ε, and that at the k-th iteration problem (<ref>) is solved with accuracy δ_k = εα_k+1/(4 A_k+1).
Define
R = ‖t^* - t̅‖_2, R̃^2 = ρ^2 N^{2/μ}∑_e ∈ℰt̅_e^2/f̅_e^{2/μ} .
Then in the case of Beckmann's model, after at most
K = 4 (M R̃/ε)^2
iterations it holds that
0 ≤ P_3(d̂^K, f̂^K) - D_3'(t^K) ≤ε.
In the case of the Stable Dynamics model, after at most
K = 4 (M R/ε)^2
iterations it holds that
0 ≤ P_3(d̂^K, f̂^K) - D_3'(t^K) + ⟨ (f̂^K - f̅)_+, t^* - t̅⟩≤ε,
‖(f̂^K - f̅)_+‖_2 ≤ 2ε/R.
By Lemma <ref>,
Φ̃(t^k+1) ≤Φ̃(y^k+1) + ⟨∇̃Φ(y^k+1), t^k+1 - y^k+1⟩ + M‖t^k+1 - y^k+1‖_2.
Then Eq. (A3) from the proof of Theorem 3 in <cit.> ensures that for all k
A_k ≥ kε/(2 M^2) .
Recall that f = f^k are link flows by shortest paths for times y = y^k and interzonal trips d' = d'(y^k).
Then, according to Subsection <ref>,
E(d̃(y), T(y)) = ∑_i, j, r, a, m d_ij^ram T_ij^m(y) + H(d')
= ∑_e,m f_e^m (y_e + c_e^m) + H(d') ,
and for any t
φ̃_3(y) + ⟨∇̃φ_3(y), t - y ⟩ = -E(d̃(y), T(y)) + ⟨∇_t E(d̃(y), T(y)), t - y ⟩
= - ∑_e,m f_e^m (y_e + c_e^m) - H(d') + ∑_e,m f_e^m (y_e - t_e)
= - ∑_e,m f_e^m (t_e + c_e^m) - H(d').
Therefore, due to the convexity of the entropy,
ϕ_K(t) = ∑_k=1^K α_k (φ̃_3(y^k) + ⟨∇̃φ_3(y^k), t - y^k ⟩ + h(t)) + 1/2‖t - t^0‖_2^2
= - A_K ∑_e,m [f̂^K]_e^m (t_e + c_e^m) - ∑_k=1^K α_k H(d'(y^k)) + A_K h(t) + 1/2‖t - t̅‖_2^2
≤ A_K ∑_e(σ^*_e(t_e) - [f̂^K]_e t_e - ∑_m [f̂^K]_e^m c_e^m) - A_K H(d̂^K) + 1/2‖t - t̅‖_2^2.
Then by Lemma <ref>,
-D_3'(t^K) ≤∑_e(σ^*_e(t_e) - [f̂^K]_e t_e - ∑_m [f̂^K]_e^m c_e^m) - H(d̂^K) + 1/(2 A_K)‖t - t̅‖_2^2 + 3ε/4.
The rest of the proof repeats the proofs of Lemmata 1 and 2 in <cit.>, mutatis mutandis.
In the case of the Beckmann model, we substitute t_e = τ_e([f̂^K]_e), which gives the bound
-D_3'(t^K) ≤ - ∑_e(σ_e([f̂^K]_e) + ∑_m [f̂^K]_e^m c_e^m) - H(d̂^K) + R̃^2/(2 A_K) + 3ε/4
= - Ψ(f̂^K) - H(d̂^K) + R̃^2/(2 A_K) + 3ε/4≤ - P_3(d̂^K, f̂^K) + (M R̃)^2/(Kε) + 3ε/4.
In the case of the Stable Dynamics model,
-D_3'(t^K) ≤min_t ≥t̅{∑_e(f̅_e (t_e - t̅_e) - [f̂^K]_e t_e - ∑_m [f̂^K]_e^m c_e^m) + 1/(2 A_K)‖t - t̅‖_2^2}
- H(d̂^K) + 3ε/4
= - Ψ(f̂^K) - H(d̂^K) + min_t ≥t̅(⟨f̅ - f̂^K, t - t̅⟩ + 1/(2 A_K)‖t - t̅‖_2^2) + 3ε/4
= - P_3(d̂^K, f̂^K) - A_K/2‖(f̂^K - f̅)_+‖_2^2 + 3ε/4
≤ - P_3(d̂^K, f̂^K) - Kε/(4 M^2)‖(f̂^K - f̅)_+‖_2^2 + 3ε/4.
Since the optimal t^* - t̅≥ 0 are Lagrange multipliers for the problem (<ref>),
P_3(d̂^K, f̂^K) ≥ P_3(d^*, f^*) - ⟨ (f̂^K - f̅)_+, t^* - t̅⟩
= D_3'(t^*) - ⟨ (f̂^K - f̅)_+, t^* - t̅⟩ ,
and thus
- ⟨ (f̂^K - f̅)_+, t^* - t̅⟩≤ P_3(d̂^K, f̂^K) - D_3'(t^K) ≤ - Kε/(4 M^2)‖(f̂^K - f̅)_+‖_2^2 + 3ε/4.
Therefore,
Kε/(4 M^2)‖(f̂^K - f̅)_+‖_2^2 ≤ R ‖(f̂^K - f̅)_+‖_2 + 3ε/4
and, finally,
‖(f̂^K - f̅)_+‖_2 ≤4 M^2 R/(Kε) + M √(3/K) ,
which yields the result.
§ FRANK–WOLFE VARIATIONS AND USTM IN TRAFFIC ASSIGNMENT
Here, we consider several numerical methods for solving a separate problem of searching user equilibrium with inelastic demands. The Frank–Wolfe method and its variations with different line search strategies effectively solve the Beckmann traffic assignment problem, but due to its primal nature it cannot be applied to the stable dynamics model. Meanwhile, the primal-dual USTM method can be applied to both problems. Further, we conduct the experiments for these methods.
§.§ Frank–Wolfe Variations
In the Beckmann model, searching equilibria reduces to minimization of the potential function (<ref>). One of the most popular and effective approaches to solve this problem numerically is the famous Frank–Wolfe method <cit.> as well as its numerous modifications <cit.>. Also, one can apply the primal-dual subgradient methods to optimize the dual problem and then reconstruct a solution to the primal one. However, our research <cit.> showed that this approach demands more parameter adjustments to reach the Frank–Wolfe algorithm's performance with standard step size strategy.
In this paper, we test various step size strategies for the Frank–Wolfe method. Namely, we consider simple decaying step size schedules, such as the standard choice γ_k = 2/(k+1) and γ_k = 1/k (leading to the averaging of f^k), and a number of approaches based on choosing the optimal step size by solving the auxiliary one-dimensional problem
γ_k ∈argmin_γ∈ [γ_min, γ_max]Ψ((1 - γ) f^k + γ s^k),
where s^k is the current all-or-nothing (linear minimization) solution.
The variety of these approaches corresponds to the different one-dimensional optimization methods. We consider the Brent method on a segment γ_k ∈ [0, 1] <cit.> and exponential decreasing of γ_k until Armijo rule is satisfied <cit.>. The last modification considered is the backtracking line-search method developed for specific use in Frank–Wolfe algorithms proposed in <cit.>.
The Frank–Wolfe method's theoretical convergence rate for a convex objective (with Lipschitz-continuous gradient) is O(1/ε) <cit.>.
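To illustrate the step-size strategies above, here is a minimal Frank–Wolfe sketch for the Beckmann objective with either the standard decaying step or an Armijo backtracking step; `Psi`, `grad_Psi` and the all-or-nothing loader `aon_flows` are placeholders for the problem-specific pieces, not functions from our codebase.

```python
import numpy as np

def frank_wolfe(Psi, grad_Psi, aon_flows, f0, n_iters=100, rule="armijo"):
    """Minimise Psi(f) over the flow polytope; aon_flows(t) returns the
    all-or-nothing assignment (a vertex of the polytope) for link times t."""
    f = f0.copy()
    for k in range(n_iters):
        t = grad_Psi(f)
        s = aon_flows(t)                      # linear minimisation oracle
        d = s - f
        if rule == "standard":
            gamma = 2.0 / (k + 2.0)
        else:                                 # Armijo backtracking on gamma
            gamma, c = 1.0, 1e-4
            slope = t @ d                     # directional derivative (<= 0)
            while Psi(f + gamma * d) > Psi(f) + c * gamma * slope and gamma > 1e-12:
                gamma *= 0.5
        f = f + gamma * d
    return f
```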
§.§ Primal-dual Universal Similar Triangles Method
Let us recall that the dual problems (<ref>) and (<ref>) of the Beckmann and Stable Dynamics traffic assignment models have the same structure:
Q(t) = ∑_i,j d_ij T_ij(t) - h(t) ⟶max_t ≥t̅.
The optimization problems are convex, non-smooth and composite. We apply the USTM method to minimize the composite objective -Q(t). Here, in Algorithm <ref>, we set Φ(t) := -∑_i,j d_ij T_ij(t).
As in Section <ref>, for both models, flows (primal variables) are reconstructed in the following way:
f̂^K = - 1/A_K∑_k = 1^Kα_k ∇Φ(y^k),
where α_k is a coefficient of the USTM method at iteration k, and A_K = ∑_k = 1^Kα_k. Note that any element of the subdifferential ∂Φ(t) has the form ∇Φ(t) = - f, where f = Θ x is a flow distribution on links induced by some x ∈ X concentrated on the shortest paths for the given times t (and vice versa: any such f corresponds to a subgradient of Φ(t)). Hence, the weighted average f̂^K is also induced by flows on the paths.
For the Beckmann model, we can also use the duality gap to estimate the method's accuracy:
Δ^K = Q(t^K) + Ψ(f̂^K).
For the stable dynamics model, flows reconstruction according to (<ref>) keeps feasibility of f̂^K (i.e. they are induced by flows on the paths), but can violate the networks capacity constraints — so the duality gap Δ^K can be negative. To solve the SD traffic assignment problem with inelastic demand, <cit.> proposed a novel way to reconstruct admissible flows — which also meet capacity constraints – together with a novel computable duality gap, which can be used in a stopping criterion.
The USTM method requires O(1/ε^2) iterations to obtain an ε-solution of the primal and dual problems of the Beckmann and SD models <cit.>.
§ SINKHORN'S VARIATIONS FOR TRIP DISTRIBUTION
The optimization problem of the entropy-based trip distribution model of <cit.> coincides with the entropy-regularized optimal transport (OT) problem <cit.>. To solve this problem, the celebrated Sinkhorn algorithm is used (Subsection <ref>).
In Subsections <ref> and <ref>, we consider accelerated gradient methods adapted for solving OT problems. These methods achieve better theoretical convergence rates compared to Sinkhorn-like methods in some regimes.
Later, in Subsection <ref>, we conduct experiments to compare performances of the mentioned methods.
§.§ Sinkhorn's Algorithm
In this section, for the sake of formulas simplicity, we assume a single agent type and travel mode. Since the problem (<ref>) is separable, without loss of generality, we consider only one trip purpose and suppose ∑_i,j d_ij = 1. Thus, equation (<ref>) takes form
φ(λ^l, λ^w) = 1/γln∑_i,jexp-γ T_ij + λ^l_i + λ^w_j + ∑_iλ_i^l l_i + ∑_j λ_j^w w_j.
Following <cit.>, we perform a change of variables μ^l = -γλ^l, μ^w = -γλ^w in (<ref>) and obtain an equivalent formulation
ϕ(μ^l, μ^w) = 1/γ[ln(1^T d(μ^l,μ^w) 1) - ⟨μ^l, l⟩ - ⟨μ^w, w⟩] →min_μ^l, μ^w,
where
d(μ^l,μ^w)_i, j = exp(μ^l_i+μ^w_j-γ T_ij(t)),
with the primal-dual coupling
d̂ = d(μ^l, μ^w) / (1^T d(μ^l, μ^w) 1).
Similarly to the well-known Sinkhorn algorithm, the objective in (<ref>) can be alternatively minimized (see Algorithm <ref>).
Note that, according to Lemma 9 in <cit.>, for the problem (<ref>) partial explicit minimization is possible via the same formulas as for the classical entropy-regularized OT problem <cit.> without the tautological constraint:
ψ(μ^l, μ^w) = 1^T d(μ^l,μ^w) 1 - ⟨μ^l, l⟩ - ⟨μ^w, w⟩ →min_μ^l, μ^w ,
but the primal-dual coupling formulas are different: (<ref>) for the problem (<ref>) and (<ref>) for the problem (<ref>).
The argminima of (<ref>)
should be computed using a numerically stable evaluation of the logarithm of the sum of exponents (the logsumexp trick); analytically, the argminima are given by
lnμ^l_k+1 = lnμ^l_k + ln l - ln(d(μ^l_k, μ^w_k) 1),
lnμ^w_k+1 = lnμ^w_k + ln w - ln(1^T d(μ^l_k, μ^w_k)),
where the logarithm is taken element-wise.
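A minimal log-domain sketch of these updates (a toy illustration with assumed variable names, using scipy's logsumexp for stability):

```python
import numpy as np
from scipy.special import logsumexp

def sinkhorn_log(T, l, w, gamma, n_iters=1000):
    """Alternating log-domain updates for the entropy-regularised trip distribution;
    returns the coupling d with marginals approximately l and w (sum(l) = sum(w) = 1)."""
    log_mu_l = np.zeros_like(l)
    log_mu_w = np.zeros_like(w)
    log_K = -gamma * T                                   # log of the Gibbs kernel
    for _ in range(n_iters):
        z = log_K + log_mu_l[:, None] + log_mu_w[None, :]
        log_mu_l += np.log(l) - logsumexp(z, axis=1)     # ln mu^l update
        z = log_K + log_mu_l[:, None] + log_mu_w[None, :]
        log_mu_w += np.log(w) - logsumexp(z, axis=0)     # ln mu^w update
    z = log_K + log_mu_l[:, None] + log_mu_w[None, :]
    return np.exp(z - logsumexp(z))                      # normalised coupling

l = np.array([0.6, 0.4]); w = np.array([0.5, 0.5])
T = np.array([[0.0, 1.0], [1.0, 0.2]])
d = sinkhorn_log(T, l, w, gamma=5.0)
print(d.sum(axis=1), d.sum(axis=0))                      # approx. l and w
```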
The authors of <cit.> pointed out that the objective (<ref>), its gradient
∇_μ^lϕ(μ) = ∂ϕ(μ^l, μ^w) /∂μ^l = 1/γ(d(μ^l, μ^w) 1/(1^T d(μ^l, μ^w) 1) - l),
∇_μ^wϕ(μ) = ∂ϕ(μ^l, μ^w) /∂μ^w = 1/γ(d(μ^l, μ^w)^T 1/(1^T d(μ^l, μ^w) 1) - w),
and equation (<ref>) are invariant under transformations
μ^l→μ^l+t_μ^l
μ^w→μ^w+t_μ^w,
with t_μ^l,t_μ^w∈ℝ. That leads to better numerical stability.
In our experiments, we present a variant of Algorithm <ref> (labeled as SINKHORN-TAUT-SHIFT) with such invariant transformations, that provide maximum of the dual variables equals 1, and with numerically stable computations of the logarithm of the sum of exponents.
§.§ Accelerated Sinkhorn's Algorithm
Besides Sinkhorn's algorithm, accelerated gradient methods have been adapted for solving OT problems. These methods achieve better theoretical convergence rates compared to Sinkhorn-like methods in some regimes. To the best of our knowledge, the first such method was proposed in <cit.>, where the authors introduced a non-adaptive Accelerated Gradient Descent (AGD) method for a more general class of entropy-linear programming problems.
The algorithmic idea is to run AGD for solving (<ref>) and equip it with some primal updates to guarantee the convergence rate also for the primal problem.
In this subsection, Algorithm 1 and Algorithm 2 from <cit.> are described (the adaptation of the latter to the trip distribution problem is listed as Algorithm <ref>). The authors proposed to replace the gradient step of the classical AGD methods with a step of explicit minimization with respect to one of the blocks of variables. To formalize the latter, suppose that the vector of dual variables can be divided into m blocks, such that μ = (μ_1^T, …, μ_m^T)^T; then the notations ϕ(μ) and ϕ(μ_1, …, μ_m) are equivalent. Suppose also that it is possible to minimize the dual objective (<ref>) over the i-th block while holding the other variables fixed:
_i ϕ(μ) _ξϕ(μ_1, …, μ_i-1,ξ,μ_i+1, ⋯, μ_m).
Introduce also a notation for block gradient
∇_i ϕ(μ) = ∂ϕ(μ_1, …, μ_i-1,ξ,μ_i+1, ⋯, μ_m)/∂ξ.
The resulting algorithms are theoretically m times slower than their gradient counterparts, where m is the number of blocks of variables used in the alternating minimization, but in practice the algorithms work faster <cit.>.
In practice, the variable transformations (<ref>), (<ref>) (with t_ξ = -‖ξ‖_∞, ξ∈{μ^l,μ^w}) are performed after steps 5 and 7 of Algorithm <ref> and can lead to better numerical stability when γ is large.
For Algorithm 2 from <cit.>, the constraints residual ‖((d 1 - l)^T, (d^T 1 - w)^T)^T‖_2 = O(1/k^2); however, in our experiments we observed that the constraints residual decreases faster for d = d(κ_k) (<ref>) than for the primal variable d obtained from the primal-dual property of the algorithm. We therefore present experiments only for the best-performing modification with the d = d(κ_k) primal reconstruction, labeled NONPD (since it does not utilize the primal-dual property of the algorithms considered in this subsection).
According to <cit.>,
an objective of the form (<ref>) can be minimized at the following rate:
ϕ(κ_k)-ϕ^* = O(γ^{1/2}‖T‖_∞/k^2).
§.§ MIXED AGM
One more natural modification of Algorithm <ref> can be obtained by performing several steps of explicit minimization instead of one. The natural number of such steps seems to be the number of blocks m. However, the proof for Algorithm <ref> uses the following property of an explicit minimization step,
ϕ(η^k+1)≤ϕ(μ^k)-1/(2L)‖∇_i^kϕ(μ^k)‖_2^2,
in order to obtain
ϕ(η^k+1) ≤ϕ(μ^k)-1/(2mL)‖∇ϕ(μ^k)‖_2^2.
The latter is true since i^k=argmax_i∈{1, …, m}‖∇_iϕ(μ^k)‖_2^2.
But the inequality (<ref>) can be satisfied if one replaces lines 6 and 7 in Algorithm <ref> with the following Algorithm <ref>.
Despite its practical performance, this modification has no theoretical guarantees, because ‖∇_i^kϕ(μ^k)‖_2^2 can be greater than ∑_j^J ‖∇ϕ(ζ^j)‖_2^2 for any J > m.
Moreover, Algorithm <ref> is non-increasing. However, this non-increasing property can be violated (whether or not the exact minimization in lines 6 and 7 of Algorithm <ref> is replaced with Algorithm <ref>) due to numerical instabilities. Once this happens, Algorithm <ref> is stopped, and the computations can be continued from the last obtained η^k with Sinkhorn's iterations. In fact, these numerical instabilities break the monotonicity of Sinkhorn's iterations too, but in practice continuing the computations with Sinkhorn's iterations allows better minima to be found.
This modification, named MIXED-AGM-NONPD, combines the exact minimization given by Algorithm <ref> with Sinkhorn's iterations after the stability limit is reached.
§.§ Reconstruction of correspondence matrix
Finally, let us discuss a reconstruction of a solution to the primal problem (<ref>).
Assume we reconstruct a solution d_k of the primal problem (<ref>) by formula (<ref>) with μ^l = μ^l_k, μ^w = μ^w_k. However, since the dual problem is solved only approximately, d_k in general does not satisfy the marginal constraints. To obtain a feasible solution, one can use the projection Algorithm 2 from <cit.>. According to <cit.>, it has complexity O(|O| · |D|) and returns a correspondence matrix d̂_k ∈Π(l, w) such that
‖d_k - d̂_k‖_1 ≤δ_k = ‖d_k 1 - l‖_1 + ‖d_k^T 1 - w‖_1 .
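A compact sketch of such a projection onto Π(l, w), written from our reading of the cited rounding procedure (not a verbatim reimplementation of it):

```python
import numpy as np

def round_to_polytope(d, l, w):
    """Return a feasible matrix in Pi(l, w) that stays close to d in the l1 norm."""
    r = np.minimum(1.0, l / np.maximum(d.sum(axis=1), 1e-300))
    d1 = d * r[:, None]                       # scale rows down to respect l
    c = np.minimum(1.0, w / np.maximum(d1.sum(axis=0), 1e-300))
    d2 = d1 * c[None, :]                      # scale columns down to respect w
    err_l = l - d2.sum(axis=1)
    err_w = w - d2.sum(axis=0)
    if err_l.sum() > 0:
        d2 = d2 + np.outer(err_l, err_w) / err_l.sum()   # redistribute the remaining mass
    return d2
```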
The error can be estimated following Theorem 8 in <cit.>:
E(d̂_k, T) ≤min_d ∈Π(l, w) E(d, T) + 2 δ_k ‖T‖_∞ + 4 δ_k/γlog(|O| · |D|/δ_k) .
Consider μ^l_k, μ^w_k obtained with Sinkhorn's algorithm. Using (<ref>) and the convexity of ϕ we get
ϕ(μ^l_k, μ^w_k) - ϕ^* ≤⟨μ^l_k - μ^l_*, ∇_μ^lϕ(μ^l_k, μ^w_k)⟩ + ⟨μ^w_k - μ^w_*, ∇_μ^wϕ(μ^l_k, μ^w_k)⟩
= 1/γ(⟨μ^l_k - μ^l_*, d_k - l⟩ + ⟨μ^w_k - μ^w_*, d_k^T - w⟩),
where (μ^l_*, μ^w_*) is the solution of (<ref>).
Then Lemma 3 and Theorem 7 from <cit.> ensure that
ϕ(μ^l_k, μ^w_k) - ϕ(μ^l_*, μ^w_*) ≤1/2‖T‖_∞(‖d_k 1 - l‖_1 + ‖d_k^T 1 - w‖_1).
Combining the above bounds, we obtain the following bound on the duality gap:
E(d̂_k, T) + ϕ(μ^l_k, μ^w_k) ≤5/2δ_k ‖T‖_∞ + 4 δ_k/γlog(|O| · |D|/δ_k) .
§ NUMERICAL EXPERIMENTS
In our experiments, we consider the morning peak-hour in Moscow transportation network. The city's data are provided by Russian University of Transport.
The city and its suburbs are split into 1420 zones. Moscow road network consists of 12970 nodes and 36905 links, a part of it is visualized on Figure <ref>.
We model the crossings by inserting auxiliary links for each allowed turn between road links.
Resulting graph contains 63073 nodes and 94546 links.
In our four-stage model of Moscow we consider
* two demand layers: home-to-work, and home-to-others;
* two agent types: car owners and non-car-owners;
* and three travel modes: public transport, pedestrian and car.
§.§ Parallel computing
Calculation of the flows f is the most expensive part, since we have to find the shortest paths for all pairs w ∈ OD. We use Dijkstra's algorithm <cit.> to find the shortest paths, which runs in O(|ℰ| + |𝒱| log |𝒱|) time per origin; given the shortest-path tree, flow aggregation takes linear time O(|𝒱|). Hence, the total complexity of the flow calculation is O(|O| (|ℰ| + |𝒱| log |𝒱|)).
Moreover, flows reconstruction for every source o ∈ O can be computed in parallel.
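A minimal sketch of this per-origin parallelisation using scipy's Dijkstra routine and a thread pool; the toy graph and all names are illustrative, and the aggregation of flows along the shortest-path trees is omitted.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def shortest_costs_parallel(times, edges, n_nodes, origins, n_workers=4):
    """Shortest-path costs from every origin, computed in parallel chunks of origins.
    times: link travel times; edges: (row, col) arrays of link endpoints."""
    g = csr_matrix((times, edges), shape=(n_nodes, n_nodes))
    chunks = [c for c in np.array_split(np.asarray(origins), n_workers) if len(c)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(lambda src: dijkstra(g, directed=True, indices=src), chunks)
    return np.vstack(list(parts))

# toy graph: 0 -> 1 -> 2 plus a slower direct link 0 -> 2
edges = (np.array([0, 1, 0]), np.array([1, 2, 2]))
print(shortest_costs_parallel(np.array([1.0, 1.0, 5.0]), edges, n_nodes=3, origins=[0, 1]))
```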
Table <ref> shows the result of running 100 iterations of the Frank–Wolfe method on the Moscow road network with various numbers of cores involved (processor speed is 3092.72 MHz).
§.§ Frank–Wolfe Algorithm's Variations
Each of the considered modifications of the Frank–Wolfe algorithm was run up to 2000 iterations for the traffic assignment task of the classic four-stage model for the Moscow road network. The results are shown in Figure <ref>.
§.§ Sinkhorn Algorithm's Variations
Experiments were run for the Trip distribution stage with dual function adjustment for gradient methods described in Section <ref> for the Moscow road network.
The results are shown in Figure
<ref>.
Different formulations of the minimized objective for Sinkhorn's method were considered (for example, formulations (<ref>) and (<ref>)), but no conceptual differences were identified; therefore only formulation (<ref>) is shown, as SINKHORN-TAUT-SHIFT. The label AAM-NONPD corresponds to <cit.>, which can be adapted in the same way as Algorithm <ref> was adapted from <cit.>. Note that the Sinkhorn variation used here has a convergence rate comparable to the gradient methods, hence the common approach is suitable for solving the trip distribution problem.
§.§ Combined Model, Beckmann
Here we compare three algorithms for finding a fixed-point of the four-stage Beckmann traffic model,
namely Four-stage procedure, Evans algorithm and our dual approach via USTM.
The difference with the two-stage model is the addition of the mode split and mode cost averaging steps. The mode split step usually causes oscillation between the public transport and car modes when using the straightforward Four-stage procedure:
if the road network is uncongested at the first iteration, agents start alternating between these two modes at each iteration.
We therefore applied exponential averaging of the mode cost matrices to handle this problem:
T^m_ij[k+1] = 1/2(T^m_ij[new] + T^m_ij[k]).
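A sketch of this averaging inside the outer fixed-point loop; the three stage functions are placeholders for the trip distribution, modal split and assignment steps.

```python
def four_stage_fixed_point(init_costs, trip_distribution, modal_split, assignment, n_iters=20):
    """Outer loop with exponential averaging of the mode cost matrices (weight 1/2)."""
    T = init_costs                              # dict: mode -> cost matrix
    for _ in range(n_iters):
        d = trip_distribution(T)                # correspondences from current costs
        d_m = modal_split(d, T)                 # split over travel modes
        T_new = assignment(d_m)                 # new cost matrices after assignment
        T = {m: 0.5 * (T_new[m] + T[m]) for m in T}   # T[k+1] = (T_new + T[k]) / 2
    return T
```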
Figure <ref>
shows the convergence of the duality gap for all three algorithms considered.
It can be seen that the Four-stage procedure does not tend to converge to a zero duality gap:
after 5–6 iterations (about 70 minutes) it reaches its lowest duality gap value and then starts to fluctuate around it.
To increase the accuracy of the approximate solution found by the Four-stage procedure, one has to increase the number of inner iterations, which makes each outer iteration slower.
In contrast, the Evans method steadily converges to a zero duality gap.
Some intuition about the behavior of the methods can be given by the Figure <ref>, where two-dimensional projections of d^m_ij trajectories are depicted. The projections were made by multidimensional scaling method, which tries to preserve pairwise distances while matching points from high-dimensional space (in our case — correspondence matrices) to points on the plane.
As one can see, the trajectories start from the same point, since the calculation of the correspondence matrices and the modal splitting in both methods is the same.
After a few iterations the trajectories of the methods are in the same region again, but the Evans method proceeds with small steps, while the Four-stage procedure makes long jumps around the point to which Evans method converges.
The trajectory of USTM is similar to the trajectory of the Evans algorithm and is omitted for the sake of readability.
§.§ Combined model, Stable Dynamics
Here we compare the results obtained for the Beckman and the Stable Dynamics models on the Moscow city transportation model.
We use the USTM algorithm to search for the equilibria because other algorithms are not applicable since the link travel times are not functions of the link flows in the Stable Dynamics model.
We used the same Moscow network as in previous experiment, but, since Stable Dynamics model is usually infeasible for peak-hours correspondences, we divided the peak-hour departures l_i^ra and arrivals w_j^r by two.
Convergence trajectories for Stable Dynamics model are shown in Figure <ref>.
We discuss convergence of Beckmann model in more representative case of peak-hour departures and arrivals in Subsection <ref>, therefore convergence trajectories for Beckmann model are omitted in this subsection.
We assess the convergence by monitoring two values: constraint violation and function suboptimality.
Since the dual approach allows the flows to exceed the link capacities, the primal variables may stay outside the feasible region, and thus the duality gap can be negative, as shown in Figure <ref>.
When the duality gap is negative, the objective function value at the approximate solution (of the minimization problem) is less than the optimal function value, but the approximate solution is infeasible.
The comparison of the approximate solutions is given in Figure <ref>.
It is evident that Beckmann's model is more likely to exceed the link capacity.
Figure <ref> shows that the travel time on some links in Stable Dynamics model exceeds the free-flow time by several hundred times.
This implies that some zones are connected to the rest of the network only by low-capacity links, leading to huge traffic congestion at equilibrium.
This result is likely due to inaccuracies in the input data,
but if not, these bottleneck links should be prioritized in the transportation network improvement process.
§.§ Traffic Assignment Model: Frank–Wolfe vs USTM for Beckmann model
Experiments were conducted for a single trip purpose, agent type, and travel mode (car) on the Berlin-Center network split into 865 zones with 12981 nodes and 28376 links (for more details see <cit.>). As shown in <cit.>, the performance of the USTM method is better than that of UGD <cit.> and other variations of accelerated gradient descent, so only the USTM and conventional Frank–Wolfe methods are considered. Convergence in terms of the primal function and the duality gap is presented in Figure <ref>. Note that the larger ε is, the faster USTM converges to ε accuracy before starting to oscillate. Therefore, it makes sense to use a restarting technique for faster convergence: run the method with ε' and then with the final desired accuracy ε < ε'.
§ ACKNOWLEDGEMENTS
This work was supported by the federal academic leadership program "Priority-2030" under the agreement No. 2022/pr-203 dated 08/17/2022 between MIPT and RUT (MIIT) and by the Ministry of Science and Higher Education of the Russian Federation (Goszadaniye) 075-00337-20-03, project No. 0714-2020-0005.
The work of A. Kroshnin was conducted within the framework of the HSE University Basic Research Program.
|
http://arxiv.org/abs/2307.02323v1
|
20230705142536
|
Enhanced Electron Spin Coherence in a GaAs Quantum Emitter
|
[
"Giang N. Nguyen",
"Clemens Spinnler",
"Mark R. Hogg",
"Liang Zhai",
"Alisa Javadi",
"Carolin A. Schrader",
"Marcel Erbe",
"Marcus Wyss",
"Julian Ritzmann",
"Hans-Georg Babin",
"Andreas D. Wieck",
"Arne Ludwig",
"Richard J. Warburton"
] |
quant-ph
|
[
"quant-ph",
"cond-mat.mes-hall"
] |
[email protected]
Department of Physics, University of Basel, 4056 Basel, Switzerland
Department of Physics, University of Basel, 4056 Basel, Switzerland
Department of Physics, University of Basel, 4056 Basel, Switzerland
Department of Physics, University of Basel, 4056 Basel, Switzerland
[Current address: ]School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
Department of Physics, University of Basel, 4056 Basel, Switzerland
Department of Physics, University of Basel, 4056 Basel, Switzerland
Department of Physics, University of Basel, 4056 Basel, Switzerland
Swiss Nanoscience Institute, University of Basel, 4056 Basel, Switzerland
Lehrstuhl für Angewandte Festkörperphysik, Ruhr-Universität Bochum, 44780 Bochum, Germany
Lehrstuhl für Angewandte Festkörperphysik, Ruhr-Universität Bochum, 44780 Bochum, Germany
Lehrstuhl für Angewandte Festkörperphysik, Ruhr-Universität Bochum, 44780 Bochum, Germany
Lehrstuhl für Angewandte Festkörperphysik, Ruhr-Universität Bochum, 44780 Bochum, Germany
Department of Physics, University of Basel, 4056 Basel, Switzerland
A spin-photon interface should operate with both coherent photons and a coherent spin to enable cluster-state generation and entanglement distribution.
In high-quality devices, self-assembled GaAs quantum dots are near-perfect emitters of on-demand coherent photons. However, the spin rapidly decoheres via the magnetic noise arising from the host nuclei. Here, we address this drawback by implementing an all-optical nuclear-spin cooling scheme on a GaAs quantum dot. The electron-spin coherence time increases 156-fold from T_2^* = 3.9 to 0.608. The cooling scheme depends on a non-collinear term in the hyperfine interaction. The results show that such a term is present even though the strain is low and no external stress is applied.
Our work highlights the potential of optically-active GaAs quantum dots as fast, highly coherent spin-photon interfaces.
Enhanced Electron Spin Coherence in a GaAs Quantum Emitter
Richard J. Warburton
August 1, 2023
==========================================================
§ INTRODUCTION
Photonic quantum technologies require quantum emitters capable of high-fidelity and high-rate operation. Of particular interest for quantum networks <cit.> and measurement-based quantum computing <cit.> are quantum emitters that host a spin <cit.>, allowing the creation of spin-photon entanglement.
Self-assembled semiconductor quantum dots (QDs) are potential candidates for spin-photon interfaces due to the deterministic photon emission at exceptionally high quality and rates <cit.> and the ability to load a QD with a single electron or hole <cit.>. This has led to demonstrations of spin-photon entanglement <cit.>, distant spin-spin entanglement <cit.>, and the creation of multi-photon cluster states <cit.>.
However, in these previous experiments, the short spin coherence times, T_2^*∼1-10 ns, limited the entanglement fidelity. The short T_2^* is a consequence of magnetic noise in the host nuclear spins, coupling to the electron spin via the hyperfine interaction <cit.>.
A powerful way to mitigate the short T_2^* is to cool the nuclear spins to ultralow temperatures in order to reduce the fluctuations. The nuclei can be cooled via the electron spin itself, exploiting the hyperfine interaction <cit.>. In an optical experiment, this was originally demonstrated on an ensemble of QDs <cit.>. On single QDs, nuclear spin cooling was demonstrated on gate-defined GaAs QDs via a measure-and-correct feedback loop <cit.>. More recently, the highly inhomogeneous nuclear spins of a self-assembled InGaAs QD were cooled via an autonomous feedback <cit.>. Subsequently, a quantum sensing protocol was employed, narrowing the nuclear distribution further, thereby increasing T_2^* to 300 <cit.>. For both schemes, a non-collinear term in the hyperfine interaction is required to allow for the cooling of the nuclei. In contrast to the collinear term from the contact hyperfine interaction (∝ S_z I_z), the non-collinear term (∝ S_z I_x) arises from nuclear quadrupolar fields in strained QDs; here S_z (I_z) is the electron (nuclear) spin operator along the direction of the applied magnetic field
<cit.>.
The most studied QDs for spin-photon applications are QDs in the InGaAs/GaAs system. InGaAs QDs are self-assembled via the strain-driven Stranski-Krastanov mechanism. Self-assembled GaAs QDs in an AlGaAs matrix represent an alternative platform. The strain is low such that these QDs are self-assembled via an alternative mechanism, droplet-etching. Low-noise GaAs QDs have excellent photonic properties, all at a convenient wavelength (around 780 nm). In high-quality material, the optical linewidths are within 10% of the transform limit <cit.>. Photons emitted by remote QDs have achieved a two-photon interference visibility of 93% without spectral or temporal filtering <cit.>. The biexciton cascade generates entangled photon pairs with an extremely high entanglement concurrence <cit.>. In terms of the nuclear spins, the lack of both strain and spin-9/2 In atoms results in a homogeneous nuclear spin ensemble <cit.>, as demonstrated by the success of the Carr-Purcell-Meiboom-Gill (CPMG) decoupling scheme in prolonging the electron spin T_2 from 3.8 μs to 113 μs <cit.>. However, as for InGaAs QDs, noise in the nuclear spins limits T_2^* to values of a few ns. To date, the possibility of feedback cooling the nuclear spins via the electron spin has remained uncertain, due to the predicted absence of the strain-generated non-collinear hyperfine interaction.
Here, we implement all-optical cooling schemes on low-noise GaAs QDs and demonstrate an increase in the electron spin coherence time from T_2^* = 3.9 ns to 0.608 μs.
This is achieved with autonomous feedback and without any external perturbation (such as strain tuning).
We demonstrate spin control with , an extension of T_2 with CPMG (with a scaling of T_2^CPMG∝ N^0.69 matching previous experiments <cit.>), fast spin rotations (Rabi frequencies above 100 MHz), and high-fidelity spin control (). Our results establish GaAs QDs as an emitter of coherent photons and a host to a coherent spin.
To create the QDs, droplet-etched nanoholes in an Al_0.15Ga_0.85As matrix are filled with GaAs and capped by an Al_0.33Ga_0.67As layer. The materials are almost lattice-matched. Figure <ref>(a) shows a high-angle dark-field scanning transmission (HAADF-STEM) image of a GaAs QD <cit.>. Notable is a thin, Al-rich layer at the bottom surface of the QD <cit.>. The QD is embedded in a p-i-n diode structure (see Fig. <ref>(b)) such that the QD charge is stabilised via the Coulomb blockade. Individual QDs exhibit near-transform-limited optical linewidths <cit.>. A 3.00 T magnetic field is applied perpendicular to the growth direction (Voigt geometry), at an angle of 45^∘ to the in-plane crystal axes. The electron Zeeman frequency is corresponding to a g-factor of .
The spin is manipulated by a two-colour Raman pulse detuned from the excited states by (see Fig. <ref>(c)). This pulse is created by amplitude-modulating circularly-polarised light with an electro-optic modulator driven by an arbitrary waveform generator <cit.>.
A laser resonant with the red “vertical” transition is used to read out the spin (such that the |↓⟩-state is bright, the |↑⟩-state is dark) and to prepare the spin in the |↑⟩-state via optical spin pumping <cit.>.
Driving the electron spin resonance (ESR) (Fig. <ref>(d)) shows clear Rabi oscillations between |↑⟩ and |↓⟩ with increasing drive time t.
We find an exponential decay of the oscillations with , corresponding to a quality factor of and π-pulse fidelity at .
As has been observed for InGaAs QDs <cit.>, we find a strong modulation of the quality factor <cit.> when the electron spin is driven close to the nuclear Larmor frequencies ω_n (i.e., Ω∼ω_n), a signature of an electron-nuclei interaction via a Hartmann-Hahn resonance <cit.>.
We access rotation around a second axis on the Bloch sphere by controlling the phase of the microwave signal that is imprinted on the optical field. Fig. <ref>(e) shows the sinusoidal response after two consecutive π/2-pulses on changing the phase ϕ of the second pulse, thereby demonstrating rotation around an arbitrary axis on the equator of the Bloch sphere.
On driving Rabi oscillations as a function of the detuning Δ with respect to the Zeeman frequency (), we find strong deviations from the typical chevron pattern expected for a two-level system (see Fig. <ref>(a)). In a ∼200 window around the Zeeman frequency, we find that the spin rotations lock to the probe frequency f_probe, a clear signature of electron spin–nuclear spin coupling <cit.>.
When the ESR is locked via the hyperfine interaction, cooling of the nuclei, equivalently narrowing of the nuclear distribution, is predicted <cit.>. This can be quantified by a reduction in σ_OH, the standard deviation of the ESR frequency fluctuations due to the changing Overhauser field. To probe this, we perform a free-induction decay (FID) experiment to measure the electron coherence time T_2^* in a Ramsey experiment, which acts as a gauge of the temperature of the nuclear spin ensemble (σ_OH∝ T_2^*) <cit.>.
We compare the bare T_2^* to that obtained after locking the ESR (see Fig. <ref>(b)). We observe a 20-fold increase from 3.9 ns to 78±2 ns, corresponding to a narrowing of σ_OH from 52±1 MHz to 2.90±0.05 MHz following the Rabi drive.
Remarkably, we already find an enhancement in coherence time without a dedicated cooling pulse when the Ramsey experiment is carried out with a high duty cycle: repetitive Ramsey experiments lead to a T_2^* of 7.8±0.2. To determine the bare electron coherence time, we add a 100 buffer between each cycle. This observation suggests that the repetitive application of spin manipulation pulses as short as 4 already leads to a narrowing of σ_OH.
We confirm the nuclear-spin cooling and locking of the ESR to the Rabi drive by fixing the cooling frequency f_c during Rabi cooling, subsequently detuning the probe frequency f_probe in a Ramsey experiment. Oscillations arise at the detuning frequencies Δ = f_c-f_probe as expected in a classic Ramsey experiment (see Fig. <ref> (c,d)), now with an increased coherence time.
To cool the nuclei further, we implement the recently developed quantum-sensing-based cooling scheme <cit.>. In this protocol, each cooling cycle consists of three steps (see Fig. <ref>(a, top)): (i) The electron spin is initialised and then rotated to the equator with a π/2-pulse. A period of free evolution τ_sense allows the electron to sense the Overhauser field fluctuation that leads to a detuning Δ from the target frequency f_c.
(ii) A coherent electron-nuclei flip-flop interaction arising from a non-collinear term in the hyperfine interaction is activated through ESR driving at Hartmann-Hahn resonance Ω≈ω_n. The sign of the detuning Δ determines the direction of the nuclear flops and thus leads to a reversal of the measured fluctuation. (iii) A projective measurement of the spin state transfers entropy from the nuclei and concludes one cycle of the cooling scheme. Repeating this cycle with increasing sensing time τ results in a narrower feedback function in each cycle and hence an increased sensitivity to changes in σ_OH.
We find optimal parameters for the quantum-sensing-based cooling at cycles with a linearly increasing sensing time τ_sense from τ_min = 20 to , and electron-nuclei drive time at a Rabi frequency , followed by a spin pumping pulse of 200 <cit.>. This preparation sequence takes ∼22 and is repeated before each Ramsey cycle.
The electron coherence time T_2^* increases from 3.9±0.2 ns to 0.608±0.013 μs after application of the protocol (see Fig. <ref>(a, b)). This constitutes a 156-fold increase in T_2^*. The final T_2^* is a factor of two larger than the previous highest T_2^* reported on an electron spin hosted by an InGaAs QD (296 ns <cit.>) and just below the highest reported T_2^* of a single electron spin qubit in a gate-defined GaAs QD (767 ns <cit.>). The enhancement corresponds to a narrowing of the nuclear-spin ensemble to 0.355±0.004 MHz (see Fig. <ref>(b, inset)).
Using hyperfine constants A_k and abundancies η_k of the nuclei species k∈{^69Ga,^71Ga,^75As} we can estimate the number of nuclei involved and estimate the hyperfine interaction per nuclei <cit.>. This corresponds to a distribution of σ_OH/A_c≈ 376.8 macrostates in the uncooled state and 2.6 after quantum-sensing-based cooling, entering the regime where just a few nuclei excitations remain.
For both the quantum-sensing-based and Rabi cooling schemes, the Rabi frequency Ω_c is an important parameter (see Fig. <ref>(c)). The maximum performance for both cooling schemes occurs at , close to the difference frequency of ^71Ga and ^75As (). This result is in contrast to those on InGaAs QDs for which cooling was most effective at a direct Hartmann-Hahn resonance <cit.>. Generally speaking, the fact that cooling via an autonomous feedback process is effective on GaAs QDs shows that a non-collinear term in the hyperfine interaction <cit.> must be present even though the strain in the QDs is small.
Following cooling, a typical chevron pattern is observed on driving Rabi oscillations as a function of detuning with respect to the cooling frequency f_c (Fig. <ref>(d)), using here a Rabi frequency below the Hartmann-Hahn resonances. This demonstrates that in this case the electron spin is isolated from the nuclear environment and behaves as a two-level system.
In addition, the quality factor of the oscillations now increases to (corresponding to a π-pulse fidelity of 98.4±0.1) <cit.>, consistent with a reduction of hyperfine-interaction-induced Rabi decay.
Recent experiments showed that the electron spin T_2 can be increased by implementing a decoupling scheme, the CPMG protocol. As a final step, we verify that this is also possible on the QD for which nuclear spin cooling was highly effective (see Fig. <ref>(d)). By applying CPMG pulses, we extend T_2 from using a Hahn echo (N_π=1) to , an order of magnitude increase, with pulses. We extract a T_2 scaling of T_2^CPMG∝ N_π^γ with , consistent with recent results on droplet-etched QDs <cit.> and gate-defined QDs <cit.>. This result confirms that the nuclear spin ensemble is highly homogeneous. The application of more pulses is currently limited by imperfect pulse calibrations and the electron spin relaxation time <cit.>.
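As an aside, the scaling exponent γ can be extracted from (N_π, T_2) pairs by a linear fit in log–log space; the numbers below are placeholders, not measured values.

```python
import numpy as np

# placeholder (N_pi, T2) pairs; real data would come from the CPMG measurements
N_pi = np.array([1, 2, 4, 8, 16, 32])
T2 = np.array([4.0, 6.3, 10.1, 16.0, 25.6, 41.0])    # arbitrary units

gamma, log_prefactor = np.polyfit(np.log(N_pi), np.log(T2), 1)
print(f"T2_CPMG ~ N^{gamma:.2f}")                     # exponent of the power-law scaling
```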
In conclusion, we have demonstrated fast and flexible optical control of an electron spin confined to a self-assembled GaAs QD. We show that autonomous feedback protocols to cool the nuclear spins are very effective even on an as-grown, close-to-strain-free QD. Nuclear-spin cooling leads to a 156-fold increase in the T_2^* time, . Furthermore, both T_2^* and T_2 can be extended on exactly the same QD, T_2^* by nuclear spin cooling, T_2 by dynamic decoupling. These results imply that a small non-collinear term must be present in the hyperfine Hamiltonian. Following nuclear spin cooling, T_2^* becomes much longer than both the time required to rotate the spin and the time required to generate a photon. Together with recent results on the generation of indistinguishable photons from remote GaAs QDs <cit.> performed on the same sample as used in this experiment, our results highlight the promise of GaAs QDs for a coherent spin-photon interface. Furthermore, the system represents an ideal testbed for creating non-classical collective states within the nuclear spin ensemble <cit.>.
We thank Ming-Lai Chan and Peter Lodahl at the Niels-Bohr Institute, Leon Zaporski and Mete Atatüre at the University of Cambridge, and Dorian Gangloff at the University of Oxford for stimulating discussions.
The work was supported by SNF Project 200020_204069 and Horizon 2020 FET-Open Project QLUSTER. LZ, GNN and AJ received funding from the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No. 721394 (4PHOTON), No. 861097 (QUDOT-TECH), and No. 840453 (HiFig), respectively. HGB, JR, ADW and AL acknowledge financial support from the grants DFH/UFA CDFA05-06, DFG TRR160, DFG project 383065199, and BMBF Q.Link.X 16KIS0867.
apsrev4-2
§ SUPPLEMENTARY INFORMATION: ENHANCED ELECTRON SPIN COHERENCE IN A GAAS QUANTUM EMITTER
§ EXPERIMENTAL SETUP
The quantum dot (QD) sample is cooled down to 4.2 in a helium bath cryostat (see Fig. <ref>). A magnetic field is applied perpendicular to the QD growth direction (Voigt geometry), at 45^∘ to the crystal axes (along the [100] or [010] direction). QD excitation, spin manipulation, and readout are performed all-optically with the use of a dark-field microscope <cit.>. QD emission is filtered from the excitation laser at frequency f_1 (Toptica DL Pro) using two polarising beam-splitters (PBS). Spin initialisation and readout are triggered using an acousto-optic modulator (AOM, Gooch&Housego 3200-124). Spin manipulation is facilitated with a second laser at frequency f_2, red detuned from f_1. The laser is amplitude-modulated with an electro-optic modulator (EOM, Jenoptik AM785) that is driven at f_AWG to generate sidebands at ± f_AWG. A second AOM (Gooch&Housego 3200-1113) is used to set the amplitude of the spin control pulses. A quarter-wave plate (QWP) sets the required circular polarisation and the pulse is sent down the axis of the cryostat to the QD with a 30:70 beam splitter. The reflected rotation laser light is filtered from the QD emission with a 25-bandwidth grating filter.
Control signals for spin initialisation, manipulation and readout are generated by an arbitrary waveform generator (AWG, Tektronix 7122C) and QD counts are detected with a superconducting nanowire single photon detector (SNSPD, Single Quantum) and processed with a time-tagger (Swabian Instruments Time Tagger Ultra). The QD sample is the same as used in Ref. <cit.> and <cit.> and growth details can be found in Ref. <cit.>.
§ QUANTUM DOT
§.§ Structural properties and composition
We perform energy-dispersive X-ray spectroscopy (EDX) on a QD from the same wafer to determine the spatial distribution of arsenic, gallium and aluminium atoms. The sample preparation for EDX/scanning transmission electron microscopy (STEM) was carried out in an FEI Helios NanoLab 650 DualBeam, a combined scanning electron microscope (SEM) and focused ion beam (FIB). A double layer of carbon is deposited to protect the QD from ion-induced damage. The first C-layer was deposited using electron-induced deposition at a beam energy of 5 and a beam current of 3.2. The second C-layer was deposited with ion-induced deposition at a beam energy of 30 and a beam current of 83. Sample cutting and polishing were carried out with the FIB at a beam energy of 30 and beam currents ranging from 240 down to 83. The sample thickness in the upper area, where the QD is located, was <50. The imaging of the TEM specimen was carried out in a JEOL JEM-F200 operated in the STEM-mode at a beam energy of 200. A high-angle dark-field scanning transmission image (HAADF-STEM) of a QD is shown in Fig. 1 of the main text.
The arsenic EDX intensity is homogeneous as expected: the arsenic concentration is constant throughout the growth of the QD layer and matrix material (see Fig. <ref>(a)). The EDX intensity for gallium atoms shows a high signal below the QD (y∼50,x∼50) and a low signal above (see Fig. <ref>(b)). This is expected as the matrix material below the QD is Al_0.15Ga_0.85As and the QD is grown on and capped with Al_0.33Ga_0.67As. Accordingly, the QD alone would be expected to have an even higher EDX intensity, as it is filled purely with GaAs. However, in the EDX experiment the gallium signal at the QD is dominated by matrix material around the QD. The EDX signal for aluminium atoms shows a low aluminium signal below the QD and a high signal above (see Fig. <ref>(c)). In addition, a thin film of high aluminium signal can be seen above the QD at y∼45nm revealing the presence of a high-aluminum content layer. This layer is formed from aluminium used in drilling the nanoholes. Interestingly, this thin, high Al-content layer also forms at the boundary of the nanohole, leading to the growth of GaAs not on Al_0.33Ga_0.67As but on a thin layer with higher Al-content. This high Al-content layer represents an increase in both alloying and strain with respect to a pure GaAs QD <cit.>.
§.§ Optical properties at zero magnetic field
The QD linewidth is measured by slowly scanning a narrow-band laser through the X^- resonance (see Fig. <ref>(a)). Low excitation power is used to avoid power broadening. A Lorentzian fit to the data gives a FWHM of 491. This should be compared to the Fourier-limited linewidth as inferred from an X^- lifetime measurement. In a pulsed experiment, the emission decay time is recorded in a histogram (see Fig. <ref>(b)). An exponential fit gives a X^- lifetime of , which corresponds to a Fourier-limited linewidth of . The linewidth is therefore 22 above the Fourier limit. This is slightly higher than the average QD-linewidth on the sample <cit.>.
§.§ Optical properties at B = 3.00 T
We record a resonance fluorescence (RF) plateau map by scanning both excitation laser frequency (f) and gate voltage (V_g) across the X^- transitions at (see Fig. <ref>(a)). A fine scan across the plateau centre (see Fig. <ref>(b)) shows a single line, while a linecut across the plateau edge (see Fig. <ref>(c)) shows four distinct peaks corresponding to the four optical transitions. From a fit to a sum of four Lorentzians, we can extract the ground-state and excited-state Zeeman splittings to be and, respectively, corresponding to g-factors () of and . (The assumption here is that g_e is negative.) Vanishing signal is expected in the plateau centre due to optical spin pumping. The single line we observe arises due to repumping as the laser drives both of the near-degenerate “diagonal” transitions. The “vertical” transitions are extinguished by spin pumping, as expected. Optical spin pumping is ineffective at the plateau edges due to cotunneling with the Fermi sea. Changing the rotation angle of the HWP in the beam path of the QD excitation (cf. Fig. <ref>), we can either address the two outer “vertical” transitions (x-polarised), the two inner “diagonal” transitions (y-polarised), or all four transitions with “diagonal” polarisation (see Fig. <ref>(d)). For the spin-manipulation experiments, we set the HWP such that only the outer “diagonal” transitions are excited (∼ 85^∘).
§ SPIN PROPERTIES
§.§ Data acquisition
For all experiments, a laser background was recorded by carrying out a measurement with the X^--transition out of resonance with the readout laser. This was carried out by changing the voltage applied to the diode. After subtracting the background signal, counts during the readout pulse were integrated with an integration time depending on the duty cycle of the experiment (typically 5-60 for a signal to noise ratio of ∼ 4). For Ramsey and Carr-Purcell-Meiboom-Gill (CPMG) experiments, two measurements were performed. In the second one, the electron is initialised in one state but subsequently projected into the opposite spin-state by adding a π-phase shift to the final π/2-pulse. This gives the top and bottom envelopes of the experiment. Additionally, it avoids a dynamic nuclear polarisation from building up <cit.>. The counts from the two envelopes (c_↓, c_↑) are used to calculate the visibility C via:
C=c_↓-c_↑/max(c_↓-c_↑)
§.§ Spin relaxation and optical spin pumping
The electron spin lifetime T_1 is measured in a pump-probe experiment (see Fig. <ref>(a)). A 200 pulse in resonance with the red “vertical” transition pumps the spin into the |↑⟩-state. The same pulse is applied after a delay τ. For short delays, no signal is expected from the second readout pulse. For longer delays spin relaxation processes flip the spin back to the |↓⟩-state leading to the reappearance of a signal. By fitting the data to , we find an electron spin lifetime of .
Capturing the time histogram of the counts during spin pumping, we find an exponential decay characterising the spin-pumping time (see Fig. <ref>(b)). The spin-pumping time decreases with increasing laser power. For all experiments in this work, the spin pumping laser-power (equivalently, readout laser-power) is set such that the spin pumping time is ∼17. An estimate of the spin-pumping fidelity is given by the residual counts (c_∞) compared to the initial counts (c_0) via <cit.>.
§.§ Rabi frequency scaling
Figure <ref>(a) shows the Rabi frequency Ω/2π of the driven electron spin as a function of laser power. Ideally, Ω/2π depends linearly on the laser power <cit.>. We find a small deviation to a linear behaviour which we attribute to a slightly nonlinear response of the AOM we used for power control. The Rabi frequency scales inversely with detuning with respect to the excited states Δ_L. Figure <ref>(b) shows Rabi oscillations for increasing Δ_L. The Rabi frequency is extracted from an exponential fit and shown in Figure <ref>(c). While Rabi frequencies of several hundred MHz are possible, the quality factor falls at the highest laser powers on account of a laser-induced spin flip <cit.>.
§.§ Laser-induced spin flip
The effect of a rotation-laser-induced spin flip can be measured by setting f_AWG off-resonance with respect to the electron spin resonance (ESR) (see Fig. <ref>, diamonds). On increasing the drive time t we see an increase in counts, a consequence of a rotation-laser-induced spin flip. This process limits the quality factor of the Rabi oscillations <cit.>. An exponential fit exp(-κ t) gives a spin-flip rate of for and for . We find a scaling factor of and 0.25, respectively. The mechanism responsible for this unwanted process is unclear.
§.§ Hartmann-Hahn resonances
The quality factor Q = 2T_2^Rabif_Rabi of the electron spin rotations shows a nontrivial dependence on the Rabi frequency f_Rabi=Ω/2π (see Fig. <ref>). Driving at Ω>2π×50, the quality factor is constant at Q ≈ 30, limited by rotation-laser-induced spin-flips. For Ω<2π×50 there is a complicated Ω-dependence. Hartmann-Hahn resonances are expected when the Rabi frequency matched the nuclei frequencies ω_n (i.e., Ω≈ω_n) <cit.>. Something akin to the Hartmann-Hahn process is revealed here as Q rises and falls in the frequency range where Ω/2π and the various values of ω_n/2π match. However, the minima in Q do not lie consistently at the exact values of ω_n/2π for the isotopes involved. Furthermore, we also find a minimum in Q close to the difference frequency ^71Ga – ^75As. Similar structure was observed for InGaAs QDs <cit.> despite the very different strain environments of the two systems and on gate-defined GaAs QDs <cit.>.
§ COOLING OF THE NUCLEAR ENSEMBLE
§.§ Rabi cooling
Figure <ref>(a) shows the nuclear cooling dependence on the duration of the Rabi cooling pulse length T_c. We find a maximum of T_2^* for T_c≈200 and a slight drop of T_2^* for longer T_c. Note that a single Rabi cooling cycle is not enough to reach – it requires the repetitive application of the Rabi cooling pulses and the Ramsey measurement. Adding a waiting time t_wait between Rabi cooling pulse and the Ramsey experiment, we find a decay of the T_2^* (see Fig. <ref>(b)). Fitting the data to a single exponential decay, we find a decay time of 39±8. In both experiments, the initialise/readout pulse is 200 in duration, the π/2-pulses are 3 in duration.
§.§ Quantum-sensing-based cooling
Figure <ref>(a) shows the pulse sequence applied to the spin control laser, the readout laser and the corresponding signal we measure on the SNSPD for the quantum-sensing-based cooling protocol.
Figure <ref>(b-e) summarises the dependencies of the quantum-sensing-based cooling on T_c, N_pulses, τ_max, and τ_min. Each of the parameters was changed individually with the other parameters kept constant. For T_c and τ_max a clear maximum could be found at T_c≈125 and τ_max≈500. Conversely, the dependence on N_pulses did not show a strong optimum; we chose to use N=40 pulses. Remarkably, with just we reach a T_2^* of 500. This suggests that the quantum-sensing-based cooling protocol also relies on the repetitive application of the cycle in Fig. <ref>(a), i.e., that full cooling is not achieved with a single cycle. τ_min is set such that the Ramsey envelope does not show oscillations. This is most clearly visible on performing a fast Fourier transform (FFT) on the Ramsey decay. For only a single peak is visible in the FFT. For side peaks appear, suggesting that τ_min is set too large such that there are multiple locking points <cit.>. We thus decided to work with .
Figure <ref>(f) shows T_2^* as a function of the wait time t_wait between quantum-sensing-based cooling and the Ramsey experiment, similar to Fig. <ref>(b). Fitting the data with an exponential decay, we find a decay time of 41±4μ. The decay time is short compared to the nuclear spin diffusion times extracted from similar experiments on InGaAs QDs <cit.> and from NMR experiments on droplet-etched GaAs QDs <cit.>. Our results on this GaAs QD may hint at the importance of electron cotunneling or an electron-mediated coupling between separate nuclear spins <cit.>.
§.§ Chevron after quantum-sensing-based cooling
On measuring Rabi oscillations as a function of ESR detuning () after quantum-sensing-based cooling at we find the typical chevron pattern (see Fig. <ref>). Taking into account the effect of Gaussian-distributed Overhauser field noise with effective ESR broadening of width σ_OH, the Rabi oscillations can be modelled as:
C(t,Δ) = ∫_-∞^+∞ dδ [1/(√(2π) σ_OH)] exp(-δ^2/(2σ_OH^2)) · [Ω^2/(Ω^2+(Δ-δ_AC+δ)^2)] sin^2(√(Ω^2+(Δ-δ_AC+δ)^2) t/2).
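A minimal numerical sketch of this detuning-averaged Rabi model is given below; the trapezoidal integration over the Gaussian Overhauser distribution and the example parameter values are illustrative assumptions, not the fitting code used for the figure.

# Sketch: evaluate the chevron model C(t, Δ) by integrating the Rabi signal over a
# Gaussian distribution of Overhauser shifts. Parameter values are illustrative.
import numpy as np

def chevron(t_us, delta, omega, sigma_oh, delta_ac=0.0, n_grid=2001):
    # All frequencies are angular (rad/us); t_us is the drive time in microseconds.
    d = np.linspace(-6 * sigma_oh, 6 * sigma_oh, n_grid)          # grid for the shift δ
    weight = np.exp(-d**2 / (2 * sigma_oh**2)) / (np.sqrt(2 * np.pi) * sigma_oh)
    det = delta - delta_ac + d
    omega_eff = np.sqrt(omega**2 + det**2)
    integrand = (omega**2 / omega_eff**2) * np.sin(omega_eff * t_us / 2.0)**2
    return np.trapz(weight * integrand, d)

# Example: Ω/2π = 8.9 MHz, σ_OH/2π = 1 MHz, on-resonance signal after 0.1 μs.
two_pi = 2 * np.pi
print(chevron(t_us=0.1, delta=0.0, omega=two_pi * 8.9, sigma_oh=two_pi * 1.0))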
Fitting the data to Eq. <ref>, we find a perfect match for and a small frequency offset of δ_AC=-1.61 (see Fig. <ref>). The larger broadening with respect to the fully cooled state can be explained by the fact that the Rabi pulses have a much longer duration than the Ramsey sequence: the narrowed nuclear spin distribution following quantum-sensing-based cooling is disturbed by the Rabi experiment, leading to an increase in σ_OH with respect to the maximally cooled case of as extracted from an FFT of the Ramsey experiment.
The frequency offset δ_AC arises as a consequence of the AC-Stark effect, which can occur if the couplings of the two transitions of the lambda system are slightly imbalanced (e.g. if the rotation laser polarisation is not perfectly circular). During cooling, the frequency is locked at f_c+δ_AC, which leads to a small offset in detuning <cit.>. Here, cooling was performed at Ω = 2π×17, while the chevron was recorded at Ω = 2π×8.9.
Note that the experimental data were acquired in two runs, the first for Rabi oscillations up to , the second for Rabi oscillations from t=400 to 480 in order to resolve the fourth oscillation. This is the origin of the deviation between the model and experiment for : there is a slight change in power and cooling performance between the two runs (see Fig. <ref>(b)).
§.§ Rabi decay after quantum-sensing-based cooling
The decay envelope of the Rabi oscillations can take different forms <cit.>. We fit the data to exp[-(t/T_2^Rabi)^α] with α a free fit parameter. However, it is difficult to distinguish unambiguously between a Gaussian () and an exponential () decay: the residuals of the Gaussian and exponential fits are very similar. We decided to use the exponential fit to make a comparison to previous experiments <cit.>. Comparing Rabi oscillations with and without quantum-sensing-based cooling (Fig. <ref>(a,b)), we find an increase in the quality factor from to 30.3, consistent with a reduction of hyperfine-interaction-related decay.
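The comparison described above can be sketched as follows, fitting the envelope with the stretch exponent α fixed to 1 (exponential) or 2 (Gaussian) and comparing the residuals; the synthetic data and parameter values are placeholders rather than the measured envelope.

# Sketch: fit a decay envelope to exp[-(t/T2)^alpha] with alpha fixed to 1 or 2
# and compare the sum of squared residuals. Data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def envelope(t, a, t2, alpha):
    return a * np.exp(-(t / t2) ** alpha)

t = np.linspace(0.01, 2.0, 80)
rng = np.random.default_rng(1)
data = envelope(t, 1.0, 0.9, 1.3) + rng.normal(0, 0.02, t.size)

for alpha in (1.0, 2.0):
    popt, _ = curve_fit(lambda t, a, t2: envelope(t, a, t2, alpha), t, data, p0=(1.0, 1.0))
    res = np.sum((data - envelope(t, *popt, alpha)) ** 2)
    print(f"alpha={alpha}: T2={popt[1]:.2f}, residual={res:.4f}")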
§ DECOUPLING SEQUENCES
The CPMG pulse sequence potentially allows the electron spin to be decoupled from nuclear noise, enhancing the spin T_2 time <cit.>. The CPMG sequence is shown in Fig. <ref>(a): after a π/2_x-pulse and τ/2 delay, N_π π_y-refocusing pulses separated by τ were applied, followed by another τ/2 delay and a π/2_x-pulse. We fit the experimental data (see Fig. <ref>) with stretched exponentials exp[-(t/T_2^CPMG)^α] and plot T_2^CPMG against the number of applied refocusing pulses N_π, where . Using a power-law fit, we find a T_2 time scaling of N_π^γ with (see Fig. <ref>(b,c)). The power-law fit is insensitive to the exponent α of the stretched exponential and gives insight into the noise sources in the environment <cit.>. is similar to the value obtained in Ref. <cit.> () for which up to 81 decoupling pulses were applied to droplet-etched GaAs QDs. Ref. <cit.> reports for gate-defined GaAs QDs and a crossover to for large N_π. On increasing N_π, no change of γ was observed in the experiments presented here.
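The power-law scaling quoted above can be extracted with a simple log-log fit, as in the sketch below; the pulse numbers and T_2 values are illustrative placeholders, not the measured data.

# Sketch: extract the CPMG scaling exponent γ from T2(N_pi) = A * N_pi**gamma
# by a linear fit in log-log space. Values are illustrative.
import numpy as np

n_pi = np.array([1, 2, 4, 8, 16, 32])                  # number of refocusing pulses
t2 = np.array([3.0, 4.5, 6.8, 10.1, 15.0, 22.5])       # fitted T2^CPMG (example values)

gamma, log_a = np.polyfit(np.log(n_pi), np.log(t2), 1)
print(f"T2 ~ N_pi^{gamma:.2f} (prefactor {np.exp(log_a):.2f})")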
The application of more refocusing pulses was not possible due to a reduction in readout contrast as more pulses were applied. This is due to imperfect pulse calibrations and the limit imposed by the electron-spin relaxation time T_1. These problems can be mitigated with more precise calibration and with a new sample with a larger tunnel barrier to the back contact, respectively.
| http://arxiv.org/abs/2307.03382v1 | 20230707044356 | Rationality and Behavior Feedback in a Model of Vehicle-to-Vehicle Communication | ["Brendan Gould", "Philip Brown"] | cs.GT | ["cs.GT"] |
Vehicle-to-Vehicle (V2V) communication is intended to improve road safety through distributed information sharing; however, this type of system faces a design challenge: it is difficult to predict and optimize how human agents will respond to the introduction of this information.
Bayesian games are a standard approach for modeling such scenarios; in a Bayesian game, agents probabilistically adopt various types on the basis of a fixed, known distribution.
Agents in such models ostensibly perform Bayesian inference, which may not be a reasonable cognitive demand for most humans.
To complicate matters, the information provided to agents is often implicitly dependent on agent behavior, meaning that the distribution of agent types is a function of the behavior of agents (i.e., the type distribution is endogenous).
In this paper, we study an existing model of V2V communication, but relax it along two dimensions: first, we pose a behavior model which does not require human agents to perform Bayesian inference; second, we pose an equilibrium model which avoids the challenging endogenous recursion.
Surprisingly, we show that the simplified non-Bayesian behavior model yields the exact same equilibrium behavior as the original Bayesian model, which may lend credibility to Bayesian models.
However, we also show that the original endogenous equilibrium model is strictly necessary to obtain certain informational paradoxes; these paradoxes do not appear in the simpler exogenous model.
This suggests that standard Bayesian game models with fixed type distributions are not sufficient to express certain important phenomena.
§ INTRODUCTION
As technology becomes more omnipresent in today's society, technological solutions are being developed for a broad range of applications.
These applications increasingly include areas that have complex interactions with human society, such as the Internet of Things (IoT) and smart infrastructure concepts like vehicle-to-vehicle (V2V) communication.
These interactions present a unique challenge to engineers, as prior work has shown that naively implemented solutions, even those that intuitively seem helpful, can unintentionally worsen the problems they were designed to solve <cit.>.
In particular, we consider the context of a traffic congestion game, where it is commonly known that selfish individual behavior is not socially optimal <cit.>.
Prior work has considered various mechanisms to influence agents to choose socially optimal behaviors, such as financially incentivizing desired behaviors <cit.>.
Bayesian persuasion attempts to influence agents through information design, by strategically revealing or concealing information to change the posterior beliefs of these agents <cit.>.
For example, one goal of V2V technology is to improve driver safety by broadcasting warning signals when a road hazard is encountered.
When a driver receives such a warning (even if it is known to occasionally be incorrect), they have a higher degree of belief that the road is unsafe, and are therefore more likely to drive carefully.
However, there are significant limitations to information design.
First, Bayesian persuasion places stringent rationality assumptions on human agents.
Human decision making is not consistently utility-maximizing <cit.>, and can be affected by personal biases <cit.> or even by different representations of equivalent information <cit.>.
Furthermore, formal statistical statements, and in particular Bayes' Theorem, are often misunderstood by humans, even those in highly educated positions such as doctors <cit.>.
Assuming that human agents can quickly and accurately perform this calculation is likely unrealistic.
Second, it is often non-trivial to design the information sharing policy.
Prior work has shown that full information sharing is not always optimal, and may even be worse than no information sharing <cit.>.
A compounding factor in conducting this analysis is model complexity: in many applications, the information to be shared is implicitly a function of agent behavior; this adds additional complexity to the optimization problem.
Our work is designed to address both of these issues, using the context of a congestion game where V2V cars are able to share information about accidents, previously introduced in <cit.>.
We begin by posing a novel model of agent decision-making which does not require agents to perform Bayesian inference.
Surprisingly, we show that in this non-Bayesian model, equilibrium behavior exactly matches that of the original Bayesian model.
This transformation allows us to relax the rationality expectations on human drivers, potentially improving the real-world descriptive power of the model.
In addition, it allows us to re-frame the original model of a Bayesian game of incomplete information to a non-Bayesian game of imperfect information, potentially opening new avenues for analysis.
This is reminiscent of Harsanyi's classic work <cit.>, but we believe that our characterization is not a direct consequence of this prior work.
In particular, our setting has non-atomic agents, and we allow the distribution of agent types to vary endogenously, neither of which is considered in <cit.>.
Next, we investigate the relationship between model complexity and expressiveness by considering two classes of models.
The simpler approach assumes that the probability of an accident (and thus the distribution of which agents receive which types of signals) is a constant model parameter, unaffected by an emergent behavior (i.e. it is exogenous).
Note that this is the standard approach in information design problems in the literature; road hazards or highway delay characteristics are almost universally assumed to be drawn from some fixed, known distribution <cit.>.
However, one would intuitively expect the probability of an accident to depend on the behavior of drivers, where more reckless behaviors make accidents more likely.
Therefore, <cit.> considers models with an endogenous accident probability, expressing it as a function of equilibrium behavior.
This creates a complex recursive relationship where accident probability varies in response to driver behavior, which varies in response to accident probability.
In the present paper, we show that even though the simpler exogenous models are easier to analyze, they are qualitatively different and cannot describe the same phenomena as the endogenous models.
In particular, when the accident probability is endogenous, a paradox can occur in which social cost increases with information sharing.
On the other hand, this paradox never occurs in the simpler (more standard) exogenous model.
This suggests that in some circumstances, the current popular modeling framework of Bayesian games with fixed agent type distributions may be insufficient to express important phenomena.
§ MODEL
§.§ General Setup
Our model consists of a non-atomic, unit mass of agents (drivers) interacting on a single road.
On this road, traffic accidents either occur () or do not occur ().
Throughout, we use ℙ(E) to represent the probability of event E.
Drivers are able to choose between the actions of driving carefully () or recklessly ().
Intuitively, careful drivers consistently choose slower, safer driving behaviors such as signaling before changing lanes, while reckless drivers choose faster, riskier behaviors.
Reckless drivers become involved in existing accidents and experience an expected cost of r>1; however, careful drivers regret their caution if an accident is not present and experience a regret cost of 1.
These costs are collected in this matrix:
             Accident    No Accident
  Careful        0            1
  Reckless       r            0
Each driver has type τ∈, and a set of strategies .
A strategy s ∈ for a driver is a procedure to choose which action to play as a function of the information known to the agent.
Let represent the mass of drivers of type τ choosing strategy s ∈, and all agents' behavior is described by the tuple x = ()_τ∈, s ∈.
We use for the total mass of drivers of type τ, so that ∑_s ∈ =.
A fraction y of agents drive cars equipped with V2V communication technology.
Cars with this technology can autonomously detect accidents, and broadcast signals about them.
If an accident occurs, a “true positive” signal is broadcast with probability t(y); otherwise, a “false positive” signal is broadcast with probability f(y) < t(y).
Any signals that are broadcast are received by all V2V drivers.
Counter-intuitively, sharing perfect information about the environment can make parts or all of the population worse off.
Therefore, it may sometimes be optimal for administrators of V2V technology to withhold information from drivers; <cit.> showed that this is the case for a specific instantiation of a much more general class of models considered in this work.
Accordingly, we introduce the parameter β to describe the information quality of V2V technology.
In the event that a warning signal is broadcast (), a V2V car may not always display a warning signal () to its driver; it will do so with probability β = ℙ( | ) ∈ [0, 1].
Therefore, we have that
ℙ() = β(ℙ()t(y) + (1-ℙ())f(y)).
The system planner performs information design on the value of β to minimize accident probability and social cost.
An attractively simple approach to analysis is to let the probability of an accident be a constant that is unaffected by social behavior; this approach has been used previously in the literature <cit.>.
We call this case an exogenous accident probability, and refer to it as simply ℙ() = ∈ [0, 1].
However, this assumption seems inaccurate to the real world, since we expect that reckless driving habits would lead to more frequent accidents.
Therefore, we wish to allow accident probability to be endogenously affected by driver behavior.
To that end, we write d ∈ [0, 1] to denote the overall fraction of drivers choosing to drive recklessly, and p(d) for the resulting probability that an accident occurs.
We assume that p is a continuous, strictly increasing function, so that more reckless drivers cause accidents to be more likely.
Intuitively, strategies in which drivers choose to drive recklessly more often should cause accidents to become more likely; to measure this, let ρ(τ, s, P) be the probability that a driver of type τ∈ choosing strategy s ∈ is reckless when the probability of an accident is P.
Then, the mass of reckless drivers of type τ choosing strategy s when accident probability is P is ρ(τ, s, P).
Then, the endogenous accident probability resulting from a behavior tuple x for driver types is described implicitly as a solution to the recursive relationship:
(x) = p(∑_τ∈∑_s ∈ρ(τ, s, (x)) ).
It was shown in <cit.> that this relationship always has such a solution.
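To make the recursion concrete, the following sketch solves it by fixed-point iteration; the functional forms of p, t, and f, and the assumed behavior rule (non-V2V drivers reckless, V2V drivers trusting the signal) are illustrative choices, not part of the model specification above.

# Sketch: solve the recursive definition P = p(fraction of reckless drivers given P)
# by fixed-point iteration. The functions p, t, f and the behavior rule below are
# illustrative assumptions.
def solve_endogenous_accident_prob(reckless_fraction, p, tol=1e-10, max_iter=10_000):
    P = 0.5
    for _ in range(max_iter):
        P_new = p(reckless_fraction(P))      # one level of the recursion
        if abs(P_new - P) < tol:
            return P_new
        P = P_new
    return P

# Example: penetration y, signal quality beta, detection rates t(y), f(y).
y, beta, t_y, f_y = 0.4, 1.0, 0.8, 0.1
p = lambda d: 0.05 + 0.4 * d                 # assumed increasing p(d)
# Assumed behavior: non-V2V drivers are reckless; V2V drivers trust the signal, so
# the fraction of them that sees no warning, 1 - beta*(P*t + (1-P)*f), is reckless.
reckless = lambda P: (1 - y) + y * (1 - beta * (P * t_y + (1 - P) * f_y))
print(solve_endogenous_accident_prob(reckless, p))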
We consider two classes of games, distinguished by either exogenous or endogenous accident probabilities.
We write the former as the tuple G̅ =(β, y, r, ), and the latter as G = (β, y, r).
For both types of games, equilibrium conditions come from the standard Nash idea: a behavior tuple x is an equilibrium of G if for any type τ and any strategy s ∈,
> 0 (s; x) = min_s ∈(s; x).
That is, if any driver is choosing a strategy, its cost to them is minimal.
Finally, we define social cost as the expected cost incurred by the entire population:
(x) = ∑_τ∈∑_s ∈(s, x).
We abuse notation and write (G) to mean (x) and (G) (or (G̅)) to mean (x) where x is an equilibrium of the game G (or G̅).
[It can be shown that any two equilibria of the same game have equal accident probability and social cost.
An intuitive explanation for this is: every game has a unique equilibrium unless at least one type of drivers is indifferent between multiple strategies.
If this is the case, then these strategies have equal cost, meaning the relative frequencies of drivers choosing each strategy are in some sense irrelevant.
See the proof of Lemma <ref> for a formal treatment of this idea in exogenous games, and Lemma <ref> for the same in endogenous games.
]
§.§ Driver Decision Models
We consider two interpretations for the effect of V2V technology on the behavior of V2V drivers.
§.§.§ Bayesian Agents
The first is a more standard approach for a Bayesian game, and is studied in Section <ref> (additionally, <cit.> analyzes a specific case of this approach in great detail).
There are three types of agents: non-V2V drivers, V2V drivers who receive a signal, and V2V drivers who do not receive a signal.
Non-V2V drivers must use the prior probability of an accident to calculate their expected costs, but we allow V2V drivers to use the posterior probability after the signal realization.
We call this the Bayesian set of types, and write it = {, , } for non-V2V, unsignaled V2V, and signaled V2V drivers respectively.
It can be quickly seen that the mass of drivers in each group is = 1-y, = ℙ()y, and = ℙ()y.
Each type has the strategies = = = {, }, with the accompanying cost functions:
(s;x) =
1-ℙ() if s=,
rℙ() if s=,
(s;x) =
1 - ℙ( | ) if s=,
r ℙ( | ) if s=,
(s;x) =
1 - ℙ( | ) if s=,
r ℙ( | ) if s=.
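The sketch below spells out these Bayesian decision quantities: the posterior accident probabilities after a warning or its absence, and the careful/reckless costs for each driver type. Parameter values are illustrative.

# Sketch of the Bayesian agents' decision quantities. Illustrative parameters only.
def bayesian_costs(P_A, beta, t_y, f_y, r):
    p_w = beta * (P_A * t_y + (1 - P_A) * f_y)           # probability a warning is shown
    post_w = P_A * beta * t_y / p_w if p_w > 0 else P_A  # P(accident | warning)
    post_no_w = P_A * (1 - beta * t_y) / (1 - p_w)       # P(accident | no warning)
    return {
        "non-V2V":         {"careful": 1 - P_A,       "reckless": r * P_A},
        "V2V, signaled":   {"careful": 1 - post_w,    "reckless": r * post_w},
        "V2V, unsignaled": {"careful": 1 - post_no_w, "reckless": r * post_no_w},
    }

print(bayesian_costs(P_A=0.2, beta=1.0, t_y=0.8, f_y=0.1, r=3.0))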
§.§.§ Non-Bayesian Agents
Alternatively, we consider a decision model where V2V drivers do not perform a Bayesian update (studied in Section <ref>).
In this case, the only driver types are non-V2V and V2V, written = {, }̌ with = 1-y and = y.
Instead of calculating a posterior, V2V drivers choose between trusting () the signal completely (assuming that the presence of a signal implies the existence of an accident, and vice versa), or ignoring it completely, using the prior probability to calculate an “assumed” cost of driving carefully () or recklessly ().
Drivers have strategies = {, } and = {, , }.
The cost functions for these strategies are:
(s; x) =
1 - ℙ() if s=,
rℙ() if s=.
(s; x) = ℙ(∩) + rℙ(∩) if s = ,
1 - ℙ() if s=,
rℙ() if s=.
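For comparison with the Bayesian costs above, the following sketch evaluates the non-Bayesian menu, where the trust strategy pays the regret cost on a false warning and the accident cost on an unsignaled accident; parameters are again illustrative.

# Sketch of the non-Bayesian V2V driver's strategy costs. Illustrative parameters only.
def non_bayesian_costs(P_A, beta, t_y, f_y, r):
    # trust: cost 1 on (no accident AND warning), cost r on (accident AND no warning)
    cost_trust = (1 - P_A) * beta * f_y + r * P_A * (1 - beta * t_y)
    return {"trust": cost_trust, "careful": 1 - P_A, "reckless": r * P_A}

print(non_bayesian_costs(P_A=0.2, beta=1.0, t_y=0.8, f_y=0.1, r=3.0))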
The timeline of games with non-Bayesian agents is shown in Figure <ref>.
Note the differences between this timeline and that in <cit.> (describing the behavior of Bayesian agents).
Non-Bayesian V2V agents must commit to a strategy before the signal realization, and can only use the information provided by the signal if they commit to trusting it completely.
§ MAIN RESULTS
Our first result is that Bayesian games of incomplete information can be reinterpreted as imperfect information games, similar to the transformation in <cit.>.
The benefit of doing so is that it allows us to remove the heavy cognitive burden of Bayes' Theorem from agent calculations, producing a more credible model of human behavior.
Let G = (β, y, r) be a game with endogenous accident probability, and G̅ = (β, y, r, ) be an exogenous game.
Then,
(G̅) = (G̅),
and
(G) = (G),
(G) = (G).
Equation (<ref>), concerning the social cost of games with exogenous accident probability, is proven by Lemma <ref> in section <ref>.
Equations (<ref>) and (<ref>), analogous statements for games with endogenous accident probability, are proven by Lemma <ref> and Lemma <ref> respectively in section <ref>.
This completes the proof.
Intuitively, Theorem <ref> shows that allowing drivers to use Bayesian inference in their decisions does not gain any significant model expressiveness.
The same outcomes and paradoxes can be captured by a decision model where drivers choose between complete trust and complete ignorance of the signal's information.
This gives us freedom to analyze any game assuming either Bayesian or non-Bayesian drivers, whichever is more convenient.
Additionally, since the social costs induced by both models are equivalent, we now write simply (G) for the social cost of the game G, where G can be interpreted as either a Bayesian or non-Bayesian game.
Our second result concerns the modeling of accidents.
It seems reasonable to expect (and is indeed true) that using an endogenous accident probability significantly complicates model analysis.
Because of this, a natural question is to ask whether the same characteristics can be captured by a model using exogenous accident probability.
However, our next result shows that this is not the case.
It was shown in <cit.> that for certain games with endogenous accident probability, equilibrium social cost can paradoxically be increasing with information quality.
By contrast, no such social cost paradox is possible when using an exogenous accident probability.
Fix y∈ [0, 1], r > 1, and ∈ [p(0), p(1)].
Consider β_1, β_2 ∈ [0, 1] with β_1 < β_2.
Let G̅_1 = (β_1, y, r, ) and G̅_2 = (β_2, y, r, ) be signaling games with exogenous accident probability.
Then, it is always true that
(G̅_1) ≥(G̅_2).
Similarly, Let G_1 = (β_1, y, r) and G_2 = (β_2, y, r) be signaling games with endogenous accident probability.
Counter-intuitively, there exist parameter combinations where
(G_1) < (G_2).
We first consider games between Bayesian drivers.
In this case, Lemma <ref> proves (<ref>), and Lemma <ref> shows that (<ref>) can be satisfied.
Then, by Theorem <ref>, these results also hold for games between non-Bayesian drivers.
This completes the proof.
§ BAYESIAN AGENTS
We first consider a scenario where V2V drivers perform Bayesian updates on the probability of an accident using the warning signal realization.
Analysis proceeds as follows: Lemma <ref> establishes necessary conditions of any equilibrium (with either exogenous or endogenous accident probability) by describing behavior when drivers have a strict preference for one strategy.
This result is sufficient to show that social cost is monotonically decreasing with information quality for games with exogenous accident probability, which we do in section <ref> through Lemma <ref>.
Finally, in section <ref>, Lemma <ref> gives the counter-intuitive result that social cost may sometimes be increasing with information quality, but only if accident probability is endogenous.
§.§ Necessary Equilibrium Conditions
We divide V2V drivers into two groups by whether or not they have seen a warning signal.
This implies the set of driver types = {, , }.
Using the calculated posterior probability, V2V drivers choose between the pure strategies of always driving carefully or always driving recklessly.
The following thresholds are useful in our analysis; we define the following shorthand to reference them:
:= f(y)/(r t(y)+f(y)),
:= 1/(1+r),
:= (1-β f(y))/(1 + r(1-β t(y)) - β f(y)),
where the first threshold is strictly smaller than the second, which in turn is at most the third:
< ≤ .
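A small numerical sketch of these thresholds is given below, interpreting them (in order) as the indifference points of signaled V2V, non-V2V, and unsignaled V2V drivers, which follows from equating the careful and reckless costs, and checking the stated ordering; the parameter values are illustrative.

# Sketch: compute the three decision thresholds on the accident probability and
# check the ordering (signaled < prior-only <= unsignaled). Illustrative parameters.
def thresholds(beta, t_y, f_y, r):
    thr_signaled = f_y / (r * t_y + f_y)
    thr_prior = 1.0 / (1.0 + r)
    thr_unsignaled = (1 - beta * f_y) / (1 + r * (1 - beta * t_y) - beta * f_y)
    return thr_signaled, thr_prior, thr_unsignaled

sig, prior, unsig = thresholds(beta=0.9, t_y=0.8, f_y=0.1, r=3.0)
assert sig < prior <= unsig
print(sig, prior, unsig)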
Note that (by Bayes' Theorem and (<ref>)-(<ref>)):
(; x) { < , = , > } (; x)   ⟺   ℙ() { < , = , > } ,
(; x) { < , = , > } (; x)   ⟺   ℙ() { < , = , > } .
We use this notation to mean that any of the relationships between the first expressions is equivalent to the corresponding relationship between the later expressions.
Equality and both inequalities are preserved.
Using (<ref>)-(<ref>), we are now equipped to state the necessary conditions of any equilibrium:
For any equilibrium x of a game with either endogenous or exogenous accident probability,
^ =
0 if ℙ () > ,
1-y if ℙ () < ,
^ =
0 if ℙ () > ,
ℙ()y if ℙ () < ,
^ =
0 if ℙ () > ,
ℙ()y if ℙ () < .
Each of these implications follows immediately from (<ref>), (<ref>)-(<ref>), and (<ref>)-(<ref>).
Assume by way of contradiction that ℙ() >, but ^ > 0.
By (<ref>), ℙ( | ) > 1/1+r.
Then, (<ref>) gives that
Jvu(, x) = r ℙ( | ) > 1 - ℙ( | ) = (, x).
But this clearly contradicts (<ref>), since ^ > 0.
A proof of the remaining cases is accomplished in a similar manner.
§.§ Exogenous Crash Probability
We first assume that accident probability is defined as an exogenous constant .
Since accident probability is constant, it is very straightforward to compute social cost at equilibrium using (<ref>).
We claim that this social cost is non-increasing with information quality β.
Let β_1, β_2 ∈ [0, 1] with β_1 < β_2.
Consider games G̅_1 = (β_1, y, r, ) and G̅_2 = (β_2, y, r, ) between Bayesian agents with exogenous accident probability .
Then, (G̅_1) ≥(G̅_2).
Let and represent the threshold for β_1 and β_2, respectively, and note that < since f(y) < t(y).
First, assume that > >.
By (<ref>), > >.
By Lemma <ref>, we then have that ^ = 0, ^ = 0, and ^ = 0 for any equilibrium of G̅_1 or G̅_2.
Thus, (<ref>)-(<ref>) substitute into (<ref>) to give
(G̅_1) = 1-≥(G̅_2),
and we are finished in this case.
Now, assume that = (implying >).
By (<ref>), < <, so by Lemma <ref>, ^ = 0 and ^ = 0 in any equilibrium of G̅_1 or G̅_2.
Let x_1 be an equilibrium of G̅_1 and x_2 one of G̅_2.
By the above, (G̅_1) = 1-.
Since =, (<ref>) and simple algebra give that (, x_2) = (, x_2).
Thus, ∑_s ∈(s, x_2) = (, x)ℙ()y.
Then, (<ref>) simplifies to
(G̅_1) = 1 - ≥ 1 - = (G̅_2),
and we are again finished.
A similar technique is used when = <, < <, =, < <, =, or <, completing the proof.
Lemma <ref> shows that providing information of a higher quality will never increase social cost when accident probability is exogenous, which is perhaps the expected result.
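The monotonicity can also be illustrated numerically: the sketch below computes the equilibrium social cost of an exogenous game as a function of β, with each driver group best-responding to its belief; the functional forms and parameters are illustrative.

# Sketch: equilibrium social cost of the exogenous game as a function of beta, with
# Bayesian agents best-responding to their posteriors. Illustrative parameters only.
import numpy as np

def social_cost_exogenous(beta, y, r, t_y, f_y, P):
    p_w = beta * (P * t_y + (1 - P) * f_y)               # P(warning displayed)
    post_w = P * beta * t_y / p_w if p_w > 0 else P
    post_nw = P * (1 - beta * t_y) / (1 - p_w)
    best = lambda q: min(1 - q, r * q)                   # best-response cost to belief q
    return ((1 - y) * best(P)
            + y * p_w * best(post_w)
            + y * (1 - p_w) * best(post_nw))

y, r, t_y, f_y, P = 0.5, 3.0, 0.8, 0.1, 0.2
costs = [social_cost_exogenous(b, y, r, t_y, f_y, P) for b in np.linspace(0, 1, 11)]
assert all(c1 >= c2 - 1e-12 for c1, c2 in zip(costs, costs[1:]))   # non-increasing in beta
print(costs)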
However, this result no longer holds in a model with an endogenous accident probability.
§.§ Endogenous Crash Probability
We now consider a model of Bayesian drivers with an endogenous accident probability.
Recall that agents have one of three types: non-V2V drivers, unsignaled V2V drivers, and signaled V2V drivers.
Drivers of each type τ∈ have access only to the pure strategies of always driving carefully or always driving recklessly, where ρ(τ, , (x)) = 1 and ρ(τ, , (x)) = 0.
Therefore, (<ref>) simplifies to
ℙ() = (x) = p(^ + ^ + ^).
In this case, the model is identical to the one described in <cit.>.
Since it was thoroughly studied in that work, we simply reference the previous result.
There exist games between Bayesian drivers with endogenous accident probability such that social cost at equilibrium is increasing with β.
§ NON-BAYESIAN AGENTS
We now discuss games with agents who do not perform explicit Bayesian updates.
Analysis generally follows the same path as section <ref>, with some additional components.
Lemma <ref> describes the behavior of agents who have a strict preference for one strategy, giving necessary conditions for an equilibrium of any game between non-Bayesian agents, whether accident probability is exogenous or endogenous.
We then focus on games with exogenous accident probability.
Lemma <ref> shows that for such games, social cost is decreasing with information quality.
Then, Lemma <ref> compares the consequences of Bayesian and non-Bayesian agents in these games, and shows that equilibrium social cost is the same regardless of which agent behaviors are used.
Finally, we consider games with endogenous accident probability.
Lemmas <ref> and <ref> characterize equilibria of these games; the former describes how driver behavior is determined by accident probability, and the latter gives a relationship between game parameters and accident probability.
With this, driver behavior and equilibrium accident probability can be determined exclusively from game parameters.
Finally, Lemmas <ref> and <ref> compare the accident probability and social cost induced by Bayesian agents to those of non-Bayesian agents, and find that there is no difference.
§.§ Necessary Equilibrium Conditions
To model the behavior of non-Bayesian agents, we use the alternate set of driver types = {, }̌, representing non-V2V drivers and V2V drivers, respectively.
V2V agents are still able to condition their behavior on the realization of a warning light, but must either completely trust or ignore the signal, and use an “assumed” cost equal to that of non-V2V drivers for decision making when ignoring the signal.
Again, we use thresholds to describe behavior.
By (<ref>),
(; x) { < , = , > } (; x)   ⟺   ℙ() { < , = , > } ,
(; x) { < , = , > } (; x)   ⟺   ℙ() { < , = , > } .
Crucially, Bayes' Theorem was never necessary for these calculations, yet they give the same behavior thresholds as (<ref>) and (<ref>) in Section <ref>.
This gives a very compelling argument for our claim that Bayesian and non-Bayesian behaviors are equivalent.
Once it is known that both models are in some sense making the same decisions, it is unsurprising that the resulting equilibria are identical.
We now make an analogous statement to Lemma <ref> using (<ref>) and (<ref>).
For any equilibrium x of a game,
^ =
0 if ℙ () > ,
1-y if ℙ () < ,
^ =
0 if ℙ () > ,
y if ℙ () < ,
^ =
0 if ℙ () < ,
y if < ℙ () < ,
0 if ℙ () >
This follows directly from (<ref>), (<ref>)-(<ref>), and (<ref>)-(<ref>).
Assume by way of contradiction that < ℙ() <, but ^ < y.
By (<ref>), (; x) < (; x), and by (<ref>), (; x) < (; x).
Therefore, by (<ref>), ^ = ^ = 0.
But since = y, this implies ^ = y, a contradiction.
A proof of the remaining cases can be accomplished in a similar manner.
§.§ Exogenous Crash Probability
As before, we begin analysis by assuming an exogenous, constant accident probability .
Again, we see that models making this assumption are qualitatively different from those allowing for an interdependence between driver behavior and accident probability.
Let β_1, β_2 ∈ [0, 1] with β_1 < β_2.
Consider games G̅_1 = (β_1, y, r, ) and G̅_2 = (β_2, y, r, ) between non-Bayesian agents with exogenous accident probability .
Then, (G̅_1) ≥(G̅_2).
This can be shown using a technique identical to that of the proof of Lemma <ref>.
That is, social cost is again non-increasing with information quality β for these types of games.
Furthermore, we show in Lemma <ref> that social cost is unaffected by the change in the decision model of V2V drivers:
For any game G̅ = (β, y, r, ) with exogenous accident probability ,
(G̅) = (G̅).
We again prove this in cases.
First, assume that < <.
By Lemma <ref>, ^ = 0, ^ = ℙ()y, and ^ = 0.
Similarly by Lemma <ref>, ^ = 0, ^ = 0, and ^ = y (note that the value of ^ is unchanged regardless of which of the two Lemmas is used to derive it).
Then, (<ref>) simplifies to give
(G̅) = (G̅) =
(1-)(1-y) + r (1-β t(y)) y + ((1-)β f(y))y,
completing the proof in this case.
The same technique gives the desired result if <, < <, or <.
Now, consider the case where =.
By Lemma <ref>, ^ = 0, and by Lemma <ref>, ^ = 0.
By (<ref>) (or equivalently (<ref>)), (; x) = (; x), so ∑_s ∈(s, x) = (, x)(1-y).
If P̅ =, then a very similar argument implies that ∑_s ∈(s, x) = (, x)(1-ℙ())y, and ∑_s ∈(s, x) = (, x)y.
Otherwise, by (<ref>), < <, meaning ^ = (1-ℙ())y by Lemma <ref> and ^ = y by Lemma <ref>.
In any case, (<ref>) again simplifies to
(G̅) = (G̅) =
(1-)(1-y) + r (1-β t(y)) y + ((1-)β f(y))y,
which is the desired result.
The above technique also suffices in the cases where = and =, completing the proof.
§.§ Endogenous Crash Probability
Finally, we introduce an endogenous accident probability to the non-Bayesian decision model.
Both non-V2V and V2V drivers have access to the pure strategies of always being careful and always being reckless (using the prior probability of an accident to compute expected cost), but V2V drivers additionally have the option to fully “trust” the signal, believing the presence of a warning light implies the existence of an accident and the inverse.
This creates the strategy spaces = {, } and = {, , }, where ρ(τ, , (x)) = 1 and ρ(τ, , (x)) = 0 for each τ∈, and ρ(, , (x)) = ℙ().
Then, (<ref>) simplifies to
ℙ() = (x) = p(^ + ^ + ℙ()^).
Recall that Lemma <ref> specifies the equilibrium behavior when agents have a strict preference for one strategy.
We now provide a description of this behavior when agents are indifferent between two strategies.
For any game G = (β, y, r), a non-Bayesian behavior tuple x is a signaling equilibrium if it satisfies Lemma <ref> and the following hold:
ℙ() = ^ = p^-1() - ℙ()y
ℙ() = ^ = y
ℙ() = ^ = 0
ℙ() = ^ = p^-1()/ℙ()
Furthermore, the behavior tuple x satisfying the above conditions is an essentially unique equilibrium for G, meaning that any equilibrium x' must satisfy the above conditions or have (x') = (x).
First, we shall show that x is an equilibrium.
To that end, consider the case where (G) >.
By basic algebra, 1-ℙ() < rℙ(), and by (<ref>), (, x) < (; x).
Then, note that ^ = 0 (respectively ^ = 1-y) satisfies (<ref>).
An identical technique suffices in the cases where (G) = or (G) <.
Using (<ref>) in the same manner extends this result to the behavior of V2V drivers.
Therefore, x is an equilibrium for G.
It remains to show that any other equilibrium x' is essentially identical to x.
Note that unless (x') =, (x') =, or (x') =, x' satisfies the above conditions by Lemma <ref> and we are finished.
But clearly all equilibria with (x) = have equivalent accident probability (and similarly for and ), so we are finished.
Using this result, we partition our parameter space.
Each of the following sets corresponds to a particular “family” of equilibria, restricting the accident probability of games in that set.
We define these families in equations (<ref>)-(<ref>).
These partitions allow us to describe equilibrium accident probability with a finer granularity.
(This is proved using a technique identical to the one used for Lemma 4.1 in <cit.>.)
For any non-Bayesian game G = (β, y, r), G ∈∪_i =1^7 E_i, and
G ∈ E_1 (G) = p(0)
G ∈ E_2 (G) =
G ∈ E_3 < (G) <
G ∈ E_4 (G) =
G ∈ E_5 < (G) <
G ∈ E_6 (G) =
G ∈ E_7 (G) = p(1)
Consider any game G = (β, y, r).
If G ∉E_1 and G ∉E_2, then we must have that
p((1 - β (t(y)-f(y)) - β f(y))y) < .
Similarly, if G ∉E_7 and G ∉E_6, then we must have that
p((1 - β (t(y)-f(y)) - β f(y))y) < .
Therefore, if G ∉E_3 and G ∉E_5,
p((1 - β (t(y)-f(y)) - β f(y))y) ≤ and
≤ p(1 - (β (t(y)-f(y)) + β f(y))y).
But this implies that G ∈ E_4.
Therefore, G ∈⋃_i=1^7 E_i, as desired.
We shall now prove that G ∈ E_3 < (G) <.
For brevity, the other claims are omitted, but can be proved with an identical method.
Let G ∈ E_3, then
p((1 - β (t(y)-f(y)) - β f(y))y) < and
< p((1 - β (t(y)-f(y)) - β f(y))y).
If β t(y) = 0, the above conditions give a contradiction, so β t(y) > 0.
Assume by contradiction that (G) ≤, and let x be an equilibrium of G.
Then, since β t(y) > 0, (G) <, so by (<ref>) and (<ref>), ^ = 0.
This gives that ^ + ^ = = y, meaning ^ + ℙ() ≥ℙ()y.
Further, it is always true that ^≥ 0.
Therefore, By (<ref>) and the fact that p is increasing,
(G) = p(^ + ^ + ℙ()^)
≥ p(ℙ())
= p(y-((G)(t(y)-f(y))β + f(y)β)y)
Then, starting with our contradiction hypothesis, we perform algebraic operations to take (G) “up” one level of its recursive definition in (<ref>).
This gives that
y - ((G)(t(y)-f(y))β + f(y)β)y ≥
y - ((t(y)-f(y))β + f(y)β)y,
and since p is increasing,
p(y - ((G)(t(y)-f(y))β + f(y)β)y) ≥
p(y - ((t(y)-f(y))β + f(y)β)y).
But then we have the following chain of inequalities
< p(y - ((t(y)-f(y))β + f(y)β)y)
≤ p(y - ((G)(t(y)-f(y))β + f(y)β)y)
≤(G) ≤,
which is a contradiction.
This completes the proof.
We are now equipped to describe the essentially unique equilibrium of any game G; Lemma <ref> gives a restriction on equilibrium accident probability from game parameters, and Lemmas <ref> and <ref> yield driver behavior under this restriction.
This behavior is sufficient to show that the accident probability induced by non-Bayesian drivers is the same as that of Bayesian drivers.
For any game G = (β, y, r) with endogenous accident probability,
(G) = (G).
Lemma <ref> shows that every game G belongs to one of the equilibrium families defined by (<ref>)-(<ref>), meaning we can show this simply using cases.
Equilibrium accident probability is restricted as a function of game parameters by Lemma <ref> for non-Bayesian agents and by <cit.> for Bayesian agents.
Accident probability in both cases is restricted to a single value and we are immediately finished unless G ∈ E_3 or G ∈ E_5.
In the first case, note that Bayesian agents choose ^ = 0, ^ = 0, and ^ = ℙ()y as a consequence of <cit.>.
Similarly, non-Bayesian agents choose ^ = 0, ^ = 0, and ^ = y by Lemma <ref>.
Then, by (<ref>) and (<ref>),
(G) = p(ℙ()y) = (G),
which is the desired result.
This can be shown in a similar manner if G ∈ E_5, completing the proof.
For any game G = (β, y, r) with endogenous accident probability,
(G) = (G).
Again by Lemma <ref>, we show this by cases.
Note that for Bayesian agents, (<ref>) simplifies to
(G) = (; x) ^ + (; x) ^
+ (; x) ^ + (; x) ^
+ (; x) ^ + (; x) ^,
and for non-Bayesian agents it simplifies to
(G) = (; x) ^ + (; x)^
+ (; x) ^ + (; x) ^ + (;x) + ^.
Consider the case where G ∈ E_2.
Bayesian agents choose behaviors ^ = 0, ^ = 0, and ^ = p^-1() (as a consequence of <cit.>), and non-Bayesian agents choose ^ = 0, ^ = 0, and ^ = y by Lemma <ref>.
Then, both of the above simplify to give
(G) = r(1-β t(y))/(1-β f(y) + r(1-β t(y))) = (G),
which is the desired result.
A series of (tedious) algebraic calculations gives the same result in each of the other cases, completing the proof.
§ CONCLUSION
This work considered a class of models describing how human agents respond to road hazard information sharing.
We discussed two independent characteristics of these models, and their effects on equilibrium behavior of drivers.
We first showed that games of this kind with incomplete information can be equivalently interpreted as imperfect information games, removing an assumption on human rationality.
We used this fact to prove our main result, that models with an endogenous accident probability can describe scenarios that those with an exogenous accident probability cannot; namely, that social cost can be increasing with information quality.
Future work could consider additional alternative descriptions of human behavior, and possibly use a heterogeneous population of these different descriptions.
| http://arxiv.org/abs/2307.01641v1 | 20230704105317 | Magnetic field information in the near-ultraviolet Fe II lines of the CLASP2 space experiment | ["David Afonso Delgado", "Tanausú del Pino Alemán", "Javier Trujillo Bueno"] | astro-ph.SR | ["astro-ph.SR"] |
[email protected]
Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
Universidad de La Laguna, Dept. Astrofísica, E-38206, La Laguna, Tenerife, Spain
Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
Universidad de La Laguna, Dept. Astrofísica, E-38206, La Laguna, Tenerife, Spain
Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
Universidad de La Laguna, Dept. Astrofísica, E-38206, La Laguna, Tenerife, Spain
Consejo Superior de Investigaciones Científicas, Spain
We investigate theoretically the circular polarization signals induced by the Zeeman effect in
the Fe2 lines of the 279.3 - 280.7 nm
spectral range of the CLASP2 space experiment and their suitability to infer solar magnetic fields.
To this end, we use a comprehensive Fe2 atomic model
to solve the problem of the generation and transfer of polarized
radiation in semi-empirical models of the solar
atmosphere, comparing the region of formation of the Fe2 spectral lines with those of the
Mg2 h and k and the Mn1 resonance lines.
These are present in the same near ultraviolet (near-UV) spectral region and allowed
the mapping of the longitudinal component of the magnetic field (B_ L) through several
layers of the solar chromosphere in an active region plage.
We compare our synthetic intensity profiles with observations from the IRIS and CLASP2 missions,
proving the suitability of our model atom to characterize these Fe2 spectral lines.
The CLASP2 observations show two Fe2 spectral lines at 279.79 and 280.66 nm
with significant circular polarization signals. We demonstrate the suitability
of the Weak-Field Approximation (WFA) applied to the Stokes I and V profiles
of these Fe2 lines to infer B_ L in the plage atmosphere.
We conclude that the near-UV spectral region of CLASP2 makes it possible to determine B_ L
from the upper photosphere to the top of the chromosphere of active region plages.
§ INTRODUCTION
Understanding the structure and evolution of the magnetic field throughout the
solar atmosphere is key to understanding the transfer of energy from the bottom
of the photosphere
to the chromosphere and to the million-degree corona. It is thus clear that observations and
diagnostic tools allowing for a reliable and simultaneous inference of
the magnetic field in different layers of the solar atmosphere, from the lower photosphere to
the top of the chromosphere, just below the transition region to the corona, are necessary to
improve our understanding of the physical processes taking place in the atmosphere of our
closest star <cit.>.
Spectral lines encode information about the physical conditions of the plasma in their region
of formation in the solar atmosphere. Many of the lines that form in the outer layers
of the solar chromosphere are located in the UV region of the spectrum, which gives
this spectral region significant potential for magnetic-field diagnostics across the solar atmosphere.
For a recent review on the physics and diagnostic potential of UV spectropolarimetry,
see <cit.>.
In particular, two of the strongest and brightest spectral lines
in the near-UV spectrum, the Mg2 h and k
resonant doublet, are located at 280.35 and 279.64 nm, respectively. The line centers of
these spectral lines originate just below the transition region, while their wings form in deeper
layers between the lower
and middle chromosphere <cit.>. Consequently, these
spectral lines are among the most important for the study of the solar chromosphere.
Due to the significant opacity of the Earth's atmosphere at UV wavelengths,
these spectral lines can
only be observed from space. The first spectropolarimetric data on the
Mg2 h and k lines were obtained by the Ultraviolet Spectrometer and Polarimeter
<cit.> onboard the Solar Maximum Mission <cit.>,
which indicated the existence of significant scattering polarization signals
in the far wings of these lines <cit.>.
Since 2013, their intensity spectrum has been systematically observed with the Interface Region Imaging
Spectrograph mission <cit.>, whose results have improved our
knowledge about the thermal and dynamical structure of the upper layers of the solar chromosphere
<cit.>.
Motivated by several theoretical studies on the polarization of the Mg2 h and k lines around 280 nm
(, and ),
the Chromospheric LAyer SpectroPolarimeter <cit.>
suborbital mission was launched
on April 11 2019, achieving unprecedented observations of
the four Stokes parameters of this near-UV spectral region in an active region plage
close to the solar disk center and in a quiet Sun region near the limb.
The analysis of the quiet Sun spectral
profiles confirmed the theoretical predictions on the scattering
polarization signals <cit.>.
The application of the Weak-Field Approximation (WFA) to the circular polarization profiles observed in the plage region allowed for
the mapping of the longitudinal component of the magnetic field (B_ L) across different layers of
the solar chromosphere <cit.>, extended to the photosphere with simultaneous observations
by Hinode/SOT-SP <cit.>. Later, the application of the Tenerife Inversion Code <cit.>
to the CLASP2 Stokes I and V profiles observed in the plage target
allowed for the inference of a stratified model atmosphere, including
its thermal, dynamic, and magnetic structure <cit.>.
Even though the analysis of the CLASP2 data has been limited,
up to now, to the Mg2 and Mn1
lines, these observations show several other spectral lines with significant circular polarization signals.
Among these lines, two Fe2 spectral lines stand out: a relatively weak emission line at 279.79 nm and
an absorption line at 280.66 nm, just at the red edge of the CLASP2 spectral window.
Recent investigations have suggested that the Fe2 lines located
blueward of the
CLASP2 spectral region are of complementary interest for diagnosing
the magnetism of the solar chromosphere <cit.>.
A quantitative study of the polarization signals and magnetic sensitivity of the Fe2 spectral lines
in the near-UV region of the solar spectrum has been recently published by <cit.>,
but the Fe2 lines located in the CLASP2 spectral region were not included in their work.
The pioneering results of the CLASP2 mission have proven the capabilities
of the Mg2 h and k spectral lines to study the magnetic
field in the upper layers of the chromosphere.
There has been a growing interest in the planning and development of space missions
to perform full-Stokes observations of this spectral window, such as the Chromospheric Magnetism
Explorer <cit.>. The analysis of the lines from species other than Mg2 found in this spectral
region is critical to take full advantage of the possibilities that future space missions bring
to map the evolution of the magnetic field throughout the solar atmosphere.
In this paper we study the formation of the Fe2 spectral
lines in the CLASP2 spectral window, as well as
their suitability to infer B_ L and how they complement
the magnetic field information that can be inferred
from the Mg2 and Mn1 lines in the same spectral window <cit.>. In
Section <ref> we describe the atomic
models and our strategy to solve the radiation transfer (RT)
problem. In Section <ref> we analyze the
emergent synthetic intensity profiles, studying the suitability
of our model atom by comparing with observations
at different lines of sight (LOS).
In Section <ref> we study the formation of
the circular polarization profiles of the Fe2 lines
at 279.79 and 280.66 nm, and the suitability of the WFA
to infer B_ L. We then apply the WFA to the CLASP2
data of these Fe2 lines. Finally, in Section <ref> we present our conclusions.
§ FORMULATION OF THE PROBLEM
Despite the significant number density of Fe2 in the solar atmosphere, producing some of the
strongest spectral lines in the near-UV solar spectrum (e.g., the resonant multiplet at 260 nm),
the Fe2 lines in the 279.3 - 280.7 nm
spectral window are relatively weak (see Table <ref>
for their atomic data). In particular, both Fe2 lines at 279.79 and 280.66 nm are intercombination
lines. Because they form in the scattering wings of the Mg2 h and k doublet,
including the latter is necessary for an accurate modeling of these Fe2 lines. Moreover, we also
include an atomic model for the Mn1 resonant triplet, also present in this spectral window.
As it is the case with most of the atomic lines located in the near-UV solar spectrum, the modeling of
their intensity and polarization requires to account for strong departures from local thermodynamic equilibrium
<cit.>. In particular, one must jointly solve the statistical equilibrium equations (SEE) describing
the populations and coherence between atomic sublevels, and the RT equations,
describing the propagation of the electromagnetic radiation as it travels through the atmospheric plasma
<cit.>. The Mg2 h and k
spectral lines show significant partial frequency redistribution (PRD) effects, and the modeling of their
polarization requires accounting for J-state interference <cit.>.
Therefore, we account for J-state interference in our Mg2 atom model <cit.>,
but we neglect it in the cases of the Fe2 and Mn1 atom models <cit.>.
We solve the RT problem accounting for all these physical ingredients with the HanleRT code
<cit.>.
Our magnesium model atom comprises three Mg2 terms, including the resonant doublet with PRD and the
UV subordinate triplet with complete frequency redistribution (CRD), and the ground term of Mg3.
This atomic model is the same used by <cit.> and <cit.>. Our atomic model for iron is the one described in
<cit.>, to which we have added the atomic transition corresponding to the line at 279.79 nm,
between the levels a^4F_3/2 - z^6D^∘_3/2 (the upper level is in the
upper term of the resonant multiplet at 260 nm). The oscillator strength of this transition was
taken from <cit.>.
Moreover,
we determine the rate of inelastic collisions with free electrons and the photoionization cross-section
by applying the approximations of <cit.>, <cit.>, and <cit.>. We
adjust the collisional rates through an ad-hoc factor in order to fit the line core intensity flux
of the Fe2 resonant transitions between 258.6 and 260.25 nm in the quiet Sun C model of
<cit.> to the observed flux of the solar analog α Cen A <cit.>.
An accurate modeling of the Mn1 resonant doublet requires accounting for the hyperfine
structure <cit.>. The impact of the HFS is particularly relevant
regarding the circular polarization profiles. However, it is not critical
for estimating the region of formation of the lines. In this work we thus neglect the HFS, using
an atomic model with four Mn1 levels and the ground level of Mn2, with the
resonant triplet in CRD.
Accounting for PRD and J-state interference in Mg2 and, at the same time, for the
large number of levels and transitions in the Fe2 model atom, makes the simultaneous
solution of the RT problem prohibitively expensive in terms of computing resources. Moreover,
solving the non-LTE problem for both Fe2 and Mg2 affects the ionization
balance of the latter (one of the reasons being the lack of Fe1 as a source of background
opacity in the far-UV). Therefore, we have opted for the following strategy:
* Calculate jointly the population balance for Mg2 and Mn1.
* Calculate, independently, the population balance for Fe2.
* Solve jointly the non-magnetic RT problem with the three atomic models by fixing the populations
calculated in steps 1 and 2.
* For the magnetic field problem, use a reduced model of Fe2 with just five
atomic levels (the upper and lower levels of the transitions at 279.79 and 280.66 nm, and the ground level of Fe3), still fixing the populations to those calculated in steps 1 and 2.
In this work we solve the RT problem in the 1D, plane-parallel, semi-empirical C and P atmospheric models described in
<cit.>, representative of the quiet Sun and a
plage region.
Atomic data and effective Landé factor for Fe2 atomic transitions in the spectral region of CLASP2.

λ [nm]    Transition                   Level energies      A_ul [s^-1]   g_eff
279.471   a^4G_9/2 - z^4I^∘_11/2       25 805 - 61 587     1.3·10^7      0.78^a
279.787   a^4F_3/2 - z^6D^∘_5/2         3 117 - 38 858     1.9·10^4      2.60^b
280.012   b^2H_9/2 - y^4F^∘_7/2        26 352 - 62 065     1.5·10^7      0.45^b
280.485   a^2F_5/2 - x^4D^∘_5/2        27 620 - 63 272     1.6·10^6      1.01^b
280.661   a^2F_7/2 - x^4D^∘_7/2        27 314 - 62 945     3.2·10^6      1.38^b

^a LS coupling, accounting for the upper-level coupling between ^4H^∘_11/2 and ^4G^∘_11/2
^b experimental <cit.>
§ INTENSITY PROFILES
We compare, qualitatively, the synthetic intensity profiles with available observations
(see Fig. <ref>). Within the CLASP2 spectral window we can find five
Fe2 spectral lines showing significant absorption or emission signals in both
the theoretical and observed intensity profiles (indicated with gray-dotted lines in the figure).
The upper panel of Fig. <ref> shows the theoretical profile for an LOS
with μ=cosθ=0.1 (with θ the heliocentric angle of the observed point)
together with the CLASP2 observation
of a quiet Sun region
close to the solar limb after averaging over 2 arcsec along the slit. The lower panel shows a comparison of the theoretical profiles for
an LOS with μ=1.0 together with the average of all the profiles contained in a whole map of an IRIS observation of the quiet Sun
at the solar disk center from October 4 2022.
Despite the limitations of using semi-empirical and static atmospheric models,
the synthetic spectral lines show a qualitative behavior close to that of the observations.
One exception to this is the Fe2 line at 279.47 nm, which does not appear to be accurately represented by our atomic model.
In the observations this spectral line is always in absorption while in our calculations
it shows clear emission features both at disk-center and close to the limb.
There is a notable difference between synthetic and observed profiles in the Mg2 h
and k wings (see Fig.<ref>). This disagreement is due to the different temperature in
the formation region of the wings (upper photosphere) between the semi-empirical models and the actual
atmospheric plasma. In the quiet-Sun (plage) observations, this temperature is lower
(higher) than in the FAL-C (FAL-P) model. Note, however, that this difference in the quiet Sun observations
is significantly diminished when comparing with the disk center observation from IRIS (bottom panel of
Fig.<ref>). In this case we average a larger number of profiles and, consequently,
the semi-empirical “average” quiet Sun model FAL-C is much closer to the average of the quiet Sun
observation.
The Fe2 lines at 280.01 and 280.66 nm show significant absorption for an LOS at disk center
and a relatively weaker absorption for an LOS toward the limb, for both theoretical and observed
profiles. The Fe2 lines at 279.79 and 280.48 nm show a relatively weak emission at disk
center, which becomes more significant for LOS closer to the limb. We emphasize, in particular,
that the model successfully reproduces the behavior for different LOS for the Fe2
line at 279.79 nm, almost imperceptible for the disk center LOS and with a clear emission
for an LOS close to the limb.
The CLASP2 observations in the plage region also show a significant emission feature in the
Fe2 line at 279.79 nm <cit.>, despite the location of the plage being
close to disk center (between μ=0.8 and 0.6). A spectral synthesis in the FAL-P model,
representative of a solar plage, also shows a remarkable emission
(see middle panel in Fig. <ref>).
These results support the suitability of the Fe2 atomic
model presented in <cit.> for the present research.
§ CIRCULAR POLARIZATION PROFILES
In this section we study the formation of the circular polarization profiles of the
Fe2 lines in the CLASP2 spectral window. To this end, we impose
an ad-hoc exponential stratification of the magnetic field (see the right panel of Fig. <ref>) in the FAL-C model atmosphere.
None of the Fe2 spectral lines mentioned in Sec. <ref> show any linear polarization signals, either in the CLASP2 observations or in our numerical modeling.
These lines show a quenching of the linear polarization in the
Mg2 h and k wings.
This depolarization appears to be insensitive to artificial
changes of the rate of the depolarizing collisions of the Fe2 atomic model, indicating that the lines are just
depolarizing the Mg2 wings and do not show intrinsic polarization. Therefore,
from now on we focus on the circular polarization profiles.
§.§ Theoretical Circular Polarization Profiles.
In the CLASP2 observations of an active region plage, only two of the Fe2 spectral
lines mentioned in Sec. <ref> show clear circular polarization signals. These are
the Fe2 lines at 279.79 and 280.66 nm. The Fe2 lines at 279.471 and
280.012 nm have relatively small effective Landé factors (0.78 and 0.45, respectively),
and their circular polarization signals may be below the noise level as shown in Fig. <ref>. The Fe2
line at 280.485 nm is very weak for an LOS close to disk center (the plage region
observed by CLASP2 is at μ∼0.8), which can explain the absence of clear circular
polarization signals. Therefore, in the following we focus on the Fe2 lines
at 279.79 and 280.66 nm.
To study in detail the circular polarization signals in these two lines, we
solve the RT problem in the FAL-C model atmosphere for a disk-center LOS as described in
Sec. <ref>, assuming a vertical magnetic field with strength decreasing
exponentially with height, from 200 G at the bottom of the photosphere to 40 G just below the
transition region (see the right panel of Fig. <ref>).
Under proper conditions, the WFA provides a fast method to estimate B_ L.
For it to be applicable, the Doppler width of the line
Δλ_D = λ_0/c√(2kT/m + v_ m^2), (with T
and v_ m the temperature and turbulent velocity, respectively, c the speed of light,
k the Boltzmann constant, m the mass of the atom, and λ_0 the line's wavelength), must be much larger than the
magnetic Zeeman splitting
(Δλ_B = 4.6686·10^-13Bλ_0^2, with B the magnetic field strength
in gauss and λ_0 in angstroms); i.e.
g_ effΔλ_B/Δλ_D≪ 1.
When this condition is met, and B_ L is constant across the line formation region,
the circular polarization profile is proportional to the wavelength
derivative of the intensity
(e.g., <cit.>)
V = -4.6686·10^-13g_ effλ_0^2B_L∂ I/∂λ.
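When this relation holds, the inference of B_ L reduces to a linear least-squares fit of the observed Stokes V profile against the wavelength derivative of Stokes I. The following minimal Python sketch illustrates the procedure; the function and variable names are ours (not those of any particular analysis pipeline), wavelengths are assumed to be given in angstroms, and the returned B_ L is in gauss:

import numpy as np

def wfa_blong(wavelength, stokes_i, stokes_v, g_eff, lambda0):
    # Least-squares WFA estimate of B_L (gauss) within one spectral region.
    # wavelength and lambda0 in angstroms; stokes_i and stokes_v are the
    # observed profiles restricted to the chosen wavelength window.
    di_dlambda = np.gradient(stokes_i, wavelength)
    # Model: V = -4.6686e-13 * g_eff * lambda0**2 * B_L * dI/dlambda
    x = -4.6686e-13 * g_eff * lambda0**2 * di_dlambda
    return np.sum(x * stokes_v) / np.sum(x * x)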
The first and second columns of Fig. <ref> show the circular
polarization of the Mg2 k and h lines (upper row), the Mn1 lines at
279.91 and 280.19 nm (middle row), and the Fe2 lines at 279.79 and 280.66 nm
(bottom row). The colored background shows the logarithm of the normalized
contribution function for Stokes V, which indicates the contribution of each region
in the model atmosphere (see the height in the right axis in each panel) to the
emergent circular polarization profiles. Red (blue) color indicates positive (negative)
contributions to the circular polarization.
As shown in previous works, the contribution function indicates that the circular
polarization in the inner lobes of the Mg2 h and k lines forms in the upper
chromosphere, while the circular polarization in the outer lobes forms in a relatively
extensive region in the middle chromosphere of the FAL-C model. The application of
the WFA independently to each of these spectral regions allows us to
fit the circular polarization profiles of all the selected spectral lines.
We have excluded the outer lobes
of the Mg2 k line in this work because the blue outer lobe is blended with a
Mn1 line, impacting its circular polarization signal. The B_ L values inferred
with the WFA are found in the model atmosphere at those heights with the largest values
of the contribution function (compare the contribution function in the first and
second columns of Fig. <ref> with the colored dots in the rightmost panel).
The region of formation of the circular polarization of the Mn1 resonant
lines is below that of the outer lobes of the Mg2 h and k, thus between the
lower and middle chromosphere in FAL-C.
Moreover, the WFA
can be applied to the observations using the effective Landé factor calculated
assuming LS coupling without HFS <cit.>. All the results shown
here regarding the Mg2 and Mn1 lines in this spectral window agree
with previous results <cit.>.
The Fe2 line at 279.79 nm shows a circular polarization profile with
four lobes, due to what seems to be a self-absorption feature in the center of
the line (see the lower panel of Fig. <ref>). The inner lobes resulting from the
numerical calculation cannot be observed with the CLASP2 spectral sampling
(∼0.01 nm), and thus the CLASP2 observations show a two-lobe
circular polarization profile. The contribution function indicates that the circular
polarization of this Fe2 line forms mainly at heights between 0.3 and 0.5 Mm in the FAL-C model,
which corresponds to the upper photosphere. The B_ L inferred with the WFA
corresponds to the actual value in the model at a height of 0.2 Mm, just below
the formation region deduced from the contribution function.
The Fe2 line at 280.66 nm, the one at the red edge of the CLASP2 spectral
window, shows a more typical antisymmetric shape with two lobes, with
signals larger than those of the Fe2 line at 279.79 nm. From the
contribution function, the region of formation of the circular polarization appears
to be located at heights between 0.2 and 0.7 Mm in the upper photosphere of the FAL-C model.
The B_ L inferred with the WFA corresponds to the field expected at those heights.
As illustrated at the beginning of this section, the WFA can only be applied if
Eq. (<ref>) is satisfied. In Fig. <ref> we show
the stratification of the ratio
g_ effΔλ_B/Δλ_D for the Fe2 lines
at 279.79 and 280.66 nm for different uniform magnetic fields, for both the
FAL-C and FAL-P models.
In typical applications to solar spectropolarimetry, the weak field
condition of the WFA is often satisfied when the ratio in Eq. (<ref>) is smaller than about 0.5.
Consequently, for these lines that form around the temperature
minimum (∼0.5 Mm, which would be the worst case scenario), the WFA would be
suitable for magnetic fields up to about 500 G for the Fe2 line at
279.79 nm and up to about 1000 G for the Fe2 line at 280.66 nm.
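As a rough numerical illustration of this threshold (a sketch of ours with placeholder values for the temperature, turbulent velocity, and effective Landé factor, rather than the actual FAL-C or FAL-P stratifications), the ratio can be evaluated as follows:

import numpy as np

K_B = 1.380649e-23            # Boltzmann constant [J/K]
C_LIGHT = 2.99792458e8        # speed of light [m/s]
M_FE = 55.845 * 1.66054e-27   # mass of an iron atom [kg]

def weak_field_ratio(b_gauss, lambda0, g_eff, temperature, v_turb):
    # g_eff * dlambda_B / dlambda_D; lambda0 in angstroms, v_turb in m/s.
    dlambda_d = lambda0 / C_LIGHT * np.sqrt(2.0 * K_B * temperature / M_FE + v_turb**2)
    dlambda_b = 4.6686e-13 * b_gauss * lambda0**2
    return g_eff * dlambda_b / dlambda_d

# e.g. the 280.66 nm line (2806.6 A) near the temperature minimum, with
# illustrative T = 4500 K, v_turb = 3 km/s and g_eff = 1.5:
print(weak_field_ratio(1000.0, 2806.6, 1.5, 4500.0, 3.0e3))  # roughly 0.2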
In summary, the Fe2 lines at 279.79 and 280.66 nm
form in the upper photosphere, considerably deeper than the Mg2 and Mn1 in the same
spectral range, and the WFA can be applied for a quick inference of B_ L
for longitudinal components of the magnetic field up
to 500 G (for the 279.79 nm line) and up to 1000 G (for the 280.66 nm line).
§.§ Application to the CLASP2 data
The intensity and circular polarization profiles of the Mg2 h and k lines
and of the Mn1 resonant lines observed by CLASP2 in a solar plage region
have been analyzed in <cit.>, mapping the longitudinal component
of the magnetic field from the photosphere
to the upper chromosphere. The B_ L in the lower chromosphere was inferred by
applying the WFA to the Mn1 resonant lines. The B_ L in the middle
chromosphere was inferred by applying the WFA to the outer lobes of the
Mg2 h line. The B_ L in the upper chromosphere was inferred by applying the WFA to the
inner lobes of the Mg2 h and k lines. The determination of the photospheric
B_ L, however, resulted from a Milne-Eddington inversion of the Fe1 lines
at 630.15 and 630.25 nm, from data acquired in a simultaneous observation by the Hinode
SOT-SP <cit.>. These lines form in the lower layers of the solar
photosphere <cit.>. In this section we extend the work
of <cit.> by adding the inference of B_ L by applying the WFA
to the Fe2 lines at 279.79 and 280.66 nm, as we did with the synthetic
profiles in Sec. <ref>.
Figure <ref> shows the fractional circular polarization in one pixel of the
CLASP2 observation of a plage region. We apply the WFA to the same seven[Note that
we distinguish two spectral regions for the Mg2 h
line, and thus there are seven spectral regions in total.] spectral
regions indicated in Sec. <ref>.
We get B_ L between 179 and
193 G in the upper chromosphere (from the inner lobes of the Mg2 h and k
lines), 227 G in the middle chromosphere (from the outer lobes of the Mg2
h line), and between 370 and 378 G in the lower chromosphere (from the Mn1
lines). To this B_ L stratification we can add 646 G in the upper photosphere
(derived from the Fe2 line at 280.66 nm).
As expected, we obtain a B_ L which increases with
the depth of the region of formation of each spectral range.
In contrast, we find that the inferred B_ L is 219 G
when the WFA is applied to the
Fe2 line at 279.79 nm. This is in apparent contradiction with the findings in
Sec. <ref>, as this value is very close to the one obtained
from the outer lobes of the Mg2 h line, which forms in the middle
chromosphere whereas, according to the numerical modeling of Sect. <ref>,
the Fe2 279.79 nm line should be the one that forms in the lowest part of the atmosphere,
among the lines included in this work. This discrepancy can be understood by analyzing the formation of
this spectral line in two semi-empirical models representative of the quiet Sun
(FAL-C) and a plage region (FAL-P).
In Fig. <ref>, we show the absolute value of the logarithm of the contribution function
for the two Fe2 (279.79 and 280.66 nm) and the Mn1 (279.91 nm) lines calculated in the FAL-C
and FAL-P model atmospheres. The blue regions correspond to the largest contribution to the circular polarization.
In the quiet Sun model (top row of Fig. <ref>) the largest contribution for the two
Fe2 spectral lines is located at very similar heights, deeper in the atmosphere compared to
the Mn1 spectral line. However, in the plage region model (bottom row of Fig. <ref>)
the largest contribution for the 279.79 nm line is higher in the atmosphere with respect to the other
spectral lines. In this model this Fe2 line forms at a height which is very similar to
that of the Mn1 line. Instead, the formation region of the Fe2 280.66 nm line is always found
in deeper layers than that of the Mn1 line.
In summary, while in quiet Sun conditions both Fe2 lines form in relatively similar regions,
in the plage region the formation of the Fe2 line at 279.79 nm is shifted upwards relative to
the other lines. This explains why, when applying the WFA approximation to the CLASP2
data in the plage region, we infer B_ L values from the Fe2 line at 279.79 nm
that are smaller than those inferred from both the Fe2 line at 280.66 nm and the
Mn1 lines.
With this more complete picture of the region of formation of the circular polarization
of the lines in the CLASP2 spectral window we apply the WFA to the seven spectral
regions specified in Fig. <ref> in all pixels of the CLASP2
plage region observation. Fig. <ref> shows the inferred B_ L
along the spectrograph slit. We include in the figure the B_ L WFA
inference from the Mg2 h and k lines (top and middle chromosphere, red and black,
respectively) and the Mn1 lines (lower chromosphere, blue
dots), which were presented in <cit.>, as well as the result of
their inversion of the Hinode data (light green curve). The B_ L values we infer
from the Mg2 and Mn1 spectral lines are very similar to those shown
in <cit.>.
The important new result in Fig. <ref> is the B_ L resulting from the
application of the WFA to the Fe2 lines at 279.79 and 280.66 nm (dark green and
orange dots). The B_ L inferred from the Fe2 line at 280.66 nm is larger
than for the other lines in the CLASP2 spectral window, and relatively close to the
Milne-Eddington inversion of the Hinode data. This is in agreement with the expected
photospheric region of formation of this spectral line. The B_ L inferred from the
Fe2 line at 279.79 nm is instead similar to the values inferred for the middle
chromosphere inside the plage region (slit positions
from -99 to ∼ -70 arcsec and from ∼ -50 to ∼25 arcsec). In contrast,
in the nearby quiet Sun the inferred B_ L is similar to the inference from the
Fe2 line at 280.66 nm, which is fully consistent with the change in the region of
formation shown in Fig. <ref>.
§ CONCLUSIONS
We investigated theoretically the formation of the intensity and circular polarization
profiles of the Fe2 lines located in the CLASP2 spectral window, comparing
their region of formation with those of the Mg2 and Mn1 lines in the
same spectral window, which were considered in previous studies.
To this end, we solved the RT problem in
a semi-empirical model atmosphere representative of the quiet Sun, to which we added an ad-hoc
stratification of the magnetic field. In particular, we studied the formation of those
Fe2 spectral lines and their suitability to infer B_ L by applying the WFA.
We demonstrated the validity of this approximation and applied it to the CLASP2 data, extending
the mapping of the longitudinal component of the magnetic field in <cit.> to additional layers of the solar atmosphere, between
the upper photosphere and the lower chromosphere.
We compared the intensity emerging from the FAL-C model atmosphere,
with observations by IRIS and CLASP2,
for two LOS, one at disk center and one close to the limb.
With the exception of the 279.47 nm line, all the Fe2 lines in this spectral region show
good qualitative agreement between the
numerical calculations and the observations. This indicates that the Fe2
model atom presented in <cit.> is
suitable to model the Fe2 lines in this spectral window. In particular, the
model reproduces the peculiar behavior of the Fe2 line at 279.79 nm, almost
absent at disk center, but with a clear emission at the solar limb or in active
regions such as plage regions. We note that there is a Fe2 line at 396.94 nm,
in the red wing of the Ca2 H line, which shows the same behavior, with
an absorption profile at disk center, but an emission profile closer to the limb
<cit.>. In a future investigation,
we will study the formation of this spectral line,
which can be observed with ground-based facilities such as the Daniel K. Inouye Solar Telescope <cit.>.
We have found that only two of these five Fe2 lines (279.471, 279.787, 280.012, 280.485, and 280.661 nm) show circular polarization
signals above the noise level in the CLASP2 observations.
By solving
the RT problem in the FAL-C model (Sec. <ref>),
we showed that both lines form in the photosphere
when the thermodynamical conditions of the observed target
are similar to those of the quiet Sun.
We showed that the WFA can be applied to these lines for B_ L
up to 500 G for the 279.79 nm line and up to 1000 G for the 280.66 nm line.
We also found that the exact
region of formation of the Fe2 line at 279.79 nm
depends on the particular physical conditions of the atmospheric plasma. While in the
typical conditions of the quiet Sun this line forms in the photosphere, like the Fe2
line at 280.66 nm, for the typical conditions of a solar plage the region of formation
is shifted upward, closer to the Mg2 h and k line wings, even above the
Mn1 lines which form in the lower chromosphere. This difference in height of
formation explains the apparent discrepancy in the inferred B_ L
which, in the plage region, is smaller for the Fe2 line at 279.79 nm than for
both the Fe2 line at 280.66 nm and the Mn1 lines.
The Fe2 line at 280.66 nm appears instead to be consistently
photospheric. It has a formation region around the temperature minimum, and the inferred
B_ L is close to that shown in <cit.> obtained with a
Milne-Eddington inversion of simultaneous observations with Hinode/SOT-SP of the
Fe1 lines at 630.15 and 630.25 nm.
It is important to emphasize that there are additional spectral lines in the
CLASP2 spectral window, from species other than Fe2, which show significant
circular polarization signals. Our ongoing studies of these signals should help
complete the mapping of B_ L from the upper photosphere to the top of the chromosphere.
The results of this work demonstrate the suitability of the Fe2 lines at
279.79 and 280.66 nm to supplement the information
provided by the Mg2 and Mn1 lines of the CLASP2
spectral window to map B_ L across the solar atmosphere
<cit.>.
Therefore, we conclude that it is possible to retrieve information about B_ L
from the photosphere to the upper chromosphere just below the transition region from
spectropolarimetric observations in the spectral window of the Mg2 h and k lines.
This finding strongly emphasizes the importance and timeliness of deploying a new space mission equipped with CLASP2-like capabilities, as this would likely lead to a breakthrough in our ability to probe
the magnetism of the solar atmosphere from the upper photosphere to the base of the corona.
While our conclusions strongly emphasize the unique diagnostic potential of this narrow near-UV spectral region, future investigations targeting the mapping of the magnetic field throughout the solar atmosphere will obviously benefit from complementary observations from multiple instruments, both space-borne and ground-based, of chromospheric diagnostics in the UV, visible, and infrared solar spectrum.
We thank Ryohko Ishikawa (NAOJ) and Hao Li (IAC) for useful discussions on the
CLASP2 Stokes signals and their interpretation.
We are grateful to the referee for suggestions that have improved
the presentation of this paper.
We acknowledge the funding received from the European Research Council (ERC)
under the European Union's Horizon 2020 research and innovation programme (ERC
Advanced Grant agreement No 742265). T.P.A.'s participation in the publication is
part of the Project RYC2021-034006-I, funded by MICIN/AEI/10.13039/501100011033,
and the European Union “NextGenerationEU”/RTRP.
CLASP2 is an international partnership between NASA/MSFC, NAOJ, JAXA, IAC, and IAS;
additional partners include ASCR, IRSOL, LMSAL, and the University of Oslo.
The Japanese participation was funded by JAXA as a Small Mission-of-Opportunity Program,
JSPS KAKENHI Grant numbers JP25220703 and JP16H03963, 2015 ISAS Grant for Promoting
International Mission Collaboration, and by 2016 NAOJ Grant for Development Collaboration.
The USA participation was funded by NASA Award 16-HTIDS16_2-0027.
The Spanish participation was funded by the European Research Council (ERC) under the
European Union's Horizon 2020 research and innovation programme
(Advanced Grant agreement No. 742265). The French hardware participation
was funded by CNES funds CLASP2-13616A and 13617A.
|
http://arxiv.org/abs/2307.02535v1
|
20230705180002
|
Improved statistics for F-theory standard models
|
[
"Martin Bies",
"Mirjam Cvetič",
"Ron Donagi",
"Marielle Ong"
] |
hep-th
|
[
"hep-th",
"math.AG"
] |
Improved statistics for F-theory standard models
Martin Bies^1, Mirjam Cvetič^2,3,4, Ron Donagi^3,2, Marielle Ong^3
^1Department of Mathematics, RPTU Kaiserlautern-Landau,
Kaiserslautern, Germany
^2Department of Physics and Astronomy, University of Pennsylvania,
Philadelphia, PA 19104-6396, USA
^3Department of Mathematics, University of Pennsylvania,
Philadelphia, PA 19104-6396, USA
^4Center for Applied Mathematics and Theoretical Physics, University of Maribor,
Maribor, Slovenia
Much of the analysis of F-theory-based Standard Models boils down to computing cohomologies of line bundles on matter curves. By varying parameters one can degenerate such matter curves to singular ones, typically with many nodes, where the computation is combinatorial and straightforward. The question remains to relate the (a priori possibly smaller) value on the original curve to the singular one. In this work, we introduce some elementary techniques (pruning trees and removing interior edges) for simplifying the resulting nodal curves to a small collection of terminal ones that can be handled directly. When applied to the QSMs, these techniques yield optimal results in the sense that obtaining more precise answers would require currently unavailable information about the QSM geometries. This provides us with an opportunity to enhance the statistical bounds established in earlier research regarding the absence of vector-like exotics on the quark-doublet curve.
§ INTRODUCTION
String Theory, a framework that seeks to unify quantum mechanics and general relativity, encompasses various regimes that offer valuable insights into the fundamental nature of the universe. Among these regimes, F-theory stands out as a non-perturbative approach that has garnered significant attention <cit.>. In this paper, we aim to enhance our comprehension of a prominent category of F-theory Standard Models known as F-theory QSMs <cit.>, which represent the largest known class characterized by gauge coupling unification and the absence of chiral exotics. Our primary focus lies in the study of their vector-like spectra. Through this investigation, we make significant strides in refining our previous findings <cit.>, which centered around the arithmetic aspects of Brill-Noether theory of root bundles on nodal curves.
F-theory is based on the powerful interplay between algebraic geometry and theoretical physics <cit.>. Here, singular elliptic fibrations play a pivotal role and significant progress has been made in studying these geometries over the years. An overview of the developing connections between F-theory and the Standard Model can be found in <cit.>. For a broader and relatively recent overview, see <cit.> and the references therein.
This research paper focuses on 4-dimensional compactifications of F-theory with 𝒩 = 1 supersymmetry for the purpose of phenomenological investigations. Our goal is to construct models of particle physics that closely align with the Standard Model. We recall that 4-dimensional F-theory compactifications with 𝒩 = 1 supersymmetry exhibit chiral (super)fields in a specific representation 𝐑 of the gauge group, as well as their charge conjugate counterparts in the conjugate representation. The difference between the numbers of chiral fields in the representation 𝐑 and in its conjugate is referred to as the chiral spectrum. Meanwhile, the separate numbers of fields in these two representations provide the vector-like spectrum. Since the Higgs is a vector-like pair, the study of vector-like pairs is inevitable in the quest for F-theory Minimal Supersymmetric Standard Models (MSSMs).
Significant efforts in perturbative String Theory have been dedicated to finding Standard model compactifications. This includes extensive work on the E_8 × E_8 heterotic string <cit.> and intersecting branes models in type II String Theory <cit.> (also refer to <cit.> and references therein). While these perturbative String Theory compactifications realized the gauge sector of the Standard Model with its chiral spectrum, many constructions have shortcomings, such as the presence of exotic chiral or vector-like matter. The models that surpassed these obstacles and provided the first globally consistent construction of the MSSM in String Theory can be found in <cit.>. For further insight into the intricate global conditions relating to slope-stability of vector bundles, additional details are offered in <cit.>.
The chiral fermionic spectrum of 4d 𝒩 = 1 F-theory compactifications is well understood. The physics is encoded in the geometry of an elliptically fibered Calabi-Yau space π Y_4↠ B_3 and the chiral spectrum is fixed by the background gauge flux G_4 = dC_3 ∈ H^(2,2) (Y_4) <cit.>, where C_3 refers to the internal C_3 profile in the dual M-theory geometry. Consequently, many tools were developed to construct and count the primary vertical subspace of G_4 configurations and to work out the physical quantities determined by such fluxes <cit.>. Computationally, the most challenging condition is the G_4-flux quantization condition G_4 + 1/2 c_2(Y_4) ∈ H^4_ℤ(Y_4). One often examines necessary conditions for a specific G_4-flux to satisfy this quantization, and then proceeds with the resulting flux candidates under the assumption that the quantization condition is indeed satisfied. In applications of such toolkits, many globally consistent chiral F-theory models were presented <cit.>. This exploration culminated in the largest known group of explicit string vacua that realize the Standard Model gauge group with the exact chiral spectrum and gauge coupling unification. This class of F-theory models is known as the Quadrillion F-theory Standard Models (QSMs) <cit.>. The G_4-flux candidate in the F-theory QSMs satisfies “only” necessary conditions for flux quantization. We proceed under the assumption that the quantization condition is satisfied. This G_4-flux candidate cancels the D3-tadpole and ensures that the U(1) gauge boson remains massless.
The vector-like spectrum in F-theory depends not only on G_4 but also on C_3. We refer to the F-theory “object” dual to C_3 as an F-theory gauge potential. Such gauge potentials can be thought of as elements of the Deligne cohomology of Y_4 (see <cit.> and references therein). Previous work, described in <cit.>, utilizes the fact that certain subsets of the Deligne cohomology can be expressed in the Chow ring. This allowed us to compute line bundles L_𝐑 on the matter curves C_𝐑 in B_3. These line bundles can be interpreted as the localization of gauge flux on matter curves C_𝐑 in the dual IIB picture, resulting in massive vector-like pairs on these curves. The massless zero modes on the matter curves are given by the sheaf cohomology groups of the line bundle L_𝐑, where h^0(C_𝐑, L_𝐑) represents the number of massless chiral superfields in the representation 𝐑 on C_𝐑, and h^1 (C_𝐑, L_𝐑) represents the number of massless chiral superfields in the conjugate representation on C_𝐑.
By construction, the line bundles L_𝐑 are twists of one of the many spin bundles on C_𝐑. We recall that there are 2^2g inequivalent spin bundles on a smooth, irreducible genus g curve <cit.>. Thus, one must investigate which spin bundles are compatible with the F-theory geometry, as well as their origin in the Deligne cohomology. To the best of the authors' knowledge, such a study has not been conducted and the answer is unknown. In a similar spirit, one usually approaches vector-like spectra by lifting an existing G_4-flux to a gauge potential in the Deligne cohomology. Such lifts are not unique due to the intermediate Jacobian of Y_4. It can also be hard to find a single lift in many situations. For instance, this happens in the QSMs <cit.>, which is the origin of the root bundle program initiated in this work. Should we surpass these challenges, we still face a rather complicated interplay between the complex structure moduli of Y_4 and the line bundle cohomologies h^i( C_𝐑, L_𝐑 ). To illustrate this challenge, let us mention that even with advanced algorithms <cit.> and powerful supercomputers, the necessary computations could not be performed in realistic compactification geometries. So, the early works <cit.> focused on models with computationally simple geometries even though the compactifications exhibited unrealistically large numbers of chiral fermions.
Machine learning techniques have and are being employed in the string theory context. Such works include but are not limited to <cit.>. For a broader overview, the interested reader may wish to consult <cit.> and the references therein. In this spirit, in <cit.>, machine learning techniques were employed to systematically investigate the complex structure dependence of line bundle cohomologies. A large dataset <cit.> of line bundle cohomologies for various complex structure moduli was generated using algorithms from <cit.>. By combining data science techniques and well-known results about Brill-Noether theory <cit.> [For a contemporary presentation of Brill-Noether theory, the interested reader is referred to <cit.>. An earlier use of Brill-Noether theory in F-theory can be found in <cit.>.], a quantitative exploration of the jumps in charged matter vector pairs in relation to the complex structure moduli of the matter curve C_𝐑 was conducted.
In spite of the challenges we face, we find inspiration in the attractive aspects of F-theory QSMs <cit.> and the goal of providing explicit realizations of F-theory MSSMs. Therefore, our focus is on exploring the vector-like spectra within F-theory QSMs, specifically those localized on their five matter curves. This investigation was initiated in a previous study <cit.> and subsequently expanded upon in later works <cit.>. This program was recently summarized in <cit.>. For yet more background information, the interested reader may consult <cit.>.
The first obstacle that we encounter is the lack of knowledge regarding the lift to the Deligne cohomology of the proposed G_4-flux candidate in the F-theory QSMs. The second challenge involves allowing only for twists of spin bundles that align with this F-theory compactification. As we do not have answers to either of these questions, we have established a necessary condition that all induced line bundles L_𝐑 must satisfy. The initial study <cit.> revealed that on three of the five matter curves in the QSM geometry, the line bundle L_𝐑 must necessarily be a fractional power P𝐑 of the canonical bundle on that particular matter curve C_𝐑. On the remaining two curves, the line bundles are, in addition, modified by contributions from Yukawa points. Such fractional power line bundles are known as root bundles, which are generalizations of spin bundles. Like spin bundles, root bundles are generally not unique. The mathematics suggests that different root bundles may arise from non-equivalent F-theory gauge potentials in Deligne cohomology; all of which produce the same G_4-flux. However, not all root bundles that satisfy our constraints on C_𝐑 are necessarily induced by F-theory gauge potentials. In other words, just like the spin bundles, only specific root bundles on the matter curves are expected to be consistent with the F-theory geometry Y_4. Therefore, an F-theory gauge potential gives rise to a set { P_𝐑}, which includes one root bundle for each matter curve C_𝐑. By repeating this process for all physically relevant F-theory gauge potentials, it is expected that only a subset of all root bundles on C_𝐑 will generally be obtained. Conversely, if we are given a set of root bundles { P_𝐑} (one on each matter curve), then this set may not originate from an F-theory gauge potential. Namely, it is possible for one of the root bundles to not be induced at all or not be induced in combination with one of the other roots.
Determining which root bundles are physical, i.e. induced from an F-theory gauge potential, is crucial for gaining a comprehensive understanding of the physics involved. However, this task presents a significant challenge. Instead of directly addressing this question, the study conducted in <cit.> took a systematic approach by examining all root and all spin bundles on each matter curve. By adopting this method, it was possible to assign a specific root bundle to each matter curve (except for the Higgs curve) in a particular QSM geometry <cit.> such that their cohomologies exhibit the desired MSSM vector-like spectra. These bundles are obtained by deforming the smooth and irreducible physical matter curve C_𝐑 into a nodal curve C^∙_𝐑 and describing the root bundles on the latter using a diagrammatic representation called limit roots <cit.>. Counting the global sections of limit root bundles on the complete blow-up of C^∙_𝐑 can be accomplished using the techniques developed in <cit.>.
In the expanded study conducted in <cit.>, a computer algorithm was utilized to count all root bundles with three global sections on the complete blow-up of C^∙_𝐑 in 33 different families of QSM geometries. To accomplish this, families of toric spaces B_3( Δ^∘ ) associated with polytopes Δ^∘ from the Kreuzer-Skarke list <cit.> were examined (see also <cit.> for a different study of these geometries in a string theory context). These toric spaces represent alternative desingularizations of toric K3-surfaces <cit.>. The study revealed that the structure of a canonical nodal curve C^∙_𝐑 and, consequently, the count of limit root bundles, solely depends on the polytope Δ^∘. These techniques were further refined in <cit.> to handle limit roots on “simple” partial blow-ups. The outcomes of the computer scan strongly indicate that the absence of vector-like exotics in the (3, 2)_1/6 representation is a highly probable scenario within the F-theory QSMs. It is essential to emphasize that this finding carries a statistical meaning, as the precise origin of the limit roots from F-theory gauge potentials has yet to be fully understood.
§.§ Results
This study builds upon previous works <cit.> and takes their techniques further. Previous research revealed that counting global sections of root bundles leads to exploring line bundle cohomology on nodal curves. Specifically, <cit.> focused on line bundle cohomology for disconnected nodal curves while <cit.> expanded the analysis to include tree-like nodal curves. In this study, we extend the investigation to line bundles on nodal curves with a non-zero first Betti number, i.e. circuits. To accomplish this, a two-step procedure is employed:
* A line bundle on a difficult nodal curve is replaced by a line bundle on a simpler nodal curve, which we refer to as a terminal circuit.
* It is demonstrated that there are only a finite number of terminal circuits for a fixed Betti number. Consequently, the terminal circuits are classified and line bundle cohomologies on these nodal curves are examined through a case-by-case analysis.
Based on the available data, we can confirm that our counts of global sections for root bundles are optimal. To achieve improved results, revisiting the QSM geometries and extracting more refined geometric information would be necessary. This includes acquiring information about the descent data of the root bundles and the precise line bundle divisor on elliptic components of the nodal curves. However, without this additional information, our computer scan <cit.> provides the best possible answer.
Having achieved the optimal outcome, we assess the probability of no vector-like exotics precisely on the quark-doublet curve. Across the union of all 33 distinct QSM families explored, this scenario is estimated to occur in at least 93.91% of all these configurations. This result strengthens the earlier findings that statistically support the likelihood of this desired physical scenario. However, it is crucial to note that these conclusions rely on the top-down origin of the root bundles, which we anticipate future research will clarify.
Estimating the likelihood of one exotic vector-like pair on the Higgs curve would be desirable, but is infeasible with our current computational techniques. However, we can use the computed Brill-Noether numbers on the quark-doublet curve to gain at least some indications. Namely, our results show that the undesirable phenomenon of exactly one vector-like exotic on the quark-doublet curve occurs with a probability of at most 5.17%. We view this result as a positive indication towards exactly one vector-like pair on the Higgs curve.
§.§ Outline
In <ref>, we provide a concise overview of the previous findings. For a more comprehensive understanding, we direct interested readers to the original works <cit.>. This section also serves to justify the need for our more refined techniques, which expand upon the previous results. These advanced techniques are introduced in <ref>. Of particular significance is the classification of the terminal graphs discussed in <ref>. The investigation of jumps on these terminal graphs represents a technically challenging aspect of this research work. Further details and derivations can be found in <ref> and <ref>. These techniques are then applied to the F-theory QSMs, and the implications are elaborated upon in <ref>. Finally, we conclude this work with a summary and outlook in <ref>.
§ LIMIT ROOT BUNDLES MEET F-THEORY QSMS
We begin with a very brief review of our root bundle program. The interested reader can find much more information in the original works <cit.> and in the recent summary of this program in <cit.>.
Vector-like spectra in F-theory are counted by the line bundle cohomologies of certain line bundles on the matter curves <cit.>. For the Quadrillion F-theory standard models <cit.>, we could not determine the exact line bundle whose cohomologies count the vector-like spectra. However, it was possible to derive necessary and highly non-trivial constraints for these line bundles. Our initial study <cit.> showed that these line bundles P_𝐑 must be root bundles. To elaborate more on this finding, we first recall that the F-theory QSMs <cit.> are defined in terms of toric 3-folds with K^3_B_3∈{ 6, 10, 18, 30 }. The five matter curves of these models are defined by sections s_i ∈ H^0 ( B_3, K_B_3). In <cit.>, we argued that the line bundles P_𝐑, whose cohomologies count the vector-like spectra, must necessarily satisfy the following conditions:
curve root bundle constraint
C_(3,2)_1/6 = V( s_3, s_9 ) P_(3,2)_1/6^⊗ 2 K^3_B_3 = K_(3,2)_1/6^⊗( 6 + K^3_B_3)
C_(1,2)_-1/2= V ( s_3, P_H ) P_(1,2)_-1/2^⊗ 2 K^3_B_3 = K_(1,2)_-1/2^⊗( 4 + K^3_B_3)⊗𝒪_(1,2)_-1/2( - 30 Y_1 )
C_(3,1)_-2/3 = V( s_5, s_9 ) P_(3,1)_-2/3^⊗ 2 K^3_B_3 = K_(3,1)_-2/3^⊗( 6 + K^3_B_3)
C_(3,1)_1/3 = V ( s_9, P_R ) P_(3,1)_1/3^⊗ 2 K^3_B_3 = K_(3,1)_1/3^⊗( 4 + K^3_B_3)⊗𝒪_(3,1)_1/3( - 30 Y_3 )
C_(1,1)_1 = V( s_1, s_5 ) P_(1,1)_1^⊗ 2 K^3_B_3 = K_(1,1)_1^⊗( 6 + K^3_B_3)
In this table, we make use of two polynomials: P_H = s_2 s_5^2 + s_1 ( s_1 s_9 - s_5 s_6 ) and P_R = s_3 s_5^2 + s_6 ( s_1 s_6 - s_2 s_5 ). It is important to note that the line bundles on both the Higgs curve C_(1,2)-1/2 and the curve C_(3,1)_1/3 are dependent on the Yukawa points Y_1 = V( s_3, s_5, s_9 ) and Y_3 = V( s_3, s_6, s_9 ), respectively. Furthermore, it is important to keep in mind that if two divisors D and E are linearly equivalent, i.e., D ∼ E, then n · D ∼ n · E for any integer n. However, the converse is not true, and that is why we do not cancel common factors.
For instance, P_(3,2)_1/6 must be a 2 K^3_B_3-th root of the ( 6 + K^3_B_3)-th power of K_(3,2)_1/6. Hence, the name root bundles. It can be argued that solutions to the root bundle constraints in <ref> do exist, but are not unique. This is to be contrasted with the physics expectation that only a subset of these roots is realized from F-theory gauge potentials in the Deligne cohomology H^4_D( Y_4, ℤ(2) ). Such top-down considerations were and still are beyond our reach. In addition, there are computational limitations that stop us from enumerating the huge number of root bundles on the Higgs curve and then working out their number of global sections. To circumvent both of these shortcomings, starting from our analysis in <cit.>, we have therefore employed the following philosophy:
* Focus on F-theory QSM geometries, such that there are at least as many different F-theory gauge potentials as there are solutions to the root bundle constraint on the quark-doublet curve C_(3,2)_1/6. Then, at least statistically speaking, we could hope that every root bundle on the quark-doublet curve is realized from an F-theory gauge potential. It turned out that this condition is equivalent to h^2,1(Y_4) ≥ g where g is the genus of C_(3,2)_1/6 <cit.>.
* Subsequently, we analyze the line bundle cohomologies of all root bundles on the quark-doublet curve systematically in order to draw conclusions about the presence/absence of vector-like exotics.
We thus make a selection among the F-theory QSMs. To this end, we recall that these geometries arise from desingularizations of toric K3-surfaces. Such desingularizations were first studied in <cit.> and correspond to 3-dimensional, reflexive lattice polytopes Δ⊂ M_𝐑 and their polar duals Δ^∘⊂ N_𝐑 defined by ⟨Δ, Δ^∘⟩≥ -1. The complete list of such 3-dimensional polytopes is available in <cit.>. Just as in <cit.>, we denote the i-th polytope in this list as Δ_i^∘⊂ N_𝐑. We denote the family of F-theory QSM geometries associated to the i-th 3-dimensional, reflexive lattice polytope Δ_i^∘ in <cit.> by B_3( Δ_i^∘ ). The different geometries in B_3( Δ_i^∘ ) are in one-to-one correspondence with the fine regular star triangulations of Δ_i^∘. Some quantities are the same for a family B_3( Δ_i^∘ ), for instance K_B_3^3, h^2,1(Y_4) and the genus of C_(3,2)_1/6. It turns out that there are exactly 37 F-theory QSM families B_3( Δ_i^∘ ) which satisfy h^2,1(Y_4) ≥ g.
It thus remains to construct solutions for the root bundle constraint in <ref> on the matter curve C^∙_(3,2)_1/6 of these 37 QSM families and then to work out the line bundle cohomologies of those root bundles. Sadly, it turns out that this is a rather tough task for smooth, irreducible curves. Historic references indicating this obstacle include <cit.>. Instead of working out the exact solutions, we have therefore opted for an approximation technique, which is based on the following two key insights:
* Root bundles on nodal curves are well understood and follow a diagrammatic pattern <cit.>. We refer the interested reader to <cit.> and <cit.> for more details.
* There is a canonical deformation C_(3,2)_1/6→ C^∙_(3,2)_1/6, such that the nodal curve C^∙_(3,2)_1/6 is the same for all spaces in B_3( Δ_i^∘ ), i.e. depends only on the polytope Δ_i^∘. This we explained in large detail in <cit.>.
It is worth noting that this is a rather impressive and stunning finding! Despite there being an enormous number of F-theory QSM geometries (around 10^15), it is possible to estimate the vector-like spectra of the majority of these setups from analyzing root bundles on only 37 different nodal curves!
Despite these pleasing features of nodal curves, we eventually want to go back and determine the cohomologies of root bundles on the initial smooth, irreducible matter curve C_(3,2)_1/6. Hence, we should study how cohomologies of root bundles P^∙_𝐑 on nodal curves relate to the cohomologies of root bundles P_𝐑 on the smooth irreducible curve. In principle, we can trace the roots P^∙_𝐑 along the deformation C^∙_𝐑→ C_𝐑 to find roots P_𝐑 on the original curve C_𝐑. While such a deformation can alter the number of global sections, it is known that the number of global sections is an upper semi-continuous function:
h^0( C_𝐑, P_𝐑 ) ≤ h^0( C^∙_𝐑, P^∙_𝐑 ) = χ( P_𝐑 ) + δ , δ∈ℤ_≥ 0 .
In order to find roots for which the number of sections remains constant upon deformation to the original curve, we can look for instances where equality holds. One such instance is the generic case δ=0, in which h^0( C^∙_𝐑, P^∙_𝐑 ) = χ( P_𝐑 ). This is because the number of sections is then already minimal on C^∙_𝐑 and therefore must remain constant throughout the deformation. As a result, the number of limit roots on the partial blow-ups C^∘_𝐑 of the nodal curve C^∙_𝐑 that have the generic number of global sections provides a lower bound on the number of roots P_𝐑 without vector-like exotics. Indeed, this is exactly the situation that describes/approximates root bundles on C_(3,2)_1/6 without vector-like exotics.
In reassessing the 37 F-theory QSM families B_3( Δ_i^∘ ) with h^2,1(Y_4) ≥ g, we realized that for four of them, the canonical nodal quark-doublet curve C^∙_(3,2)_1/6 has a smooth, irreducible component whose genus is greater than one. This makes the global section counting for limit roots a lot harder, and so we decided to ignore those four QSM families. Consequently, we focused on the following 33 families of QSM geometries. Note that the numbers of fine regular star triangulations were worked out/estimated in <cit.>.
Polytope K_B_3^3 N_roots N_FRST N_FRST
Δ_8^∘ 6 12^8 3.867 × 10^13 2.828 × 10^16
Δ_4^∘ 6 12^8 3.188 × 10^11
Δ_134^∘ 6 12^8 7.538 × 10^10
Δ_128^∘, Δ_130^∘, Δ_136^∘, Δ_236^∘ 6 12^8 3.217 × 10^11
Δ_88^∘ 10 20^12 5.231 × 10^10 1.246 × 10^14
Δ_110^∘ 10 20^12 5.239 × 10^8
Δ_272^∘, Δ_274^∘ 10 20^12 3.212 × 10^12 2.481 × 10^15
Δ_387^∘ 10 20^12 6.322 × 10^10 6.790 × 10^12
Δ_798^∘, Δ_808^∘, Δ_810^∘, Δ_812^∘ 10 20^12 1.672 × 10^10 2.515 × 10^13
Δ_254^∘ 10 20^12 1.568 × 10^10
Δ_52^∘ 10 20^12 1.248 × 10^8
Δ_302^∘ 10 20^12 5.750 × 10^7
Δ_786^∘ 10 20^12 9.810 × 10^8
Δ_762^∘ 10 20^12 1.087 × 10^11 2.854 × 10^13
Δ_417^∘ 10 20^12 1.603 × 10^9
Δ_838^∘ 10 20^12 4.461 × 10^9
Δ_782^∘ 10 20^12 3.684 × 10^9
Δ_377^∘, Δ_499^∘, Δ_503^∘ 10 20^12 4.461 × 10^9
Δ_1348^∘ 10 20^12 4.285 × 10^9
Δ_882^∘, Δ_856^∘ 10 20^12 3.180 × 10^9
Δ_1340^∘ 10 20^12 4.496 × 10^9
Δ_1879^∘ 10 20^12 4.461 × 10^9
Δ_1384^∘ 10 20^12 7.040 × 10^9
We conclude this section with an illustrative example of constructing limit roots and determining their number of global sections. This example is chosen to illustrate our key advances. The interested reader may consider this example as a motivation for the more involved study of these techniques in <ref>. For this example, let us look at a nodal curve with three irreducible, smooth, rational components, whose dual graph Γ is as follows:
[Dual graph: three rational components C_1, C_2, C_3 carrying degrees 0, 0, 2; C_1 and C_2 are joined by two edges, C_2 and C_3 by a single edge.]
This means that the vertices represent the irreducible components and the edges correspond to the nodal singularities. On this curve, we consider a line bundle L with deg( L|_C_i ) = ( 0,0,2 ), i.e. the degree of L restricted to the i-th component is exactly the number that we display in the above graph inside of the i-th vertex. It is worth recalling that this information does not fix L uniquely. In order to uniquely specify L, we would have to specify the so-called descent data, which, in this case, boils down to fixing one parameter λ∈ℂ^∗ (see <cit.> for details). This parameter specifies that the sections on C_1 and C_2 agree up to λ at one of the two nodes at which they intersect, while the sections agree at all remaining nodes.
Let us now construct limit 2nd roots of this line bundle L. Note that β_1( Γ ) = 1, so on a smooth, irreducible curve, there are 2^2 = 4 roots. Here is how to construct the corresponding limit root. First, make a binary decision for each node. Namely, we decide whether to blow this node up or not. If we do, then this amounts to putting a bi-weight 1-1 on the corresponding edge in the above graph and subtracting 1 from each of the adjacent vertices. Then, one removes the corresponding edge and tries to divide the resulting degrees by 2. If doing so leads to integer degrees on all vertices, we have encoded a limit root. In this case, this procedure leads to the following two admissible configurations:
[Limit root 1: C_1, C_2, C_3 with degrees 0, 0, 1 and the same edges as above (no node blown up). Limit root 2: C_1, C_2, C_3 with degrees -1, -1, 1, where both edges between C_1 and C_2 have been blown up and removed, so that only the edge between C_2 and C_3 remains.]
Limit root 1 is obtained by not blowing up any node, whereas limit root 2 is obtained by blowing up both nodes between C_1 and C_2. Each of these graphs corresponds, according to its geometric multiplicity μ = 2^β_1(Γ) = 2^1 = 2, to two roots on a corresponding smooth, irreducible curve.
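The binary blow-up decisions just described amount to a small combinatorial search. As a rough illustration (a sketch of ours covering the degree bookkeeping only, not the full limit root data with its descent parameters, and not the code used in the actual QSM scans), the following Python snippet recovers the two admissible configurations of the example above:

from itertools import combinations

# Vertices 0, 1, 2 stand for C_1, C_2, C_3; the edge list records the three nodes.
edges = [(0, 1), (0, 1), (1, 2)]
degrees = [0, 0, 2]   # degrees of L on C_1, C_2, C_3
r = 2                 # order of the root

for k in range(len(edges) + 1):
    for blown_up in combinations(range(len(edges)), k):
        new_deg = list(degrees)
        for e in blown_up:
            a, b = edges[e]
            new_deg[a] -= 1   # bi-weight (1,1) on the blown-up node
            new_deg[b] -= 1
        if all(d % r == 0 for d in new_deg):
            print("blow up edges", blown_up, "-> degrees", [d // r for d in new_deg])

Running it prints exactly the two degree assignments (0, 0, 1) and (-1, -1, 1) displayed above.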
Now that we have enumerated the limit roots, it remains to compute the global sections. This is exactly where our new techniques appear. We begin by looking at limit root 1. Our simplification techniques for this line bundle on this nodal curve proceed as follows:
h^0( Limit root 1 ) T1= h^0( [C_1 and C_2 with degrees 0 and 0, joined by two edges] ) + 1 T2= h^0( [a single rational component of degree 0 with one node (self-loop)] ) + 1 .
The step T1 prunes a leaf while T2 removes an interior edge. We will give more details on these techniques in sections <ref> and <ref>. By following these steps, we were able to reduce the original problem to a line bundle on an arguably simpler nodal curve:
[A single rational component of degree 0 with one node (self-loop).]
This last graph cannot be simplified further, which is why we call such a configuration a terminal graph. We classify these terminal graphs in <ref> and analyze their number of global sections. In this case, it is not hard to see that there is exactly one global section provided that the descent data satisfies λ = 1. This corresponds to the canonical bundle on this nodal curve. In all other cases, we have no global sections. This is why we say that this terminal graph is jumping.
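To make the role of the descent data explicit (this is our own one-line argument, not part of the classification carried out later): a degree-0 line bundle pulled back to the normalization ℙ^1 has only constant sections c, and the gluing over the self-node identifies the two values of a section at the preimages of the node up to the factor λ, so a global section must satisfy
c = λ· c .
Hence h^0 = 1 precisely for λ = 1, in which case the bundle is trivial and agrees with the dualizing sheaf of this nodal curve, and h^0 = 0 otherwise.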
We now return to the second limit root. For this limit root, we obtain along the same lines:
h^0( Limit root 2 ) T1= h^0( [two disjoint rational components of degrees -1 and -1] ) + 1 .
For this limit root, the pruned curve admits no global sections irrespective of the descent data, so that h^0 = 1 in all cases. We summarize this information as follows:
h^0 = 1: 2 roots (Limit root 2); h^0 ≥ 1: 2 roots (Limit root 1).
Certainly, the nodal curves encountered for the F-theory QSMs are more involved. But still, the steps to be taken are analogous to the above example. In the following section, we provide more details on the leaf pruning and removal of interior edges that eventually lead to the classification of terminal graphs in <ref>. Subsequently, we return to the F-theory QSMs in <ref>.
§ SIMPLIFICATION TECHNIQUES
§.§ Pruning leaves
Let (C, L) be a pair consisting of a nodal curve C and line bundle L. Suppose that the dual graph Γ of C contains a tree (consisting of only rational components) that is attached to exactly one component (of genus 0 or 1) at exactly one node. We put forward an algorithm that prunes leaves repeatedly until the entire tree is removed.
[Algorithmic pruning of leaves]
Suppose that Γ contains a leaf T, i.e. T is a vertex that is attached to exactly one vertex U of Γ at a nodal singularity p. If d_T = deg(L|_T) ≥ 0, then let C^' = C ∖ T and define a line bundle L' = L|_C' on C'. Then,
h^0(C, L) = h^0(C^', L') + d_T .
If d_T < 0, then let C^' = C ∖ T and define a new line bundle L^' = L ⊗ O_C^'(-p) on C^'. In particular, it satisfies L^'|_U = L|_U ⊗𝒪_U(-p) and L^'|_C^'∖ U = L|_C^'∖ U.
Then,
h^0(C, L) = h^0(C^', L').
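A minimal sketch of this pruning step is given below (our own illustration, not the implementation used for the QSM computations). It only tracks the dual graph and the degrees, so it cannot distinguish non-isomorphic degree-0 bundles on elliptic components; as discussed next, only a lower bound on h^0 is obtained there:

def prune_leaves(adj, genus, degree):
    # adj: dict vertex -> list of neighbours (each leaf meets the rest in one node);
    # genus, degree: dicts vertex -> int. Returns the accumulated offset, so that
    # h^0(C, L) = h^0(pruned curve, pruned bundle) + offset, and the surviving vertices.
    offset = 0
    pruned = True
    while pruned:
        pruned = False
        for v in list(adj):
            if len(adj) > 1 and len(adj[v]) == 1 and genus[v] == 0:
                u = adj[v][0]
                if degree[v] >= 0:
                    offset += degree[v]     # h^0(C, L) = h^0(C', L') + d_T
                else:
                    degree[u] -= 1          # replace L|_U by L|_U(-p)
                adj[u].remove(v)
                del adj[v], degree[v], genus[v]
                pruned = True
                break
    return offset, sorted(adj)

# One of the examples discussed below, with d = 5 chosen purely for illustration:
# E(5) -- C_1(-2) -- C_2(0) -- C_3(3) -- C_4(-1)
adj = {"E": ["C1"], "C1": ["E", "C2"], "C2": ["C1", "C3"], "C3": ["C2", "C4"], "C4": ["C3"]}
genus = {"E": 1, "C1": 0, "C2": 0, "C3": 0, "C4": 0}
degree = {"E": 5, "C1": -2, "C2": 0, "C3": 3, "C4": -1}
print(prune_leaves(adj, genus, degree), degree)   # offset 2; only E survives, with degree 4 = d-1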
By repeating this procedure and pruning leaves repeatedly, we can remove subtrees of Γ. Let us explain the case where d_T < 0 further.
If U has genus 0, then replacing L with L' amounts to replacing d_U with d_U-1.
This is because line bundles on rational curves are uniquely specified by their degree. However, this is no longer true when U has genus 1. That is, non-isomorphic bundles on an elliptic curve can have the same degree and this adds ambiguity when calculating h^0. Indeed, the Riemann-Roch Theorem states that for a degree d line bundle L on an elliptic curve E,
h^0(E, L) =
d, if d ≥ 1,
0 or 1, if d = 0,
0, if d < 0.
So if the pruning leads to a degree 0 line bundle on the elliptic component, then we can only find a lower bound for h^0. For example, let d < 0 and consider the curve C and line bundle L:
[Graph: an elliptic component E of degree 1 attached by one node to a rational component C_1 of degree d.]
The degree 1 line bundle on E is either 𝒪_E(p), where p is the node on E, or 𝒪_E(q) for some other point q ∈ E. Applying the algorithm involves pruning C_1 and demanding that sections of the line bundle on E have a zero at p. This leads to the weighted dual graph Γ^':
[Graph: the elliptic component E alone, now carrying degree 0.]
At this stage, the line bundle on E is either 𝒪_E(p-p) = 𝒪_E, which has h^0=1, or 𝒪_E(q-p), which has h^0 = 0.
We illustrate this algorithm with a few examples. Here, genus 0 components are marked in pink while genus 1 components are marked in green. When it is possible for two vertices to be connected to each other by more than one edge, we draw two dashed edges between the two vertices.
[When the tree is based at a rational component] Let (C, L) be depicted by the following graph Γ. Suppose that Γ contains a tree, consisting of rational components C_1 to C_4, that is connected to a rational component R. Let S be the complement of the tree in Γ.
[Graph: the remainder S ∖ R of degree d, attached by possibly several edges (written "==" below) to the rational component R with d_R = -2, followed by the chain R — C_1 (d_1 = 0) — C_2 (d_2 = 3) — C_3 (d_3 = -2) — C_4 (d_4 = 1).]
First, prune the leaf C_4 from C_3. This leads to the following pair (C', L'):
[Graph: S ∖ R (d) == R (d_R = -2) — C_1 (d_1 = 0) — C_2 (d_2 = 3) — C_3 (d_3 = -2).]
Since d_4 ≥ 0, we have that h^0( C, L ) = h^0( C', L' ) + 1. Next, prune C_3 from C_2. This leads to the following pair (C”, L”)
[Graph: S ∖ R (d) == R (d_R = -2) — C_1 (d_1 = 0) — C_2 (d_2 = 2).]
Note that d_2 was reduced by 1 since d_3 = -2 < 0. We have that h^0( C, L) = h^0( C”, L”) + 1. Next, prune C_2 from C_1, which leads to the following pair (C”', L”'):
[Graph: S ∖ R (d) == R (d_R = -2) — C_1 (d_1 = 0).]
Since d_2 = 2, the algorithm tells us to increase the offset by 2. Thus, h^0( C, L ) = h^0( C”', L”') + 3. Next, prune C_1 from R and obtain the pair (C”” = S, L””).
[Graph: S ∖ R (d) == R (d_R = -2).]
Since d_1 = 0, the offset is 0 and h^0( C, L ) = h^0( C^'''', L””) + 3.
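For readers who prefer explicit bookkeeping, the offsets of this example can be reproduced by the following minimal sketch (our own illustrative code, not the computer scan used for the actual results); the data structures, the function name and the leaf ordering are ours, and only rational leaves are handled.

def prune_rational_leaf(degrees, adjacency, leaf):
    """Prune a rational leaf; return the offset with h^0(C, L) = h^0(C', L') + offset."""
    (parent,) = adjacency[leaf]          # a leaf has exactly one neighbour
    d_leaf = degrees.pop(leaf)
    adjacency[parent].remove(leaf)
    del adjacency[leaf]
    if d_leaf >= 0:
        return d_leaf                    # the leaf contributes d_leaf extra sections
    degrees[parent] -= 1                 # negative degree: sections on the parent must vanish at the node
    return 0

# Example above: the tree R - C1 - C2 - C3 - C4; the rest of the curve S is not affected.
degrees   = {"R": -2, "C1": 0, "C2": 3, "C3": -2, "C4": 1}
adjacency = {"R": ["C1"], "C1": ["R", "C2"], "C2": ["C1", "C3"],
             "C3": ["C2", "C4"], "C4": ["C3"]}
offset = sum(prune_rational_leaf(degrees, adjacency, leaf) for leaf in ["C4", "C3", "C2", "C1"])
print(offset, degrees["R"])   # 3 and -2, i.e. h^0(C, L) = h^0(S, L'''') + 3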
[When the tree is based at an elliptic component] Suppose that Γ contains a tree, consisting of rational components C_1 to C_4, that is connected to an elliptic component E.
[scale=0.6, baseline=(current bounding box.center)]
4.0;
(-2*,0) – (2*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=below:E]d;
at (-1*,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = -2;
at (0,0) [stuff_fill_red, scale=0.6, label=below:C_2]d_2 = 0;
at (1*,0) [stuff_fill_red, scale=0.6, label=below:C_3]d_3 = 3;
at (2*,0) [stuff_fill_red, scale=0.6, label=below:C_4]d_4 = -1;
First, prune C_4 from C_3. Since d_4 < 0, we replace d_3 with d_3-1. This leads to the pair (C', L'):
[scale=0.6, baseline=(current bounding box.center)]
4.0;
(-2*,0) – (1*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=below:E]d;
at (-1*,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = -2;
at (0,0) [stuff_fill_red, scale=0.6, label=below:C_2]d_2 = 0;
at (1*,0) [stuff_fill_red, scale=0.6, label=below:C_3]d_3 = 2;
We have that h^0(C, L) = h^0(C', L'). Next, prune C_3 from C_2. This leads to the pair (C”, L”):
[scale=0.6, baseline=(current bounding box.center)]
4.0;
(-2*,0) – (0,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=below:E]d;
at (-1*,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = -2;
at (0,0) [stuff_fill_red, scale=0.6, label=below:C_2]d_2 = 0;
We have that h^0(C, L) = h^0(C”, L”)+2. Pruning C_2 from C_1 leads to the pair (C”', L”'):
[scale=0.6, baseline=(current bounding box.center)]
4.0;
(-2*,0) – (-1*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=below:E]d;
at (-1*,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = -2;
We have that h^0(C, L) = h^0(C”', L”')+2. Next, prune the leaf C_1 from E. Since d_1 < 0, we replace d with d-1. This leaves us with the pair (C”” = E, L””):
[scale=0.6, baseline=(current bounding box.center)]
4.0;
at (-2*,0) [stuff_fill_green, scale=0.6, label=below:E]d-1;
The final formula for h^0(C, L) is
h^0(C, L) = h^0(E, L””) + 2 = 2 +
d-1, if d ≥ 2,
0 or 1, if d = 1,
0, if d < 1.
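The elliptic endpoint of the pruning can be captured by a small helper (again our own illustrative code); the ambiguity at degree 0 is exactly the distinction between the trivial and a non-trivial degree-0 bundle discussed earlier.

def h0_elliptic(e):
    # number of global sections of a degree-e line bundle on a smooth elliptic curve
    if e >= 1:
        return e
    if e == 0:
        return (0, 1)    # "0 or 1": depends on whether the bundle is trivial
    return 0

# In the example above the pruned degree on E is d - 1 and the accumulated offset is 2, so
# h^0(C, L) = 2 + h0_elliptic(d - 1), reproducing the case distinction displayed above.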
[When the tree is based at an elliptic component] Let (C, L) be depicted by the following graph Γ. Suppose that Γ contains a tree, consisting of rational components C_1 to C_4, that is connected to an elliptic component E. Let S be the complement of the tree in Γ.
[scale=0.6, baseline=(current bounding box.center)]
4.0;
[-,out = 30, in = 150, dashed] (-2*,0) edge (-1*,0);
[-,out = -30, in = -150, dashed] (-2*,0) edge (-1*,0);
(-1*,0) – (3*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=below:S ∖ E]d;
at (-1*,0) [stuff_fill_green, scale=0.6, label=below:E]d_E = 1;
at (0,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = 0;
at (1*,0) [stuff_fill_red, scale=0.6, label=below:C_2]d_2 = -1;
at (2*,0) [stuff_fill_red, scale=0.6, label=below:C_3]d_3 = 2;
at (3*,0) [stuff_fill_red, scale=0.6, label=below:C_4]d_4 = 0;
First, prune the leaf C_4 from C_3. This leads to the following pair (C', L'):
[scale=0.6, baseline=(current bounding box.center)]
4.0;
[-,out = 30, in = 150, dashed] (-2*,0) edge (-1*,0);
[-,out = -30, in = -150, dashed] (-2*,0) edge (-1*,0);
(-1*,0) – (2*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=below:S ∖ E]d;
at (-1*,0) [stuff_fill_green, scale=0.6, label=below:E]d_E = 1;
at (0,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = 0;
at (1*,0) [stuff_fill_red, scale=0.6, label=below:C_2]d_2 = -1;
at (2*,0) [stuff_fill_red, scale=0.6, label=below:C_3]d_3 = 2;
We have that h^0( C, L ) = h^0( C', L' ). Next, prune C_3 from C_2. This leads to the pair (C”, L”):
[scale=0.6, baseline=(current bounding box.center)]
4.0;
[-,out = 30, in = 150, dashed] (-2*,0) edge (-1*,0);
[-,out = -30, in = -150, dashed] (-2*,0) edge (-1*,0);
(-1*,0) – (1*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=below:S ∖ E]d;
at (-1*,0) [stuff_fill_green, scale=0.6, label=below:E]d_E = 1;
at (0,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = 0;
at (1*,0) [stuff_fill_red, scale=0.6, label=below:C_2]d_2 = -1;
Since d_3 ≥ 0, we have that h^0( C, L) = h^0( C”, L”) + 2. Next, prune C_2 from C_1, which leads to the following pair (C”', L”'):
[scale=0.6, baseline=(current bounding box.center)]
4.0;
[-,out = 30, in = 150, dashed] (-2*,0) edge (-1*,0);
[-,out = -30, in = -150, dashed] (-2*,0) edge (-1*,0);
(-1*,0) – (0,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=below:S ∖ E]d;
at (-1*,0) [stuff_fill_green, scale=0.6, label=below:E]d_E = 1;
at (0,0) [stuff_fill_red, scale=0.6, label=below:C_1]d_1 = -1;
Since d_2 <0, we reduce d_1 by 1 and we have h^0( C, L ) = h^0( C”', L”') + 2. Next, prune C_1 from E and obtain the pair (C”” = S, L””).
[scale=0.6, baseline=(current bounding box.center)]
4.0;
[-,out = 30, in = 150, dashed] (-2*,0) edge (-1*,0);
[-,out = -30, in = -150, dashed] (-2*,0) edge (-1*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=below:S ∖ E]d;
at (-1*,0) [stuff_fill_green, scale=0.6, label=below:E]d_E = 0;
Since d_1 < 0, we reduce d_E by 1 and we have h^0( C, L ) = h^0( C^'''', L””) + 2.
§.§ Removing interior edges
We present our second major technique that simplifies the pair (C, L).
Suppose that a rational component C_mid of C is connected to two other components C_1 and C_2 (not necessarily distinct) via a single edge each.
We call C_mid and the two half-edges emanating from it an interior edge.
Suppose that C contains an interior edge C_mid that meets C_1 at node p_1 and C_2 at node p_2. Let d = deg(L|_C_mid).
* If d > 0, let C' = C ∖ C_mid and L' = L|_C'. Then, h^0(C, L) = h^0(C', L') + d-1.
* If d < 0, let C' = C ∖ C_mid and define a new line bundle L' = L|_C'⊗𝒪_C'(-p_1-p_2) on C'. Then, h^0(C, L) = h^0(C', L').
* If d =0, form a new curve C^' by removing C_mid from C and joining the remaining half-edges to form an edge. This gives rise to a map π: C → C^' that collapses the interior edge to the new node. Define the sheaf L' = π_*(L) on C', which is a line bundle because d=0. Then, h^0(C, L) = h^0(C', L').
Again, if C_i is rational, then twisting L|_C_i by 𝒪_C_i( - p_i ) in the algorithm amounts to decreasing deg(L|_C_i) by 1. We illustrate the different cases of the algorithm in the following figures. We mark a vertex orange if its genus is left unspecified, i.e. it may be either 0 or 1.
[Case 1: d > 0]
Consider the pair (C, L) given by the following graph and let S = C ∖ (C_1 ∪ C_mid∪ C_2). Denoting d_i = deg(L|_C_i), we have that
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -35, in = 180] (-2*,0) edge (2*,-1*);
[-,out = -145, in = 0] (6*,0) edge (2*,-1*);
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=above:C_1]d_1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_connect, scale=0.6, label=above:C_2]d_2;
at (2*,-1*) [stuff_fill_red, scale=0.6, label=below:C_mid]d > 0;
By the algorithm, we define a new pair (C' = C ∖ C_mid, L' = L|_C'), which is given by the following graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=above:C_1]d_1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_connect, scale=0.6, label=above:C_2]d_2;
We have that h^0(C, L) = h^0(C', L') + d-1.
[Case 2: d < 0] Consider the pair (C, L) given by the following graph and let S = C ∖ (C_1 ∪ C_mid∪ C_2). Denoting d_i = deg(L|_C_i), we have that
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -35, in = 180] (-2*,0) edge (2*,-1*);
[-,out = -145, in = 0] (6*,0) edge (2*,-1*);
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=above:C_1]d_1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_connect, scale=0.6, label=above:C_2]d_2;
at (2*,-1*) [stuff_fill_red, scale=0.6, label=below:C_mid]d < 0;
By the above algorithm, we define a new pair (C', L') by the following graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=above:C_1]d_1-1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_connect, scale=0.6, label=above:C_2]d_2-1;
Then, h^0(C, L) = h^0(C', L').
[Case 3: d = 0]
Consider the pair (C, L) given by the following graph and let S = C ∖ (C_1 ∪ C_mid∪ C_2). Denoting d_i = deg(L|_C_i), we have that
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -35, in = 180] (-2*,0) edge (2*,-1*);
[-,out = -145, in = 0] (6*,0) edge (2*,-1*);
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=above:C_1]d_1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_connect, scale=0.6, label=above:C_2]d_2;
at (2*,-1*) [stuff_fill_red, scale=0.6, label=below:C_mid]d =0;
By the algorithm, define a new pair (C', L') given by the following graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0); [-,out = -35, in = -145] (-2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_connect, scale=0.6, label=above:C_1]d_1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_connect, scale=0.6, label=above:C_2]d_2;
Then, h^0(C, L) = h^0(C', L').
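In code, the bookkeeping of the three cases just illustrated can be summarized as follows (our own illustrative sketch, in the same notation as the pruning sketch above; it is not the implementation used for the computer scan).

def remove_interior_edge(degrees, adjacency, mid):
    """Remove the interior edge mid; return the offset with h^0(C, L) = h^0(C', L') + offset."""
    c1, c2 = adjacency.pop(mid)          # the two components that mid meets (possibly equal)
    d = degrees.pop(mid)
    adjacency[c1].remove(mid)
    adjacency[c2].remove(mid)
    if d > 0:                            # case 1: two node conditions on d + 1 sections
        return d - 1
    if d < 0:                            # case 2: sections on the neighbours must vanish at the nodes
        degrees[c1] -= 1
        degrees[c2] -= 1
        return 0
    adjacency[c1].append(c2)             # case 3 (d = 0): collapse the interior edge to a new node
    adjacency[c2].append(c1)
    return 0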
We will now apply the algorithm to a few examples.
Consider the pair (C, L) given by the following graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -35, in = 180] (-2*,0) edge (2*,-1*);
[-,out = -145, in = 0] (6*,0) edge (2*,-1*);
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_red, scale=0.6, label=above:C_2]1;
at (2*,-1*) [stuff_fill_red, scale=0.6, label=below:C_mid]2;
Since deg(L|_C_mid) > 0, the algorithm tells us to form the new pair (C', L') given by the graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:E]1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_red, scale=0.6, label=above:F]1;
Then, h^0(C, L) = h^0(C', L') + 2-1= h^0(C', L') + 1.
Consider the pair (C, L) given by the following graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -35, in = 180] (-2*,0) edge (2*,-1*);
[-,out = -145, in = 0] (6*,0) edge (2*,-1*);
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]1;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_green, scale=0.6, label=above:C_2]1;
at (2*,-1*) [stuff_fill_red, scale=0.6, label=below:C_mid]-2;
Since deg(L|_C_mid) < 0, the algorithm tells us to remove C_mid, adjust the degrees and form the new pair (C', L'):
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
[-,out = 10, in = 170, dashed] (2*,0) edge (6*,0);
[-,out = -10, in = -170, dashed] (2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]0;
at (2*,0) [stuff_fill_connect, scale=0.6, label=above:S];
at (6*,0) [stuff_fill_green, scale=0.6, label=above:C_2]0;
Then, h^0(C, L) = h^0(C', L'). Since the bundles on the elliptic components C_1 and C_2 now have degree 0, each has either 0 or 1 global sections depending on whether it is trivial, so we can only compute a lower bound on h^0.
Consider the pair (C, L) given by the following graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -35, in = 180] (-2*,0) edge (2*,-1*);
[-,out = -145, in = 0] (6*,0) edge (2*,-1*);
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
(2*,0) – (6*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]1;
at (2*,0) [stuff_fill_red, scale=0.6, label=above:S]1;
at (6*,0) [stuff_fill_red, scale=0.6, label=above:C_3]0;
at (2*,-1*) [stuff_fill_red, scale=0.6, label=below:C_2]2;
We can first remove C_3 using the algorithm. Since deg(L|_C_3) = 0, we remove C_3 and join the half-edges to produce a new pair (C', L'):
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
(2*, 0) – (6*, 0);
[-,out = -35, in = -145] (-2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]1;
at (2*,0) [stuff_fill_red, scale=0.6, label=above:S]1;
at (6*,0) [stuff_fill_red, scale=0.6, label=above:C_2]2;
Then, h^0(C, L) = h^0(C', L'). To remove C_2, we apply the algorithm to form a new pair (C”, L”):
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]1;
at (2*,0) [stuff_fill_red, scale=0.6, label=above:S]1;
Then, h^0(C, L) = h^0(C”, L”) + 1.
Alternatively, we could remove C_2 first. This involves forming a new curve (C', L'):
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
(2*, 0) – (6*, 0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]1;
at (2*,0) [stuff_fill_red, scale=0.6, label=above:S]1;
at (6*,0) [stuff_fill_red, scale=0.6, label=above:C_3]0;
Then, h^0(C, L) = h^0(C', L') + 1. Next, we view C_3 as a leaf and prune it by forming the new pair (C”, L”):
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 10, in = 170, dashed] (-2*,0) edge (2*,0);
[-,out = -10, in = -170, dashed] (-2*,0) edge (2*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]1;
at (2*,0) [stuff_fill_red, scale=0.6, label=above:S]1;
Then, h^0(C, L) = h^0(C”, L”) + 1. Note that this pair (C”, L”) coincides with the pair obtained in the first procedure. So, both procedures conclude with the same result.
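Assuming the helper remove_interior_edge from the sketch above, the order independence observed in this example can be checked directly:

degrees   = {"C1": 1, "S": 1, "C2": 2, "C3": 0}
adjacency = {"C1": ["S", "S", "C2"], "S": ["C1", "C1", "C3"],
             "C2": ["C1", "C3"], "C3": ["C2", "S"]}
offset  = remove_interior_edge(degrees, adjacency, "C3")   # case d = 0: collapse, + 0
offset += remove_interior_edge(degrees, adjacency, "C2")   # case d = 2 > 0: + 1
print(offset)   # 1, i.e. h^0(C, L) = h^0(C'', L'') + 1, matching both procedures above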
We can also apply this to remove interior edges connected to the same curve component.
[When C_1=C_2]
Consider the pair (C, L) given by the following graph:
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -25, in = -155] (-2*,0) edge (4*,0);
[-,out = 25, in = 155] (-2*,0) edge (4*,0);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]d_1;
at (4*,0) [stuff_fill_red, scale=0.6, label=above:C_mid]d;
If d = 0, the algorithm tells us to remove C_mid and join the half-edges to form the new pair (C', L'):
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]d_1;
Then h^0(C, L) = h^0(C', L').
If d < 0, the algorithm tells us to remove C_mid and adjust the degrees on C_1 in the following way to obtain the new pair (C', L'):
[scale=0.6, baseline=(current bounding box.center)]
2.0;
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]d_1-2;
By Riemann-Roch we have that:
h^0(C, L) = h^0(C', L') =
d_1-2, if d_1 > 2,
0 or 1, if d_1 = 2,
0, if d_1 < 2.
To explain the d_1 = 2 case further, let p and q be the nodes on C_1.
Applying the algorithm involves demanding that the line bundle sections on C_1 vanish at p and q.
If L|_C_1≅𝒪_C_1(p+q), then the algorithm results in the line bundle L' = 𝒪_C_1(p+q-p-q) = 𝒪_C_1, which has h^0(C', L')=1. Otherwise, it is a non-trivial, degree 0 line bundle on C_1 and h^0(C', L') = 0.
If d > 0, we remove C_mid and obtain the new pair (C', L'). The algorithm states that h^0(C, L) = h^0(C', L') + d-1.
[scale=0.6, baseline=(current bounding box.center)]
2.0;
at (-2*,0) [stuff_fill_green, scale=0.6, label=above:C_1]d_1;
§.§.§ Proofs of interior edge removal algorithm
In this section, we present a case-by-case proof of the interior-edge-removal techniques presented earlier. Some of these results could be proved more efficiently using techniques from algebraic geometry but we choose to leave the elementary proofs for accessibility. Suppose that a pair (C, L) contains an interior edge C_mid between two components C_1 and C_2. Denote d = deg(L|_C_mid) and d_i = deg(L|_C_i). Since PGL_2 acts transitively on pairs of distinct points of ℙ^1, we may assume that C_mid meets C_1 and C_2 at the points [1: 0] and [0: 1] on C_mid respectively.
§.§.§ Case 1: d > 0.
If C' = C ∖ C_mid and L' = L|_C', then h^0(C, L) = h^0(C', L') + d-1.
Let n_1, n_2 be the nodes at which C_mid is attached to C_1 and C_2. Observe that we have the following:
h^0(C, L) = h^0(C' ∪ C_mid, L)
≥ h^0(C', L') + h^0(C_mid, L|_C_mid) - h^0({n_1, n_2}, L|_{n_1, n_2})
= h^0(C', L') + d - 1.
To prove that equality is achieved, we need to show that the nodes impose exactly two independent conditions. Let [z: w] be the coordinates on C_mid≅ℙ^1. Then, any section s ∈ H^0(C_mid, L) is of the form
s([z:w]) = ∑^d_i=0γ_i z^iw^d-i, γ_i ∈ℂ.
Let s_1 ∈ H^0(C_1, L|_C_1) and s_2 ∈ H^0(C_2, L|_C_2) be arbitrary sections. Then, s coincides with s_1 at the node [1:0] and s coincides with s_2 at the node [0:1] up to descent data, i.e.
γ_d = s([1:0]) = λ_1 s_1([1:0]) , γ_0 = s([0:1]) = λ_2 s_2([0:1]) ,
where λ_1, λ_2 ∈ℂ^* are descent data parameters assigned at the nodes. These equations are well-defined in the sense that scaling the intersection points on ℙ^1 does not change h^0(C, L). Indeed, the basis sections of H^0(C, L) will differ by a PGL_2(ℂ)-transformation under such scaling. Hence, these equations are two independent conditions imposed onto H^0(C_mid, L|_C_mid). So, the result follows.
§.§.§ Case 2: d < 0.
Let (C', L') be the pair defined as in Algorithm <ref>. Then, h^0(C, L) = h^0(C', L').
Suppose d_i = deg(L|_C_i) > 0 and let p_i be the node on C_i at which C_mid is attached, for i = 1,2. Since deg(L|_C_mid) < 0, the only section of L on C_mid is the zero section. Hence, sections s_1 ∈ H^0(C_1, L|_C_1) and s_2 ∈ H^0(C_2, L|_C_2) satisfy the following gluing conditions at the nodes regardless of the descent data:
s_1(p_1) = 0 , s_2(p_2) = 0 .
Demanding s_i to vanish at p_i amounts to twisting the line bundle on C_i by O_C_i(-p_i). Hence, the result h^0(C, L) = h^0(C', L') follows for d_i > 0.
If one of the d_i is negative, then the zero section on the corresponding component C_i glues to the zero section on C_mid uniquely. Since d_i < 0, twisting the line bundle on C_i still results in a negative degree d_i - 1 < 0. Hence, h^0(C_i, L'|_C_i) = h^0(C_i, L|_C_i) = 0 and the result follows.
§.§.§ Case 3: d = 0.
Suppose that C_mid meets C_1 at the node p_1 ∈ C_1 and C_2 at the node p_2 ∈ C_2. Remove C_mid and join the half-edges together so that C_1 meets C_2 at p_1 ∈ C_1 and p_2 ∈ C_2. This gives rise to a map π: C → C^' that collapses the interior edge to the new node. Define the sheaf L' = π_*(L) on C', which is a line bundle because d=0. Then, h^0(C, L) = h^0(C', L').
Since d is zero, H^0(C_mid, L|_C_mid) is spanned by one constant section γ.
Suppose that sections over C_1 (resp. C_2) are glued to sections over C_mid with descent data 1 ∈ℂ^* (resp. λ).
Any section s_1 ∈ H^0(C_1, L|_C_1) and s_2 ∈ H^0(C_2, L|_C_2) satisfy s_1(p_1)= γ and s_2(p_2) = λγ = λ s_1(p_1). This shows that h^0(C, L) = h^0(C', L').
§.§ Terminal graphs
We call a connected graph Γ that cannot be reduced further by our techniques (pruning external trees and removing internal edges) terminal.
§.§.§ Rational circuits
A rational circuit is a terminal graph whose components are all rational.
The only rational circuit Γ with β_1( Γ ) = 0 consists of a single vertex and no edges, i.e. is a rational curve.
Let Γ be a rational circuit with V > 1 vertices and first Betti number β_1( Γ ). Then
V ≤ 2 β_1( Γ ) - 2 .
Recall that the first Betti number of a graph is given by
β_1( Γ ) = E + C - V ,
where E is the number of edges, C the number of connected components and V the number of vertices. We assume that Γ is a rational circuit. So, by definition, Γ is connected and C = 1. We see that
E = β_1( Γ ) + V - 1 .
Assume that Γ has at least two (rational) vertices, i.e. V > 1. Such a vertex can be removed by our techniques if there are fewer than three edges connecting to it: in that case it either belongs to a tree (one edge) or forms an interior edge (two edges). So in order for the graph Γ to be terminal, we must have at least 3 edges connecting to each vertex:
3V/2≤ E = β_1( Γ ) + V - 1 .
Consequently, we conclude that for a fixed first Betti number β_1( Γ ), a rational circuit has a number of vertices V subject to the condition V ≤ 2 β_1( Γ ) - 2.
Let us consider rational circuits Γ with β_1( Γ ) = 1 and 1 < V. Then, the above bound states:
V ≤ 2 β_1( Γ ) - 2 = 0 .
Consequently, there are no such rational circuits. Rather, the only rational circuits Γ with β_1( Γ ) = 1 satisfy V = 1 and E = 1. Consequently, the only rational circuit with β_1( Γ ) = 1 is as follows:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d;
Observe that each rational circuit Γ describes, and is determined by, a family of stable curves of arithmetic genus β_1 ( Γ). Since the Deligne-Mumford compactified moduli space classifying stable curves of fixed genus has finitely many strata, we can conclude the following.
The number of rational circuits with fixed first Betti number β_1( Γ ) is finite.
For β_1( Γ ) ∈{ 0, 1, 2, 3 }, the rational circuits are as follows:
* β_1( Γ ) = 0:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
at (-2*,0) [stuff_fill_red, scale=0.6]d;
* β_1( Γ ) = 1:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d;
[thick, decorate, decoration = calligraphic brace] (-1.4*,-1) – (-2.2*,-1) node[pos=0.25*,below=6pt]G_1;
* β_1( Γ ) = 2:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -20, in = 20, looseness = 3] (-1.9*,-0.2*) edge (-1.9*,+0.2*);
[-,out = -160, in = 160, looseness = 3] (-2.1*,-0.2*) edge (-2.1*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d;
[thick, decorate, decoration = calligraphic brace] (-1.4*,-1) – (-2.6*,-1) node[pos=0.25*,below=6pt]G_2;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 0, in = 180] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
[thick, decorate, decoration = calligraphic brace] (2.6*,-1) – (-2.6*,-1) node[pos=0.5*,below=6pt]G_3;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -150, in = 150, looseness = 10] (-2*,-0.2*) edge (-2*,+0.2*);
[-,out = -30, in = 30, looseness = 10] (2*,-0.2*) edge (2*,+0.2*);
[-,out = 180, in = 0] (-2*,0) edge (2*,+0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
[thick, decorate, decoration = calligraphic brace] (2.6*,-1) – (-2.6*,-1) node[pos=0.5*,below=6pt]G_4;
* β_1( Γ ) = 3:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -20, in = 20, looseness = 8] (0.2*,-0.1*) edge (0.2*,+0.1*);
[-,out = -90, in = -90, looseness = 8] (-0.1*,-0.1*) edge (0.1*,-0.1*);
[-,out = 90, in =90, looseness = 8] (-0.1*,0.1*) edge (0.1*,0.1*);
at (0,0) [stuff_fill_red, scale=0.6]d;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 20, in = 160] (-2*,0) edge (+2*,0);
[-,out = -20, in = -160] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
[thick, decorate, decoration = calligraphic brace] (2.6*,-1) – (-2.6*,-1) node[pos=0.5*,below=6pt]G_5;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -150, in = 150, looseness = 8] (-2*,-0.2*) edge (-2*,+0.2*);
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 0, in = 180] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -150, in = 150, looseness = 8] (-2*,-0.2*) edge (-2*,+0.2*);
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
[-,out = -30, in = 30, looseness = 8] (2*,-0.2*) edge (2*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (2*,0);
[-,out = -45, in = -135] (-2*,0) edge (2*,0);
[-,out = 45, in = 135] (2*,0) edge (6*,0);
[-,out = -45, in = -135] (2*,0) edge (6*,0);
[-,out = 90, in = 90] (-2*,0) edge (6*,0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
at (+6*,0) [stuff_fill_red, scale=0.6]d_γ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 0, in = 180] (-2*,0) edge (2*,0);
[-,out = -45, in = -135] (-2*,0) edge (2*,0);
[-,out = 0, in = 180] (2*,0) edge (6*,0);
[-,out = 0, in = 180] (6*,0) edge (10,0);
[-,out = -45, in = -135] (6*,0) edge (10,0);
[-,out = 40, in = 140] (-2*,0) edge (10,0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
at (+6*,0) [stuff_fill_red, scale=0.6]d_γ;
at (+10,0) [stuff_fill_red, scale=0.6]d_δ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -160, in = 160, looseness = 8] (1.8*,-0.2*) edge (1.8*,+0.2*);
[-,out = 0, in = 180] (2*,0) edge (6*,0);
[-,out = 45, in = 135] (6*,0) edge (10,0);
[-,out = -45, in = -135] (6*,0) edge (10,0);
[-,out = -20, in = 20, looseness = 8] (10.2*,-0.2*) edge (10.2*,+0.2*);
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
at (+6*,0) [stuff_fill_red, scale=0.6]d_γ;
at (+10,0) [stuff_fill_red, scale=0.6]d_δ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -160, in = 160, looseness = 8] (1.8*,-0.2*) edge (1.8*,+0.2*);
[-,out = 0, in = 180] (2*,0) edge (6*,0);
[-,out = -90, in = -90, looseness = 8] (5.8*,-0.1*) edge (6.2*,-0.1*);
[-,out = 90, in =90, looseness = 8] (5.8*,0.1*) edge (6.2*,0.1*);
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
at (+6*,0) [stuff_fill_red, scale=0.6]d_γ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 40, in = 140, looseness = 10] (-1.85*,0.2*) edge (-2.15*,+0.2*);
[-,out = 40, in = 140, looseness = 10] (2.15*,0.2*) edge (1.85*,+0.2*);
[-,out = 40, in = 140, looseness = 10] (6.15*,0.2*) edge (5.85*,+0.2*);
[-,out = 0, in = 180] (-2*,0) edge (6*,+0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
at (6*,0) [stuff_fill_red, scale=0.6]d_γ;
A simple computer scan shows that for the QSM applications in this paper, only the graphs G_1, G_2, G_3, G_4, G_5 appear. We may thus wonder if all line bundles on these curves with the prescribed degrees have the same number of global sections, or if there are exceptions (a.k.a. jumps).
Consider a rational circuit Γ. By definition, this is a terminal graph whose components are all rational. In addition, by our prescription, for each rational component/vertex V_i of Γ, there is an integer d_i which encodes a line bundle L_i = 𝒪_V_i( d_i ) ≅𝒪_ℙ^1( d_i ). Altogether, this defines a family ℱ of line bundles on the nodal curve C^∙ whose dual graph is Γ, such that for every L ∈ℱ and every vertex V_i of Γ:
L|_V_i = L_i = 𝒪_V_i( d_i ) ≅𝒪_ℙ^1( d_i ) .
We say that Γ is non-jumping if h^0( C^∙, L ) is the same for all L ∈ℱ. Otherwise, we say that Γ is jumping.
For our applications to the Quadrillion Standard Models, we must determine under which conditions on the degrees d_i the rational circuits G_1, G_2, G_3, G_4 and G_5 are jumping.
The rational circuit G_1 is jumping iff d = 0:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d;
If d < 0, then h^0( G_1, L ) = 0 for all L ∈ℱ. Next, consider d = 0. Then the sections on the ℙ^1 are constant. Provided that the descent data is 1, any constant section glues across the node and we have h^0( G_1, L ) = 1. However, if the descent data satisfies λ≠ 1, then only the section identically zero glues across the node and h^0( G_1, L ) = 0. So for d = 0, G_1 is jumping. Finally, consider d > 0. Pick [x:y] as homogeneous coordinates of V_1 = ℙ^1 and position the two points that are identified at the node (by means of an SL(2, ℂ) transformation) at [1:0] and [0:1]. Then the most general section φ∈ H^0( V_1, L|_V_1 ) is of the form
φ = ∑_i = 0^dα_i x^i y^d-i .
This section is subject to the gluing condition α_0 = λα_d. Hence, h^0( G_1, L ) = d for all L ∈ℱ.
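For illustration, the case distinction of this proof can be recorded in a small helper (ours); it returns h^0(G_1, L) as a function of the degree d and the descent datum λ at the node.

def h0_G1(d, lam):
    if d < 0:
        return 0
    if d == 0:
        return 1 if lam == 1 else 0   # only constants can glue, and they do so iff lam = 1
    return d                          # one linear gluing condition on the d + 1 coefficients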
By similar arguments, one can identify exactly for what line bundle degrees G_2, G_3, G_4 and G_5 are jumping. Specifically, one finds the following:
* G_2 is jumping iff 0 ≤ d ≤ 2.
* G_3 is jumping iff 0 ≤ d_α, d_β≤ 1.
* G_4 is jumping iff one of the following applies:
* d_α < 0 and d_β = 1,
* d_α = 0 and d_β≥ 0,
* d_α = 1 and d_β < 0,
* d_α≥ 0 and d_β = 0,
* d_α = d_β = 1.
* G_5 is jumping iff 0 ≤ d_α, d_β≤ 2.
We provide details in <ref>.
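As an example of how such conditions can be used in practice, the list for G_4 translates into the following small predicate (our own illustrative code), with d_α and d_β the degrees on the two components.

def g4_is_jumping(da, db):
    return ((da < 0 and db == 1) or (da == 0 and db >= 0) or
            (da == 1 and db < 0) or (da >= 0 and db == 0) or
            (da == 1 and db == 1))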
We notice an interesting pattern. The canonical degree typically appears to be a sort of “tipping point” beyond which there cannot be any jumps. For a smooth curve, this is indeed the pattern known from Serre duality. However, we also see exceptions to this rule. For instance, for the graph G_4, there is a jump for multidegree (d,0) for all d ≥ 0. Hence, in particular for any d larger than 2, i.e. larger than the degree of the canonical bundle of a smooth curve with genus g = 2.
§.§.§ Elliptic circuits
We call a terminal graph with one elliptic component and otherwise only rational components, an elliptic circuit.
The key difference to the previous section is that our simplification steps never remove an elliptic curve. So there is no requirement for the elliptic curve to have at least three edges leading off.
The number of connected elliptic circuits with fixed first Betti number β_1( Γ ) is finite.
For β_1( Γ ) ∈{ 0, 1, 2 }, the connected elliptic circuits are as follows:
* β_1( Γ ) = 0:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
at (-2*,0) [stuff_fill_green, scale=0.6]d;
[thick, decorate, decoration = calligraphic brace] (-1.8*,-0.6) – (-2.2*,-0.6) node[pos=0.25*,below=6pt]G_6;
* β_1( Γ ) = 1:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_green, scale=0.6]d;
[thick, decorate, decoration = calligraphic brace] (-1.4*,-1) – (-2.2*,-1) node[pos=0.25*,below=6pt]G_7;
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = 0, in = 180] (-3*,0) edge (-2*,0);
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-3*,0) [stuff_fill_green, scale=0.6]d_α;
at (-2*,0) [stuff_fill_red, scale=0.6]d_β;
[thick, decorate, decoration = calligraphic brace] (-1.4*,-1) – (-3.2*,-1) node[pos=0.25*,below=6pt]G_8;
* β_1( Γ ) = 2:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 3] (-1.9*,-0.2*) edge (-1.9*,+0.2*);
[-,out = -150, in = 150, looseness = 3] (-2.1*,-0.2*) edge (-2.1*,+0.2*);
at (-2*,0) [stuff_fill_green, scale=0.6]d;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 0, in = 180] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_green, scale=0.6]d_α;
at (+2*,0) [stuff_fill_red, scale=0.6]d_β;
[thick, decorate, decoration = calligraphic brace] (2.6*,-1) – (-2.6*,-1) node[pos=0.5*,below=6pt]G_9;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -30, in = 30, looseness = 10] (2*,-0.2*) edge (2*,+0.2*);
[-,out = -30, in = -150] (-2*,0) edge (2*,+0);
[-,out = 30, in = 150] (-2*,0) edge (2*,+0);
at (-2*,0) [stuff_fill_green, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -150, in = 150, looseness = 10] (-2*,-0.2*) edge (-2*,+0.2*);
[-,out = -30, in = 30, looseness = 10] (2*,-0.2*) edge (2*,+0.2*);
[-,out = 180, in = 0] (-2*,0) edge (2*,+0);
at (-2*,0) [stuff_fill_green, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -90, in = -90, looseness = 8] (-0.1*,-0.1*) edge (0.1*,-0.1*);
[-,out = 90, in =90, looseness = 8] (-0.1*,0.1*) edge (0.1*,0.1*);
[-,out = 180, in = 0] (-2*,0) edge (0,+0);
at (-2*,0) [stuff_fill_green, scale=0.6]d_α;
at (0,0) [stuff_fill_red, scale=0.6]d;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
(0,0) – (-2*,-);
(0,0) – (2*,-);
[-,out = 20, in = 160] (-2*,-) edge (2*,-);
[-,out = -20, in = -160] (-2*,-) edge (2*,-);
at (0,0) [stuff_fill_green, scale=0.6]d_α;
at (-2*,-) [stuff_fill_red, scale=0.6]d_β;
at (+2*,-) [stuff_fill_red, scale=0.6]d_γ;
[thick, decorate, decoration = calligraphic brace] (2.6*,-0.5-) – (-2.6*,-0.5-) node[pos=0.5*,below=6pt]G_10;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 40, in = 140, looseness = 10] (2.15*,0.2*) edge (1.85*,+0.2*);
[-,out = 40, in = 140, looseness = 10] (6.15*,0.2*) edge (5.85*,+0.2*);
[-,out = 0, in = 180] (-2*,0) edge (6*,+0);
at (-2*,0) [stuff_fill_green, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
at (6*,0) [stuff_fill_red, scale=0.6]d_γ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 40, in = 140, looseness = 10] (-1.85*,0.2*) edge (-2.15*,+0.2*);
[-,out = 40, in = 140, looseness = 10] (6.15*,0.2*) edge (5.85*,+0.2*);
[-,out = 0, in = 180] (-2*,0) edge (6*,+0);
[-,out = 90, in = -90] (2*,0) edge (2*,2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
at (2*,2*) [stuff_fill_green, scale=0.6]d_δ;
at (6*,0) [stuff_fill_red, scale=0.6]d_γ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -30, in = -150] (-2*,0) edge (6*,0);
[-,out = -60, in = -120] (-2*,0) edge (6*,0);
[-,out = 0, in = 180] (-2*,0) edge (6*,+0);
[-,out = 90, in = -90] (2*,0) edge (2*,2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
at (2*,2*) [stuff_fill_green, scale=0.6]d_δ;
at (6*,0) [stuff_fill_red, scale=0.6]d_γ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -30, in = -150] (-2*,0) edge (6*,0);
[-,out = 30, in = 150] (2*,0) edge (6*,0);
[-,out = 0, in = 180] (-6*,0) edge (6*,+0);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
at (-6*,0) [stuff_fill_green, scale=0.6]d_δ;
at (6*,0) [stuff_fill_red, scale=0.6]d_γ;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = -30, in = -150] (-2*,0) edge (2*,0);
[-,out = 30, in = 150] (-2*,0) edge (2*,0);
[-,out = 0, in = 180] (-6*,0) edge (-2*,+0);
[-,out = 0, in = 180] (2*,0) edge (6*,+0);
[-,out = -30, in = 30, looseness = 10] (6*,-0.2*) edge (6*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d_α;
at (2*,0) [stuff_fill_red, scale=0.6]d_β;
at (-6*,0) [stuff_fill_green, scale=0.6]d_δ;
at (6*,0) [stuff_fill_red, scale=0.6]d_γ;
A simple computer scan shows that only G_6, G_7, G_8, G_9, G_10 appear in our QSM applications. We focus on the degrees that matter for the QSM applications and show that each configuration is jumping. For this, we construct a line bundle on the terminal graph whose number of global sections is strictly larger than the lower bound given by the sum of the numbers of local sections minus the number of edges. The detailed arguments are given in <ref>.
§ IMPLICATIONS FOR F-THEORY QSMS
§.§ Improved statistics for “rational” QSMs
We now employ the refined simplification techniques to recompute the number of limit root bundles for the 23 QSMs whose canonical nodal quark-doublet curve consists of rational curves only. In order to fully appreciate the changes, we compare the refined results computed in this work with the results obtained with the old techniques <cit.>. We list the corresponding numbers in <ref>.
We must first recall how to read these tables. First, look at the line concerning Δ_134^∘, which lists the information for the F-theory QSM family B_3( Δ_4^∘ ). On the smooth, irreducible quark-doublet curve C_(3,2)_1/6 in this family, there are 12^8 root bundles of interest. We estimated their number of global sections from those of the 12^8 corresponding limit roots on the canonical nodal quark-doublet curve C^∙_(3,2)_1/6 in this family. In <cit.>, we found that (roughly) 99.9952% of these 12^8 limit root bundles have exactly three global sections. Consequently, there are (roughly) 0.0048% of the limit roots left. Our techniques in <cit.> would only bound the number of global sections of these 0.0048% of the limit root bundles from below by three. Consequently, this fraction of the limit roots is listed in the column labeled as ≥ 3.
At the bottom of <ref>, we list the refined results computed in this work. These numbers are rounded to four decimal places. Notice that the error margins are always smaller than 0.23%, which is a stark improvement in comparison to the results in <cit.> or even <cit.>.
By repeating the analysis in <cit.>, one finds that for Δ_134^∘, there is a unique setup that could not be sorted algorithmically. This configuration has the degree of the canonical bundle. By observing that the canonical bundle solves the root bundle constraint in question,
<cit.> concludes that this configuration does indeed correspond to the canonical bundle and admits four global sections. With the exception of Δ_8^∘, whose canonical nodal quark-doublet curve admits an elliptic component, one can repeat this argument for all remaining setups with K_B_3^3 = 6. This leads to the results listed in <ref>.
Crucially, note that this argument cannot be repeated for setups with K_B_3^3 = 10; essentially because we then have to solve a root bundle constraint which does not admit the canonical bundle as solution. Still, our results are optimal in a certain sense. The list of terminal graphs for which our improved computer scan <cit.> could only compute a lower bound on the number of global sections is as follows:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]0;
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 3] (-1.9*,-0.2*) edge (-1.9*,+0.2*);
[-,out = -150, in = 150, looseness = 3] (-2.1*,-0.2*) edge (-2.1*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]2;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 0, in = 180] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_red, scale=0.6]1;
at (+2*,0) [stuff_fill_red, scale=0.6]1;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 20, in = 160] (-2*,0) edge (+2*,0);
[-,out = -20, in = -160] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_red, scale=0.6]2;
at (+2*,0) [stuff_fill_red, scale=0.6]2;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (2*,0);
[-,out = -45, in = -135] (-2*,0) edge (2*,0);
[-,out = 0, in = 180] (2*,0) edge (6*,0);
[-,out = 45, in = 135] (6*,0) edge (10,0);
[-,out = -45, in = -135] (6*,0) edge (10,0);
[-,out = 90, in = 90, looseness = 0.4] (-2*,0) edge (10,0);
at (-2*,0) [stuff_fill_red, scale=0.6]1;
at (+2*,0) [stuff_fill_red, scale=0.6]1;
at (+6*,0) [stuff_fill_red, scale=0.6]1;
at (+10,0) [stuff_fill_red, scale=0.6]1;
We argued in <ref> that all of these terminal circuits are jumping. In other words, the number of global sections depends on the descent data. As our computer scan <cit.> is (at least as of now) not being provided with that information, it derives the best possible answer based on the combinatoric data provided.
§.§ Improved statistics for “elliptic” QSMs
Our advanced techniques help to improve the results for the QSMs whose canonical nodal quark-doublet curve contains an elliptic component. Since most of the remaining ignorance in the previous work <cit.> originates from trees with an elliptic component, we expect a massive improvement. As an example, let us look at Δ^∘_88. The Brill-Noether table obtained from application of our old techniques in <cit.> and its refined counterpart based on the techniques presented in this work are displayed in <ref>.
We first recall the notation in this table. We have analyzed 20^12 limit root bundles on the canonical quark-doublet curve C^∙_(3,2)_1/6 in the F-theory QSM-family B_3( Δ_88^∘ ). Those limit roots are described by line bundles on a (possibly nodal) curve that is obtained from blowing up 0 ≤ N ≤ 6 nodes of C^∙_(3,2)_1/6. For fixed N, only a fraction of the 20^12 limit roots are obtained. We have enumerated them all and made an effort to compute their number of global sections. For instance, in <cit.>, we found that for N = 0, there are 781680888 limit root bundles with exactly three global sections.[More precisely, there are actually 781680888× 20^5 such limit roots. However, all counts in this table are multiples of the geometric multiplicity μ = 20^5. For ease of presentation and in agreement with <cit.>, we omit the factor 20^5.] You find this number at the top-left of <ref>. Similarly, we encountered 62712 limit root bundles for N = 0, for which we could only compute a lower bound on the number of global sections in <cit.>. Namely, we found that the number of global sections is bounded below by three. Hence, you find this number 62712 at the top row with N = 0 and column with ≥ 3. We regard this number of limit root bundles for which we could not uniquely determine the number of global sections as an ignorance to our analysis. Finally, the row beginning by Σ lists the sum of all limit root bundles with 0 ≤ N ≤ 6 and certain (lower bound on the) number of global sections. For instance, there is a total of 31563200 limit root bundles with exactly four global sections in <cit.>.
Let us now compare this Brill-Noether table from <cit.> with the results obtained from our refined techniques. This improved answer is listed at the bottom of <ref>. Hence, with the techniques in <cit.> we found the lower bound three to the number of global sections of 282996720 limit root bundles, whereas this happened for merely 7713920 limit roots once our advanced techniques were employed. This is a drastic reduction of ignorance! It is enlightening to express the two Σ-rows in percentages of the 20^12 limit root bundles:
h^0 | = 3 | ≥ 3 | = 4 | ≥ 4 | = 5 | ≥ 5
Σ in <cit.> | 74.8970 | 22.1091 | 2.4659 | 0.5191 | 0.0083 | 0.0006
Σ in current work | 96.4034 | 0.6027 | 2.9784 | 0.0066 | 0.0089 | —
So the total ignorance dropped from about 22.7% to about 0.7%. Clearly, this is a stark improvement! Indeed, this finding applies more generally to the 10 families of elliptic F-theory QSMs that we analyze in this work. We provide the comparison between the results from <cit.> and the refined answers computed in this work in <ref>. While we definitely reduced the ignorance significantly, let us emphasize that these results are optimal in a certain sense. Namely, we can easily list all setups for which our computer algorithm could only compute a lower bound on the number of global sections. Then, we can analyze what it would take to turn these lower bounds into exact counts of global sections. For instance, the remaining computation for Δ_8^∘ concerns (a family of) line bundles of the canonical degree. In fact, we could argue implicitly that these must be the canonical bundle <cit.>. However, purely based on the combinatorial data that the computer uses, it is impossible to arrive at that conclusion. Rather, the descent data of the limit root bundles is needed. Hence, based on the information provided by the current computer scan, this result cannot be improved. Likewise, for the remaining QSM geometries in the above table, the remaining computations concern one or more of the following configurations:
[scale=0.6, baseline=(current bounding box.center)]
2.0;
at (-2*,0) [stuff_fill_green, scale=0.6]d = 0;
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_red, scale=0.6]d = 0;
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-2*,0) [stuff_fill_green, scale=0.6]d;
(d ∈{2,3,4})
[scale=0.6, baseline=(current bounding box.center)]
2.0;
[-,out = 0, in = 180] (-3*,0) edge (-2*,0);
[-,out = -30, in = 30, looseness = 5] (-2*,-0.2*) edge (-2*,+0.2*);
at (-3*,0) [stuff_fill_green, scale=0.6]d;
at (-2*,0) [stuff_fill_red, scale=0.6]1;
(d ∈{1,2,3})
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 0, in = 180] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_red, scale=0.6]1;
at (+2*,0) [stuff_fill_red, scale=0.6]1;
[scale=0.6, baseline=(current bounding box.center)]
1.0;
[-,out = 45, in = 135] (-2*,0) edge (+2*,0);
[-,out = 0, in = 180] (-2*,0) edge (+2*,0);
[-,out = -45, in = -135] (-2*,0) edge (+2*,0);
at (-2*,0) [stuff_fill_green, scale=0.6]d;
at (+2*,0) [stuff_fill_red, scale=0.6]1;
(d ∈{3,4})
[scale=0.6, baseline=(current bounding box.center)]
1.0;
(0,0) – (-2*,-2*);
(0,0) – (2*,-2*);
[-,out = 20, in = 160] (-2*,-2*) edge (2*,-2*);
[-,out = -20, in = -160] (-2*,-2*) edge (2*,-2*);
at (0,0) [stuff_fill_green, scale=0.6]2;
at (-2*,-2*) [stuff_fill_red, scale=0.6]1;
at (+2*,-2*) [stuff_fill_red, scale=0.6]1;
All of these are jumping terminal graphs, which were derived in <ref> and <ref>. To tell the number of global sections precisely, one needs the descent data and/or the exact line bundle divisor on the elliptic curve. So based on the currently available combinatoric data for our computer scan, the above results are fully optimized.
§.§ Absence of vector-like quark-doublets
Let us summarize our findings regarding the analyzed 33 families of F-theory QSMs:
Polytope K_B_3^3 P_(3) [%] N_roots N_FRST (lower bound) N_FRST (upper bound)
Δ_8^∘ 6 99.9421 12^8 3.867 × 10^13 2.828 × 10^16
Δ_4^∘ 6 99.9952 12^8 3.188 × 10^11
Δ_134^∘ 6 99.9952 12^8 7.538 × 10^10
Δ_128^∘, Δ_130^∘, Δ_136^∘, Δ_236^∘ 6 99.9952 12^8 3.217 × 10^11
Δ_88^∘ 10 96.4034 20^12 5.231 × 10^10 1.246 × 10^14
Δ_110^∘ 10 95.5794 20^12 5.239 × 10^8
Δ_272^∘, Δ_274^∘ 10 95.3336 20^12 3.212 × 10^12 2.481 × 10^15
Δ_387^∘ 10 95.1545 20^12 6.322 × 10^10 6.790 × 10^12
Δ_798^∘, Δ_808^∘, Δ_810^∘, Δ_812^∘ 10 93.8212 20^12 1.672 × 10^10 2.515 × 10^13
Δ_254^∘ 10 96.3942 20^12 1.568 × 10^10
Δ_52^∘ 10 96.0587 20^12 1.248 × 10^8
Δ_302^∘ 10 96.3960 20^12 5.750 × 10^7
Δ_786^∘ 10 95.0714 20^12 9.810 × 10^8
Δ_762^∘ 10 95.0167 20^12 1.087 × 10^11 2.854 × 10^13
Δ_417^∘ 10 95.0745 20^12 1.603 × 10^9
Δ_838^∘ 10 94.9092 20^12 4.461 × 10^9
Δ_782^∘ 10 94.9019 20^12 3.684 × 10^9
Δ_377^∘, Δ_499^∘, Δ_503^∘ 10 93.6500 20^12 4.461 × 10^9
Δ_1348^∘ 10 93.7075 20^12 4.285 × 10^9
Δ_882^∘, Δ_856^∘ 10 93.6546 20^12 3.180 × 10^9
Δ_1340^∘ 10 92.2989 20^12 4.496 × 10^9
Δ_1879^∘ 10 92.3015 20^12 4.461 × 10^9
Δ_1384^∘ 10 90.8524 20^12 7.040 × 10^9
Recall that these geometries are in one-to-one correspondence with fine, regular, star triangulations (FRSTs) of reflexive 3-dimensional polytopes. Some of these polytopes are very complicated and the number of triangulations can only be estimated <cit.>. In the last two columns, we have listed the known lower and upper bounds on the number of FRSTs. If the exact number of FRSTs is known, we have not listed an upper bound, which we hope makes the table easier to read.
Let us analyze these numbers. A natural question is to consider the union of all 33 families of F-theory QSMs and then to ask how likely it is to find no vector-like exotics on the quark-doublet curve. In other words, if we throw a dart at the board of all these QSM families, how likely is it that we hit a configuration without vector-like exotics on the quark-doublet curve? To this end, we focus on the quantity
P = ( ∑_i ∈ I P^(i)_(3)· N^(i)_roots· N^(i)_FRST ) / ( ∑_i ∈ I N^(i)_roots· N^(i)_FRST ) ,
where I is the indexing set of the polytopes in the above table. The complicating fact is that we cannot uniquely tell the number of FRSTs for 10 polytopes. Rather, for these polytopes we only know a lower and an upper bound on the number of triangulations <cit.>. So we minimize this quantity over the domain of the admissible triangulations. This task can easily be completed e.g. with <cit.> [One can also work out this result with the Karush–Kuhn–Tucker conditions <cit.>, which generalize the method of Lagrange multipliers. The key step is to employ the complementary slackness conditions.]. We find that the likelihood for exactly three global sections on the quark doublet curve is at least 93.91%. This minimum percentage is attained by maximizing the number of FRSTs for Δ^∘_798 and minimizing the number of all other FRSTs.
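To make this optimization concrete, the following toy sketch (our own illustrative code, restricted to three families from the table above rather than all 33) minimizes P over the box of admissible FRST counts by enumerating its corners; the quoted bound of 93.91% comes from the full computation over all families.

from itertools import product

data = [  # (P_(3) as a fraction, N_roots, N_FRST lower bound, N_FRST upper bound)
    (0.999952, 12**8,  3.188e11, 3.188e11),   # Delta_4 (exact FRST count)
    (0.964034, 20**12, 5.231e10, 1.246e14),   # Delta_88
    (0.938212, 20**12, 1.672e10, 2.515e13),   # Delta_798
]

def P(n_frst):
    # P is the root-count weighted average of the P_(3) values
    weights = [r * n for (_, r, _, _), n in zip(data, n_frst)]
    return sum(p * w for (p, _, _, _), w in zip(data, weights)) / sum(weights)

# P is monotone in each FRST count, so the minimum over the box is attained at a corner.
corners = product(*[(lo, hi) for (_, _, lo, hi) in data])
print(min(P(c) for c in corners))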
In principle one could be tempted to repeat this analysis to also find an upper bound. Such a naive approach would use the sum of the percentage of roots with exactly three sections and that of roots for which we could only bound the number of global sections below by three. The outcome of this computation is then 96.96% – attained by maximizing the number of FRSTs for Δ^∘_8, Δ^∘_88 and minimizing the number of all other FRSTs. Note however, that we actually want to count global sections of root bundles on the physical quark-doublet curve and not the nodal curves, for which we execute the computations presented in this work. Along the smoothing C^∙_(3,2)_1/6→ C_(3,2)_1/6 we may lose vector-like pairs/global sections. Indeed, we cannot rule out the possibility that all root bundles may drop down to having exactly three global sections after this smoothing. This effect is not taken into account by the naive upper bound 96.96%, which must therefore be discarded.
A complementary answer is obtained by estimating the chances for exactly one vector-like exotic on the nodal quark-doublet curve C^∙_(3,2)_1/6. It is important to recall that we want to estimate this number on the original smooth matter curves. Due to the deformation process from the nodal to the smooth, irreducible curves, we can merely provide an upper bound from our estimates on the nodal curve. The latter is readily obtained with e.g. <cit.>, namely the chances for this are at most 5.17%. This maximal value is obtained for a maximal number of FRSTs for Δ_798^∘ and minimal number of FRSTs for all other polytopes.
§ SUMMARY AND OUTLOOK
The geometries of the Quadrillion F-theory Standard Models (QSMs) <cit.> are defined by toric 3-folds, which arise from fine, regular, star triangulations of certain polytopes in the Kreuzer-Skarke list of 3-dimensional reflexive polytopes <cit.>. The vector-like spectra in QSM geometries are given by the cohomologies of certain roots P_𝐑 of (twists of certain powers of) the canonical bundle on the matter curve C_𝐑 <cit.>. These roots are not unique, and it is unclear which ones come from F-theory gauge potentials in the Deligne cohomology. Investigating this matter is currently beyond our abilities, so <cit.> analyzed all mathematically possible root bundles, including those that could be induced from the G_4-flux. They conducted a local and bottom-up analysis, dealing with one matter curve at a time and neglecting any correlations among the vector-like spectra on different matter curves.
Counting root bundles on C_𝐑 becomes easier when the matter curve is deformed into a nodal curve. Constructing root bundles on smooth curves is challenging, but on nodal curves, it amounts to using the combinatorics of limit roots <cit.>. Additionally, for each family B_3(Δ^∘) of F-theory QSMs, there is a canonical nodal curve C_𝐑^∙ that depends solely on Δ^∘ <cit.>. Therefore, the vector-like spectra for the entire family B_3(Δ^∘) can be estimated with a single calculation by studying the limit roots on this nodal curve C_𝐑^∙.
After deformation, we turn to counting limit root bundles with a specific number of global sections. This was the main goal of <cit.>, which focused on the family B_3(Δ_4^∘) of 𝒪(10^11) F-theory QSMs. The majority of the limit roots were defined over tree-like rational curves, and their line bundle cohomologies were computed using an algorithm that simplified a line bundle on a tree-like nodal curve without changing the number of global sections. The remaining limit roots were defined over circuit-like curves, and their cohomologies were counted manually. Fortunately, there were only a few of them, namely five for the B_3(Δ_4^∘) family, because the involved nodal curves had relatively small Betti numbers and high root indices. On four of the curves, the line bundles were found to have exactly three sections. On the remaining curve, labeled as a jumping circuit in <cit.>, the line bundle was identified as the canonical bundle, which has as many sections as the arithmetic genus, namely four in this case. This is why at least 99.995% of limit root bundles on the nodal curves C_(3, 2)_1/6^∙, C_(3, 1)_-2/3^∙, and C_(1, 1)_1^∙ have exactly three global sections within the F-theory QSMs of the family B_3(Δ_4^∘).
This paper seeks to upgrade the results of <cit.> in two ways. First, it develops a systematic algorithm that determines line bundle cohomologies on circuit-like curves with rational and elliptic components. Previously, this was done manually in <cit.>. We present two techniques that simplify the dual graphs of nodal curves: the removal of subtrees in <ref> and the removal of interior edges in <ref>. These simplifications result in a finite number of so-called terminal circuits, which are graphs that cannot be further reduced. The terminal graphs were classified in <ref>, and their line bundle cohomologies were examined case-by-case. The relevant computations can be found in <ref> as well as <ref>, <ref>. Like in <cit.>, some circuits were found to be jumping depending on the descent data, and only a lower bound on the number of global sections can be obtained in these cases.
With this established algorithm, this paper seeks to move beyond the family B_3(Δ_4^∘) and widen the scope of analysis. The simplification algorithm was applied to 33 distinct QSM families to assess the probability of no vector-like exotics. The improved statistics can be found in <ref>, where <ref> and <ref> compare our current findings with those obtained using old techniques from <cit.>. As expected, this algorithm improved our ability to conclusively determine line bundle cohomology and helped decrease the statistical ignorance from previous papers. For instance, the total ignorance for elliptic circuits within the family B_3( Δ_88^∘ ) dropped from 22.7% to 0.7%, a massive upgrade. These results are optimal in the sense that pure combinatorics alone will not offer any more improvement. Further progress requires deeper knowledge of the descent data and the line bundle divisor on elliptic curves.
In <ref>, we discovered that the probability of having precisely three global sections on the quark doublet curve is at least 93.91%. Although this value is lower than the previous probability of 99.995% mentioned in <cit.>, there is a crucial distinction. In our analysis, we encompass multiple QSM families, whereas <cit.> only examined a single QSM family. Hence, our current estimation of 93.91% provides a more comprehensive representation of the likelihood of lacking vector-like exotics on the quark-doublet curve across the majority of QSM setups. It is important to emphasize that 93.91% serves as a lower bound. For certain QSM families, the exact number of geometries remains unknown. Stated differently, for some of the 3-dimensional reflexive polytopes in the Kreuzer-Skarke list <cit.> the exact number of fine regular star triangulations is unknown. Only lower and upper bounds for the number of fine regular star triangulations are known <cit.>. We minimized the probability of the absence of vector-like exotics within the domain formed from these lower and upper bounds on triangulations. The stated minimum of 93.91% is attained by taking the minimal number of FRSTs for polytopes with a high probability of lacking vector-like exotics and the maximal number of FRSTs for polytopes with a (relatively) low probability of lacking vector-like exotics. The latter specifically pertains to the B_3( Δ_272^∘ ) and B_3( Δ_274^∘ ) families.
It should be noted that only 33 families of QSMs were studied, leaving 675 families unexamined. While each of the remaining families comes with only a few triangulations, the number of roots grows exponentially. The number of roots is given by
( 2 K_B_3^3 )^{2g} , where g = 2 + K_B_3^3 / 2 is the genus of the quark-doublet curve .
Hence, the cases K_B_3^3 = 18 and K_B_3^3 = 30, which have not been covered in this paper, come with 36^10 and 60^16 roots, respectively. These numbers are huge compared to the setups in our analysis. However, since these cases did not satisfy our statistically rooted top-down condition h^2,1(Y_4) ≥ g, where g is the genus of C_(3,2)_1/6 <cit.>, they were discarded.[4 setups were discarded even though they satisfied h^2,1(Y_4) ≥ g, because their canonical nodal quark-doublet curve C^∙_(3,2)_1/6 has an irreducible component of genus strictly larger than 1.] Nevertheless, based on the number of (root bundle) configurations that they provide and the fact that large K_B_3^3 corresponds to small h^1,1( Y_4 ), which is needed for "realistic volumes" of the gauge divisors V(s_3), V(s_9) in stretched Kähler cones <cit.>, one should eventually study these setups as well.
This work also does not address the fundamental question of understanding which root bundles are induced by F-theory gauge potentials in the Deligne cohomology. We refer the reader to <cit.> to see how the G_4-flux quantization condition discussed in <cit.> gives rise to root bundles. One possible scenario is for all physically allowed gauge potentials to induce only a subset of limit roots on one matter curve. Another scenario is for the fluxes to induce a specific root on one matter curve but induce a few roots on another. To study this complex question, one can begin by setting G_4 = 0 and studying the origins of the root bundles in Deligne cohomology. In this setting, the root bundles are spin bundles, and so we seek to answer which spin bundles on the matter curve are compatible with the geometry of the F-theory compactification. For instance, an important condition from the physics is the cancellation of the Freed-Witten anomaly <cit.>. A complete answer to all of this requires detailed investigation, similar to the steps outlined in <cit.>. One could also investigate the relationship between these spin bundles and the appearance of one half of the second Chern class (of the tangent bundle of Y_4) in the G_4-flux quantization condition <cit.>. Although our findings state that at least 93.91% of limit roots have exactly three global sections, we cannot get a definitive answer regarding the absence of vector-like exotics until this matter is resolved. A detailed examination of such top-down conditions is planned for future research.
We eventually want to study the absence of vector-like exotics on the Higgs curve. However, this involves overcoming some computational challenges. Due to the Higgs curve having high genus and a large number of components, we will end up with a rather complicated curve even after simplification. The root bundle constraint is also different, and we will have to analyze a much larger number of roots. Nevertheless, our newfound techniques may uncover some structure that is not apparent yet. A more general extension could focus on other Standard Model setups, such as those considered in <cit.>, and investigate these constructions with the techniques presented in this work. We reserve these studies for future work.
This paper, and in particular <ref> and <ref>, should be seen as a computational approximation to the Brill-Noether theory of root bundles on nodal curves. Such a theory is underdeveloped in the mathematical literature, and these arithmetic perspectives may encourage the development of more robust theorems in the future. For instance, one may gain more insight by studying the descent data underlying limit root bundles and its behavior under the deformations discussed in <cit.>. As a common technique in Brill-Noether theory, the application of limit linear series to limit root bundles is also compelling <cit.>. It would be interesting to compare the phenomena observed in our investigation to classical Brill-Noether theory; in particular, some circuits exhibit jumping behavior, which resembles Brill-Noether jumps on smooth irreducible curves.
To conclude, one potential application of machine learning is to detect patterns linking the dual graphs to the corresponding Brill-Noether approximations. Previously, for many line bundles on nodal curves with relatively simple dual graphs, our computed Brill-Noether numbers carried a high ignorance in the number of global sections, so the data was insufficient to learn conclusive results from. Now that we have improved our computational capabilities, we anticipate that most of this ignorance can be removed and that, based on this refined data, a much better trained algorithm can be obtained. Such an algorithm could then be used either to estimate the number of global sections on the Higgs curve or in an attempt to predict a mathematical theorem underlying the Brill-Noether theory of root bundles on nodal curves.
§.§ Acknowledgements
The computer algebra system was used for key computations in this work, and the reliable computations conducted on the supercomputer allowed us to count limit root bundles; we truly appreciate this. M. B. expresses gratitude towards his colleague Tobias Metzlaff for valuable discussions. The work of M. B. was supported by the SFB-TRR 195 Symbolic Tools in Mathematics and their Application of the German Research Foundation (DFG) and the Forschungsinitiative des Landes Rheinland-Pfalz through SymbTools – Symbolic Tools in Mathematics and their Application. The work of M.C. is supported by DOE Award DE-SC0013528Y and the Simons Foundation Collaboration grant #724069 on “Special Holonomy in Geometry, Analysis and Physics”. M.C. thanks the Slovenian Research Agency and the Fay R. and Eugene L. Langberg Chair for support. R.D. and M.O. are partially supported by the NSF grant DMS 2001673 and by the Simons Foundation Collaboration grant #390287 on “Homological Mirror Symmetry”. M.O. is grateful for the support by the Ph.D. Presidential Fellowship research fund.
§ JUMPS ON TERMINAL GRAPHS
§.§ Rational circuits
Consider a rational circuit Γ. By definition, this is a terminal graph whose components are all rational. In addition, by our prescription, for each rational component/vertex V_i of Γ, there is an integer d_i which encodes a line bundle L_i = 𝒪_V_i( d_i ) ≅𝒪_ℙ^1( d_i ). Altogether, this defines a family ℱ of line bundles on the nodal curve C^∙ whose dual graph is Γ, such that for every L ∈ℱ and every vertex V_i of Γ it holds
. L |_V_i = L_i = 𝒪_V_i( d_i ) ≅𝒪_ℙ^1( d_i ) .
We say that Γ is non-jumping if h^0( C^∙, L ) is the same for all L ∈ℱ. Otherwise, we say that Γ is jumping.
G_2 is jumping iff 0 ≤ d ≤ 2:
[Dual graph G_2: a single rational vertex of degree d with two self-edges.]
If d < 0, then h^0( G_2, L ) = 0 for all L ∈ℱ. Certainly, G_2 is jumping for d = 0: if the descent data is λ_1 = λ_2 = 1, then h^0( G_2, L ) = 1, and otherwise h^0( G_2, L ) = 0. The interesting case to discuss is d > 0. We first fix notation for the vertex V_1 = ℙ^1. By an SL(2, ℂ) transformation, we position the two preimages of the node corresponding to the left self-edge at [1:0] and [0:1]. The preimages for the right self-edge are taken as [1:1] and [p:1], where p ∈ℂ∖{0,1}. We can express the most general φ∈ H^0( V_1, . L |_V_1) as follows:
φ = ∑_i = 0^dα_i x^i y ^d-i .
We impose two gluing conditions:
α_0 = λ_1 ·α_d , ( λ_1 + 1 ) α_d + ∑_i = 1^d-1α_i = λ_2 ( λ_1 + p^d ) ·α_d + λ_2 ∑_i = 1^d-1α_i · p^i .
Equivalently, we can write these conditions as
α_0 = λ_1 ·α_d , α_d ·[ λ_1 + 1 - λ_1 λ_2 - λ_2 p^d ] = ∑_i = 1^d-1α_i ·( λ_2 p^i - 1 ) .
If d = 1, then these conditions become
α_0 = λ_1 ·α_1 , α_1 ·[ λ_1 + 1 - λ_1 λ_2 - λ_2 p ] = 0 .
Take λ_1 = 2, λ_2 = 3 and p = -1, then those conditions are satisfied for all α_1 ∈ℂ and h^0( G_2, L ) = 1. However, for generic λ_1, λ_2 ∈ℂ^∗ and p ∈ℂ∖{0,1}, we find h^0( G_2, L ) = 0. Hence, G_2 is jumping for d = 1. Next, take d = 2. Then the gluing conditions are
α_0 = λ_1 ·α_2 , α_2 ·[ λ_1 + 1 - λ_1 λ_2 - λ_2 p^2 ] = α_1 ·( λ_2 p - 1 ) .
Let us take λ_1 = 1/2, λ_2 = 2 and p = 1/2. Then, these conditions are satisfied for all α_1, α_2 ∈ℂ and so h^0( G_2, L ) = 2. However, for generic λ_1, λ_2 ∈ℂ^∗ and p ∈ℂ∖{ 0, 1 }, we have two non-trivial conditions and so h^0( G_2, L ) = 1. Hence, G_2 is jumping for d = 2. Finally, focus on d > 2. Then the gluing conditions are
α_0 = λ_1 ·α_d ,
α_d ·[ λ_1 + 1 - λ_1 λ_2 - λ_2 p^d ] = α_1 ( λ_2 p - 1 ) + α_2 ( λ_2 p^2 - 1 ) + ∑_i = 3^d-1α_i ·( λ_2 p^i - 1 ) .
Let us assume that λ_2 p - 1 ≠ 0. Then, these conditions are equivalent to
α_0 = λ_1 ·α_d ,
α_1 = α_d ·[ λ_1 + 1 - λ_1 λ_2 - λ_2 p^d ] - α_2 ( λ_2 p^2 - 1 ) - ∑_i = 3^d-1α_i ·( λ_2 p^i - 1 )/λ_2 p - 1 .
Hence, we then find h^0( G_2, L ) = d - 1. Conversely, assume that λ_2 p - 1 = 0, i.e. p = 1/λ_2. Recall that we must have p ∈ℂ∖{ 0, 1 }, so λ_2 ≠ 1. In particular 1/λ_2 - 1 ≠ 0. Under these assumptions, the above conditions become
α_0 = λ_1 ·α_d ,
α_d ·[ λ_1 + 1 - λ_1 λ_2 - λ_2^-d + 1] = α_2 ( 1/λ_2 - 1 ) + ∑_i = 3^d-1α_i ·( λ_2^-i + 1 - 1 ) .
Equivalently, we can write
α_0 = λ_1 ·α_d ,
α_2 = α_d ·[ λ_1 + 1 - λ_1 λ_2 - λ_2^-d + 1] - ∑_i = 3^d-1α_i ·( λ_2^-i + 1 - 1 )/1/λ_2 - 1 .
And so, again we find h^0( G_2, L ) = d - 1. Hence, h^0( G_2, L ) = d - 1 for all L ∈ℱ for d > 2, i.e. G_2 is non-jumping for d > 2.
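The case analysis above can also be cross-checked numerically: the gluing conditions form a linear system in α_0, …, α_d whose kernel dimension equals h^0( G_2, L ). The following sketch (not part of the original derivation; the helper h0_G2 is ours) reproduces the jump at d = 2 and its absence at d = 3 for the parameter choices quoted above.

```python
# Cross-check of the G_2 analysis: assemble the two gluing conditions as a
# linear system in the coefficients (called α_0, ..., α_d above) and compute
# h^0 as the dimension of its kernel.
from sympy import Rational, symbols, linear_eq_to_matrix

def h0_G2(d, lam1, lam2, p):
    a = list(symbols(f"a0:{d + 1}"))               # a_i plays the role of α_i
    eqs = [a[0] - lam1 * a[d],                     # left self-edge
           sum(a) - lam2 * sum(a[i] * p**i for i in range(d + 1))]  # right self-edge
    M, _ = linear_eq_to_matrix(eqs, a)
    return len(a) - M.rank()

print(h0_G2(2, Rational(1, 2), 2, Rational(1, 2)))            # -> 2 (jump at d = 2)
print(h0_G2(2, 3, 5, 7))                                      # -> 1 (generic value d - 1)
print(h0_G2(3, Rational(1, 2), 2, Rational(1, 2)), h0_G2(3, 3, 5, 7))  # -> 2 2 (no jump)
```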
G_3 is jumping iff 0 ≤ d_α, d_β≤ 1:
[Dual graph G_3: two rational vertices of degrees d_α and d_β joined by three edges.]
First, if d_α, d_β < 0, then h^0( G_3, L ) = 0 for all L ∈ℱ. So G_3 is non-jumping for d_α, d_β < 0.
Before we proceed, let us fix some notation. We use homogeneous coordinates [x:y] for V_1 = ℙ^1_α and [u:v] for V_2 = ℙ^1_β. In terms of these, we can express the most general section φ_α∈ H^0( V_1, . L |_V_1) and φ_β∈ H^0( V_2, . L |_V_2) as follows:
φ_α = ∑_i = 0^d_αa_i x^i y^d_α -i , φ_β = ∑_i = 0^d_βb_i u^i v^d_β -i .
By an SL(2, ℂ) transformation, we can assume that the nodes are positioned at n_1 = [1:0], n_2 = [0:1] and n_3 = [1:1], with descent data λ_1 and λ_0 along n_1 and n_2; the descent datum along n_3 can be normalized to 1 by rescaling the trivialization on one component. First, let us study situations with d_α < 0 and d_β≥ 0. Then the gluing conditions are as follows:
0 = λ_1 b_d_β , 0 = λ_0 b_0 , 0 = ∑_i = 0^d_βb_i .
Consequently, h^0( G_3, L ) = max{ d_β - 2, 0} for all L ∈ℱ, i.e. G_3 is non-jumping. Next, focus on d_α, d_β≥ 0. Then, the gluing conditions are as follows:
a_d_α = λ_1 b_d_β , a_0 = λ_0 b_0 , ( λ_1 - 1 ) b_d_β + ( λ_0 - 1 ) b_0 = ∑_i = 1^d_β - 1b_i - ∑_i = 1^d_α - 1a_i .
The first two conditions are never trivial. However, the third condition is trivial iff λ_0 = λ_1 = 1 and 0 ≤ d_α, d_β≤ 1. Consequently, G_3 is jumping iff 0 ≤ d_α, d_β≤ 1.
G_4 is jumping iff one of the following applies:
* d_α < 0 and d_β = 1,
* d_α = 0 and d_β≥ 0,
* d_α = 1 and d_β < 0,
* d_α≥ 0 and d_β = 0,
* d_α = d_β = 1.
[Dual graph G_4: two rational vertices of degrees d_α and d_β, each carrying a self-edge, joined by one edge.]
Certainly, for d_α, d_β < 0 we have h^0( G_4, L ) = 0 for all L ∈ℱ. So for both degrees negative, G_4 is non-jumping.
We use homogeneous coordinates [x:y] on V_1 = ℙ^1_α and [u:v] on V_2 = ℙ^1_β. Fix the node positions to [1:0], [0:1] and [1:1] by an SL(2, ℂ) transformation; the node linking ℙ^1_α and ℙ^1_β has homogeneous coordinates [1:1] on both V_1 and V_2. With this notation, we can express φ_α∈ H^0 ( V_1, . L |_V_1) and φ_β∈ H^0 ( V_2, . L |_V_2) as follows:
φ_α = ∑_i = 0^d_αa_i x^i y^d_α -i , φ_β = ∑_i = 0^d_βb_i u^i v^d_β -i .
On the self-edges, we enforce descent data λ_α and λ_β. We begin by analyzing mixed signs for the line bundle degrees.
* d_α < 0 and d_β = 0:
Then, the gluing conditions are:
0 = λ_α· 0 , b_0 = λ_β· b_0 , 0 = b_0 .
So h^0( G_4, L ) = 0 for all L ∈ℱ, i.e. G_4 is non-jumping.
* d_α < 0 and d_β = 1:
Then, the gluing conditions are:
0 = λ_α· 0 , b_0 = λ_β· b_1 , 0 = b_1 ·( 1 + λ_β) .
Hence, if λ_β = -1, then h^0( G_4, L ) = 1. However, for λ_β≠ -1, we have h^0( G_4, L ) = 0. So G_4 is jumping for d_α < 0 and d_β = 1.
* d_α < 0 and d_β > 1:
Then, the gluing conditions are:
0 = λ_α· 0 , b_0 = λ_β· b_d_β , b_1 = - b_d_β·( 1 + λ_β) - ∑_i = 2^d_β - 1b_i .
So in this case, we have h^0( G_4, L ) = d_β - 1 for all L ∈ℱ and G_4 is non-jumping.
Next, we focus on situations with d_α = 0 and d_β≥ 0.
* d_α = d_β = 0:
Then the gluing conditions are as follows:
a_0 = λ_α a_0 , b_0 = λ_β b_0 , a_0 = b_0 .
For generic λ_α, λ_β∈ℂ^∗, the only solution to these equations is a_0 = b_0 = 0 and hence h^0( G_4, L ) = 0. However, if λ_α = λ_β = 1, then a_0 = b_0 solves these conditions and h^0( G_4, L ) = 1. Hence, G_4 is jumping for d_α = d_β = 0.
* d_α = 0 and d_β > 0:
Then, the gluing conditions are as follows:
a_0 = λ_α a_0 , b_0 = λ_β b_d_β , b_d_β·( 1 + λ_β) = a_0 - ∑_i = 1^d_β - 1b_i .
For generic λ_α and λ_β, we must have a_0 = 0 and 1 + λ_β≠ 0. This then means that the only solution to these conditions is then given by
a_0 = 0 , b_0 = λ_β b_d_β , b_d_β = - ∑_i = 1^d_β - 1b_i/1 + λ_β .
So then h^0( G_4, L ) = d_β - 1. However, if we assume that λ_α = 1 and λ_β≠ -1, then any a_0 ∈ℂ solves the first condition. In addition, we find
b_0 = λ_β b_d_β , b_d_β = a_0 - ∑_i = 1^d_β - 1b_i/1 + λ_β .
So then h^0( G_4, L ) = d_β and G_4 is jumping for d_α = 0 and d_β > 0.
It remains to study situations with d_α, d_β≥ 1.
* d_α = d_β = 1:
Then the gluing conditions are
a_0 = λ_α a_1 , b_0 = λ_β b_1 , a_1 ( 1 + λ_α) = b_1 ( 1 + λ_β) .
If λ_α = λ_β = -1, then the third condition is trivial and one has h^0( G_4, L ) = 2. However, if ( λ_α, λ_β) ≠( -1,-1 ), then h^0( G_4, L ) = 1. So G_4 is jumping for d_α = d_β = 1.
* d_α = 1 and d_β > 1:
Then the gluing conditions are
a_0 = λ_α a_1 , b_0 = λ_β b_d_β , a_1 ( 1 + λ_α) = b_d_β( 1 + λ_β) + b_1 + ∑_i = 2^d_β - 1b_i .
Hence, h^0( G_4, L ) = d_α + d_β - 1 for all L ∈ℱ and G_4 is non-jumping.
* d_α, d_β > 1:
Then the gluing conditions are
a_0 = λ_α a_d_α , b_0 = λ_β b_d_β , a_d_α( 1 + λ_α) + a_1 + ∑_i = 2^d_α - 1a_i = b_d_β( 1 + λ_β) + ∑_i = 1^d_β - 1b_i .
Hence, h^0( G_4, L ) = d_α + d_β - 1 for all L ∈ℱ and G_4 is non-jumping.
This completes the argument.
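As for G_2, the case analysis can be cross-checked by computing kernel dimensions of the gluing conditions. The sketch below is again only illustrative (the helper h0_G4 and the sample descent data are ours); it uses the convention from above, with the descent datum along the connecting edge normalized to 1.

```python
# Cross-check of the G_4 analysis: one self-edge on each rational component
# and one connecting node at [1:1], whose descent datum is normalized to 1.
from sympy import symbols, linear_eq_to_matrix

def h0_G4(da, db, lam_a, lam_b):
    a = list(symbols(f"a0:{da + 1}"))
    b = list(symbols(f"b0:{db + 1}"))
    eqs = [a[0] - lam_a * a[da],      # self-edge on P^1_alpha
           b[0] - lam_b * b[db],      # self-edge on P^1_beta
           sum(a) - sum(b)]           # connecting node
    M, _ = linear_eq_to_matrix(eqs, a + b)
    return len(a) + len(b) - M.rank()

print(h0_G4(1, 1, -1, -1))                      # -> 2 (jump for d_alpha = d_beta = 1)
print(h0_G4(1, 1, 2, 3))                        # -> 1 (generic value)
print(h0_G4(1, 2, -1, -1), h0_G4(1, 2, 2, 3))   # -> 2 2 (no jump for d_alpha = 1, d_beta = 2)
```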
G_5 is jumping iff 0 ≤ d_α, d_β≤ 2:
[Dual graph G_5: two rational vertices of degrees d_α and d_β joined by four edges.]
Certainly, for d_α, d_β < 0, G_5 is non-jumping. Before we proceed, let us fix notation. For V_1 = ℙ^1_α we use the homogeneous coordinates [x:y] and for V_2 = ℙ^1_β we use the homogeneous coordinates [u:v]. Thereby, we can write φ_α∈ H^0( V_1, . L |_V_1 ) and φ_β∈ H^0( V_2, . L |_V_2 ) as follows:
φ_α = ∑_i = 0^d_αa_i x^i y^d_α -i , φ_β = ∑_i = 0^d_βb_i u^i v^d_β -i .
By an SL(2, ℂ) transformation, we position the first three nodes at n_1 = [1:0], n_2 = [0:1] and n_3 = [1:1] – with descent data λ_1, λ_0 and λ_2, respectively. The position of the fourth node is not fixed by an SL(2, ℂ) transformation. We parametrize the position of this fourth node on V_1 = ℙ^1_α as [A:1] and on V_2 = ℙ^1_β as [B:1], where A, B ∈ℂ∖{ 0, 1 }; its descent datum is normalized to 1 by rescaling the trivialization on one component.
Let us begin by looking at d_α < 0 and d_β≥ 0. Then the gluing conditions are as follows:
0 = λ_1 · b_d_β , 0 = λ_0 · b_0 , 0 = λ_2 ·∑_i = 1^d_β - 1b_i , 0 = ∑_i = 1^d_β - 1b_i · B^i .
So then h^0( G_5, L ) = max{ 0, d_β - 3 } for all L ∈ℱ, i.e. G_5 is non-jumping. It remains to discuss d_α, d_β≥ 0. We do so by discussing several cases:
* d_α = d_β = 0:
Then the gluing conditions are as follows:
a_0 = λ_1 · b_0 , a_0 = λ_0 · b_0 , a_0 = λ_2 · b_0 , a_0 = b_0 .
So h^0( G_5, L ) = 1 iff λ_0 = λ_1 = λ_2 = 1 and otherwise h^0( G_5, L ) = 0. Hence, G_5 is jumping for d_α = d_β = 0.
* d_α = 0 and d_β = 1:
Then the gluing conditions are as follows:
a_0 = λ_1 · b_1 , a_0 = λ_0 · b_0 , a_0 = λ_2 ·( b_0 + b_1 ) , a_0 = b_0 + b_1 · B .
Equivalently, we have
b_1 = a_0/λ_1 , b_0 = a_0/λ_0 , 0 = a_0 ·( λ_2/λ_0 + λ_2/λ_1 - 1 ) , 0 = a_0 ·( 1/λ_0 + B/λ_1 - 1 ) .
Take B = 2, λ_0 = 2, λ_1 = 4 and λ_2 = 4/3. Then the last two conditions are trivial and h^0( G_5, L ) = 1. However, for generic B, λ_0, λ_1 and λ_2, we have h^0( G_5, L ) = 0. So G_5 is jumping for d_α = 0 and d_β = 1.
* d_α = d_β = 1:
Then the gluing conditions are as follows:
a_1 = λ_1 · b_1 , a_0 = λ_0 · b_0 , a_0 + a_1 = λ_2 ·( b_0 + b_1 ) , a_0 + a_1 A = b_0 + b_1 B .
Equivalently, we have
b_1 = a_1/λ_1 , b_0 = a_0/λ_0 , 0 = a_0 ·( λ_2/λ_0 - 1 ) + a_1 ·( λ_2 /λ_1 - 1 ) ,
0 = a_0 ·( 1/λ_0 - 1 ) + a_1 ·( B/λ_1 - A ) .
Take λ_0 = λ_1 = λ_2 = 1 and A = B = 2. Then the last two conditions are trivial and h^0( G_5, L ) = 2. But for generic λ_0, λ_1, λ_2, A and B, we have h^0( G_5, L ) < 2. So G_5 is jumping for d_α = d_β = 1.
* d_α = 0 and d_β = 2:
Then, the gluing conditions become
a_0 = λ_1 · b_2 , a_0 = λ_0 · b_0 , a_0 = λ_2 ·( b_0 + b_1 + b_2 ) ,
a_0 = b_0 + b_1 B + b_2 B^2 .
Equivalently, we can write
b_0 = a_0/λ_0 , b_2 = a_0/λ_1 , b_1 = a_0 ·( 1/λ_2 - 1/λ_0 - 1/λ_1) ,
0 = a_0 ·( B/λ_2 + 1-B/λ_0 + B ( B - 1 )/λ_1 - 1 ) .
Take B = 2, λ_0 = 1 and λ_1 = λ_2 = 2. Then these conditions are satisfied for all a_0 ∈ℂ and so h^0( G_5, L ) = 1. However, for generic λ_0, λ_1, λ_2 and B, the last condition is only satisfied when a_0 = 0 and so h^0( G_5, L ) = 0. Consequently, G_5 is jumping for d_α = 0 and d_β = 2 (see also the numerical sketch at the end of this subsection).
* d_α = 1 and d_β = 2:
Then, the gluing conditions are
a_1 = λ_1 · b_2 , a_0 = λ_0 · b_0 , b_0 ·( λ_0 - λ_2 ) + b_2 ·( λ_1 - λ_2 ) = λ_2 · b_1 ,
b_0 ( λ_0 - 1 ) + b_2 ( λ_1 A - B^2 ) = B · b_1 .
Equivalently, we have
a_1 = λ_1 · b_2 , a_0 = λ_0 · b_0 ,
b_1 = b_0 ·( λ_0 - λ_2 ) + b_2 ·( λ_1 - λ_2 )/λ_2 ,
0 = b_0 [ λ_2 ( λ_0 - 1 ) + B ( λ_2 - λ_0 ) ] + b_2 [ A λ_1 λ_2 + B ( λ_2 - λ_1 ) - B^2 λ_2 ] .
Take λ_0 = λ_1 = λ_2 = 1, A = 4 and B = 2. Then the last condition is trivially satisfied for all b_0, b_2 ∈ℂ. And so h^0( G_5, L ) = 2. However, for generic λ_0, λ_1, λ_2, A and B we find h^0( G_5, L ) = 1 and so G_5 is jumping for d_α = 1 and d_β = 2.
* d_α = d_β = 2:
Then, the gluing conditions are
a_2 = λ_1 · b_2 , a_0 = λ_0 · b_0 , b_0 ·( λ_0 - λ_2 ) + b_2 ·( λ_1 - λ_2 ) = λ_2 · b_1 - a_1 ,
b_0 ( λ_0 - 1 ) + b_2 ( λ_1 A^2 - B^2 ) = b_1 B - a_1 A .
Equivalently, we can write
a_2 = λ_1 · b_2 , a_0 = λ_0 · b_0 ,
a_1 = λ_2 · b_1 - b_0 ·( λ_0 - λ_2 ) - b_2 ·( λ_1 - λ_2 ) ,
0 = b_0 ·[ A ( λ_2 - λ_0 ) + λ_0 - 1 ] + b_1 ·[ A λ_2 - B ] + b_2 ·[ λ_1 A^2 - B^2 + A ( λ_2 - λ_1 ) ] .
Take λ_0 = λ_1 = λ_2 = 1 and A = B = 2. Then the last condition is trivial and we have h^0( G_5, L ) = 3. However, for generic λ_0, λ_1, λ_2, A and B, the last condition is non-trivial and we have h^0( G_5, L ) = 2. Hence, G_5 is jumping for d_α = d_β = 2.
* d_α, d_β > 2:
Then the four conditions become
a_d_α = λ_1 · b_d_β ,
a_0 = λ_0 b_0 ,
a_1 = - b_0 ·( λ_0 - λ_2 ) - b_d_β·( λ_1 - λ_2 ) - ∑_i = 2^d_α - 1a_i + λ_2 ·∑_i = 1^d_β - 1b_i ,
= - ∑_i = 2^d_α - 1a_i + Γ( 𝐛, λ_0, λ_1, λ_2 ) ,
a_1 A + ∑_i = 2^d_α - 1a_i A^i = ∑_i = 1^d_β - 1b_i B^i - b_0 ( λ_0 - 1 ) - b_d_β( λ_1 A^d_α - B^d_β) .
By plugging the third into the fourth condition we find
∑_i = 2^d_α - 1a_i ( A^i - A ) = ∑_i = 1^d_β - 1b_i B^i - b_0 ( λ_0 - 1 ) - b_d_β( λ_1 A^d_α - B^d_β) - A ·Γ( 𝐛, λ_0, λ_1, λ_2 ) ,
= Γ_2 ( A, B, 𝐛, λ_0, λ_1, λ_2 )
Since A ∈ℂ∖{0, 1 }, this is equivalent to
a_2 = Γ_2 ( A, B, 𝐛, λ_0, λ_1, λ_2 ) - ∑_i = 3^d_α - 1a_i ( A^i - A )/A^2 - A .
Hence, G_5 is non-jumping for 2 < d_α, d_β.
This completes the argument.
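The same numerical cross-check applies to G_5. The sketch below (illustrative only; the helper h0_G5 and the sample parameters are ours) reproduces the jumps at (d_α, d_β) = (0, 2) and (2, 2) as well as the corresponding generic values.

```python
# Cross-check of the G_5 analysis: nodes n_1 = [1:0], n_2 = [0:1], n_3 = [1:1]
# with descent data lam1, lam0, lam2, and a fourth node at [A:1] ~ [B:1] whose
# descent datum is normalized to 1.
from sympy import symbols, linear_eq_to_matrix

def h0_G5(da, db, lam0, lam1, lam2, A, B):
    a = list(symbols(f"a0:{da + 1}"))
    b = list(symbols(f"b0:{db + 1}"))
    eqs = [a[da] - lam1 * b[db],                                   # node n_1
           a[0] - lam0 * b[0],                                     # node n_2
           sum(a) - lam2 * sum(b),                                 # node n_3
           sum(a[i] * A**i for i in range(da + 1))
           - sum(b[i] * B**i for i in range(db + 1))]              # fourth node
    M, _ = linear_eq_to_matrix(eqs, a + b)
    return len(a) + len(b) - M.rank()

print(h0_G5(0, 2, 1, 2, 2, 3, 2))    # -> 1 (jump for d_alpha = 0, d_beta = 2)
print(h0_G5(0, 2, 3, 5, 7, 3, 2))    # -> 0 (generic value)
print(h0_G5(2, 2, 1, 1, 1, 2, 2))    # -> 3 (jump for d_alpha = d_beta = 2)
print(h0_G5(2, 2, 3, 5, 7, 4, 2))    # -> 2 (generic value)
```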
§.§ Elliptic circuits
G_6 is jumping for d = 0:
[Dual graph G_6: a single elliptic vertex of degree d = 0 with no edges.]
This is well-known for line bundles on elliptic curves: a line bundle of degree zero on an elliptic curve has h^0 = 1 if it is trivial and h^0 = 0 otherwise.
The following configuration G_7 is jumping for d ≥ 1:
[Dual graph G_7: a single elliptic vertex of degree d with one self-edge.]
Let us denote the two preimages of the node on E as p, q ∈ E. Assume that . L |_E = 𝒪_E ( (d+2) · r - p - q ) with r ∈ E - { p, q }. Then it holds
H^0( E, . L |_E ) = { f : E →ℂ rational , div(f) ≥ - (d+2) · r + p + q } .
Consequently, any φ∈ H^0( E, . L |_E ) vanishes at the points p and q. Therefore, the gluing condition is trivially satisfied and we find h^0( G_7, L ) = d > d - 1.
G_8 is jumping for d ≥ 1:
[Dual graph G_8: an elliptic vertex of degree d joined by one edge to a rational vertex of degree 1 that carries a self-edge.]
Choose homogeneous coordinates [x : y] for the ℙ^1 and descent data λ for its self-edge, and fix the positions of the two preimages of the node encoded by this self-edge to [1 : 0] and [0 : 1] by an SL(2, ℂ ) transformation. The most general section on this ℙ^1 which satisfies the gluing condition along the self-edge is
φ( [x : y] ) = αλ x + α y , α∈ℂ arbitrary but fixed.
Next, take the node connecting the ℙ^1 to the elliptic curve E to be [1 : 1], denote the corresponding point on E by p, and let the section on E be ψ. Then the final condition to be imposed is
ψ( p ) = α·( λ + 1 ) .
Let us assume that λ = -1 and that
. L |_E = 𝒪_E ( (d+1) · r - p ) , r ∈ E - { p } .
Then ψ(p) = 0 and the condition ψ( p ) = α·( λ + 1 ) is satisfied for all α∈ℂ. Consequently, we then find h^0( G_8, L ) = d + 1 > ( d + 2 ) - 2. Hence, G_8 is jumping for d ≥ 1.
G_9 is jumping for d ≥ 1:
[Dual graph G_9: an elliptic vertex of degree d joined to a rational vertex of degree 1 by three edges.]
We take coordinates [x : y] for the ℙ^1, write the most general section on it as φ = α x + β y, and place the nodes on this curve at [1:0], [0:1] and [1:1]. We label the corresponding points on E as p_1, p_2 and p_3. Hence, the gluing conditions to be imposed are
ψ(p_1) = λ_1 α ,
ψ(p_2) = λ_2 β ,
ψ(p_3) = α + β .
Since λ_1, λ_2 ∈ℂ^∗, we always have
ψ(p_1)/λ_1 = α , ψ(p_2)/λ_2 = β .
This uniquely determines the section on the ℙ^1. It remains to satisfy the third condition with the sections on E:
ψ(p_3) = ψ(p_1)/λ_1 + ψ(p_2)/λ_2 .
Let us assume that
. L |_E = 𝒪_E ( (d+3) · r - p_1 - p_2 - p_3 ) , r ∈ E - { p_1, p_2, p_3 } .
Then for every ψ∈ H^0( E, . L |_E ) it holds ψ(p_1) = ψ(p_2) = ψ(p_3) = 0. Consequently, the condition in <ref> is trivially satisfied and we find h^0( G_9, L ) = d > ( d + 2 ) - 3.
G_10 is jumping for d ≥ 1:
[Dual graph G_10: an elliptic vertex of degree d joined by one edge to each of two rational vertices of degree 1, which are themselves joined by two edges.]
We pick coordinates [x:y] and [u:v] for the two ℙ^1s and fix the positions of the two nodes connecting them to [1:0] and [0:1], respectively. Take the descent data along these two nodes to be λ_1 and λ_2. Let the most general section on the bottom-left ℙ^1 be
φ_a = α_1 x + α_2 y .
Then the section on the bottom-right ℙ^1 is uniquely fixed to be
φ_b = λ_1 α_1 u + λ_2 α_2 v .
The nodes connecting the two ℙ^1s to the elliptic curve can be taken to be at [1:1] on each ℙ^1. We denote the corresponding points on E as p and q. Let the most general section on E be ψ. Then we impose the following conditions:
ψ(p) = α_1 + α_2 , ψ(q) = λ_1 α_1 + λ_2 α_2 .
Equivalently, we have
α_2 = ψ(p) - α_1 , ψ(q) - λ_2 ψ(p) = ( λ_1 - λ_2 ) α_1 .
Let us focus on λ_1 = λ_2 and
. L |_E = 𝒪_E ( (d+2) · r - p - q ) , r ∈ E - { p, q } .
Then the condition ψ(q) - λ_2 ψ(p) = ( λ_1 - λ_2 ) α_1 is trivially satisfied and we find h^0( G_10, L ) = d + 1 > ( d + 2 + 2 ) - 4. This shows that G_10 is jumping for d ≥ 1.